
dc.contributor.author: Thwaites, Andrew
dc.contributor.author: Nimmo-Smith, Ian
dc.contributor.author: Fonteneau, Elisabeth
dc.contributor.author: Patterson, Roy
dc.contributor.author: Buttery, Paula
dc.contributor.author: Marslen-Wilson, William
dc.identifier.citation: Frontiers in Computational Neuroscience, 9:5, 10 February 2015 | doi: 10.3389/fncom.2015.00005
dc.description.abstract: A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down STS toward the temporal pole.
dc.description.sponsorship: This work was supported by an EPSRC grant to William D. Marslen-Wilson and Paula Buttery (EP/F030061/1), an ERC Advanced Grant (Neurolex) to William D. Marslen-Wilson, and by MRC Cognition and Brain Sciences Unit (CBU) funding to William D. Marslen-Wilson (U.1055.04.002.00001.01). Computing resources were provided by the MRC-CBU and the University of Cambridge High Performance Computing Service. Andrew Liu and Phil Woodland helped with the HTK speech recogniser, and Russell Thompson with the Matlab code. We thank Asaf Bachrach, Cai Wingfield, Isma Zulfiqar, Alex Woolgar, Jonathan Peelle, Li Su, Caroline Whiting, Olaf Hauk, Matt Davis, Niko Kriegeskorte, Paul Wright, Lorraine Tyler, Rhodri Cusack, Brian Moore, Brian Glasberg, Rik Henson, Howard Bowman, Hideki Kawahara, and Matti Stenroos for invaluable support and suggestions.
dc.rights: Attribution 2.0 UK: England & Wales
dc.subject: neural computation
dc.subject: MNE source space
dc.subject: speech envelope
dc.subject: fundamental frequency contour
dc.subject: information encoding
dc.subject: model expression
dc.title: Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex
dc.description.version: This is the final published version. The article was originally published in Frontiers in Computational Neuroscience, 10 February 2015 | doi: 10.3389/fncom.2015.00005
prism.publicationName: Frontiers in Computational Neuroscience
dc.contributor.orcid: Thwaites, Andrew [0000-0002-6237-7140]
dc.contributor.orcid: Marslen-Wilson, William [0000-0003-0690-6308]
rioxxterms.type: Journal Article/Review
pubs.funder-project-id: EPSRC (EP/F030061/1)
pubs.funder-project-id: European Research Council (230570)
pubs.funder-project-id: MRC (MC_U105580454)

