Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex
Frontiers in Computational Neuroscience
Thwaites, A., Nimmo-Smith, I., Fonteneau, E., Patterson, R., Buttery, P., & Marslen-Wilson, W. (2015). Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex. Frontiers in Computational Neuroscience, 9. https://doi.org/10.3389/fncom.2015.00005
A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down STS toward the temporal pole.
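The search procedure described in the abstract — testing, at a range of lags, how well a model-predicted stimulus contour accounts for the recorded neural signal — can be illustrated with a minimal sketch. This is a hypothetical, simplified lag-correlation demonstration on a single synthetic channel, not the authors' actual EMEG source-space analysis; the sampling rate, contour, and delay values here are invented for illustration only:

```python
import numpy as np

def lagged_correlation(stimulus, neural, max_lag_samples):
    """Pearson correlation between a predicted stimulus contour and a
    neural time series, at each lag by which the neural signal may
    trail the stimulus. Returns (lags, correlations)."""
    lags = np.arange(max_lag_samples + 1)
    corrs = np.empty(len(lags))
    for i, lag in enumerate(lags):
        s = stimulus[: len(stimulus) - lag]   # stimulus segment
        n = neural[lag : lag + len(s)]        # neural segment, shifted by lag
        corrs[i] = np.corrcoef(s, n)[0, 1]
    return lags, corrs

# Synthetic demo: the "neural" signal is the stimulus contour delayed
# by 80 samples (80 ms at an assumed 1 kHz sampling rate) plus noise.
rng = np.random.default_rng(0)
fs = 1000                                     # sampling rate (Hz), assumed
stim = rng.standard_normal(2000)
delay = 80                                    # true lag in samples
neural = np.concatenate([np.zeros(delay), stim])[: len(stim)]
neural += 0.5 * rng.standard_normal(len(stim))

lags, corrs = lagged_correlation(stim, neural, max_lag_samples=200)
best = lags[np.argmax(corrs)]
print(f"best lag: {best} samples ({best * 1000 // fs} ms)")
```

In this toy setting the correlation peaks at the built-in 80-sample delay, analogous to the ~80 ms auditory-cortex lag reported in the abstract. The published study evaluates model fit across thousands of source-space vertices with appropriate statistical correction, which this single-channel sketch omits.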
neural computation, magnetoencephalography, MNE source space, speech envelope, fundamental frequency contour, information encoding, model expression
This work was supported by an EPSRC grant to William D. Marslen-Wilson and Paula Buttery (EP/F030061/1), an ERC Advanced Grant (Neurolex) to William D. Marslen-Wilson, and by MRC Cognition and Brain Sciences Unit (CBU) funding to William D. Marslen-Wilson (U.1055.04.002.00001.01). Computing resources were provided by the MRC-CBU and the University of Cambridge High Performance Computing Service (http://www.hpc.cam.ac.uk/). Andrew Liu and Phil Woodland helped with the HTK speech recogniser and Russell Thompson with the Matlab code. We thank Asaf Bachrach, Cai Wingfield, Isma Zulfiqar, Alex Woolgar, Jonathan Peelle, Li Su, Caroline Whiting, Olaf Hauk, Matt Davis, Niko Kriegeskorte, Paul Wright, Lorraine Tyler, Rhodri Cusack, Brian Moore, Brian Glasberg, Rik Henson, Howard Bowman, Hideki Kawahara, and Matti Stenroos for invaluable support and suggestions.
European Research Council (230570)
External DOI: https://doi.org/10.3389/fncom.2015.00005
This record's URL: https://www.repository.cam.ac.uk/handle/1810/247226
Attribution 2.0 UK: England & Wales
Licence URL: http://creativecommons.org/licenses/by/2.0/uk/