A deep hierarchy of predictions enables online meaning extraction in a computational model of human speech comprehension.

Published version
Peer-reviewed

Type

Article

Authors

MacGregor, Lucy J 
Olasagasti, Itsaso 
Giraud, Anne-Lise 

Abstract

Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit contextual knowledge to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing by minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.

Keywords

Humans, Comprehension, Speech, Speech Perception, Brain, Language

Journal Title

PLoS Biol

Journal ISSN

1544-9173
1545-7885

Volume Title

21

Publisher

Public Library of Science (PLoS)

Sponsorship

MRC (MC_UU_00030/6)