Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns

Published version
Peer-reviewed

Type

Article

Authors

Anderson, AJ 
Kiela, D 
Clark, SC 
Poesio, M 

Abstract

Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skip-gram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based model for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.

Journal Title

Transactions of the Association for Computational Linguistics

Volume

5

Publisher

Association for Computational Linguistics

Sponsorship

Stephen Clark is supported by ERC Starting Grant DisCoTex (306920).