Vision and Feature Norms: Improving automatic feature norm learning through cross-modal maps

Published version
Peer-reviewed

Type

Conference Object

Authors

Bulat, L 
Kiela, D 
Clark, S 

Abstract

Property norms have the potential to aid a wide range of semantic tasks, provided that they can be obtained for large numbers of concepts. Recent work has focused on text as the main source of information for automatic property extraction. In this paper we examine property norm prediction from visual, rather than textual, data, using cross-modal maps learnt between property norm and visual spaces. We also investigate the importance of having a complete feature norm dataset, for both training and testing. Finally, we evaluate how these datasets and cross-modal maps can be used in an image retrieval task.
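The cross-modal maps described in the abstract are typically linear maps learnt from a visual feature space into the property-norm space. As a hedged sketch (not the authors' exact method), such a map can be fit with closed-form ridge regression; the dimensions, regularisation constant, and data below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for real datasets: one visual vector and one
# property-norm vector per concept (dimensions are illustrative).
n_concepts, d_visual, d_norms = 200, 50, 30
X = rng.normal(size=(n_concepts, d_visual))          # visual features
W_true = rng.normal(size=(d_visual, d_norms))        # hidden generating map
Y = X @ W_true + 0.01 * rng.normal(size=(n_concepts, d_norms))  # norms

def fit_ridge_map(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Learn the cross-modal map, then project visual vectors into the
# property-norm space; nearest neighbours of a projected vector can
# then be used for tasks such as image retrieval.
W = fit_ridge_map(X, Y)
Y_pred = X @ W
```

A linear map is a common baseline choice in cross-modal work because it is cheap to fit and easy to analyse; nonlinear regressors can be substituted without changing the surrounding pipeline.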

Journal Title

Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Conference Name

2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Publisher

Association for Computational Linguistics

Sponsorship

LB is supported by an EPSRC Doctoral Training Grant. DK is supported by EPSRC grant EP/I037512/1. SC is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1.