Multi-Modal Representations for Improved Bilingual Lexicon Learning
Authors
Vulić, I.
Kiela, D.
Clark, S.
Moens, M.-F.
Publication Date
2016-08-13
Journal Title
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics
Conference Name
The 54th Annual Meeting of the Association for Computational Linguistics
ISBN
9781510827592
Publisher
Association for Computational Linguistics
Pages
188-194
Type
Conference Object
This Version
VoR (Version of Record)
Citation
Vulić, I., Kiela, D., Clark, S., & Moens, M. (2016). Multi-Modal Representations for Improved Bilingual Lexicon Learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 188-194. https://doi.org/10.18653/v1/p16-2031
Abstract
Recent work has revealed the potential of using visual representations for bilingual lexicon learning (BLL). Such image-based BLL methods, however, still fall short of linguistic approaches. In this paper, we propose a simple yet effective multi-modal approach that learns bilingual semantic representations by fusing linguistic and visual input. These new bilingual multi-modal embeddings yield significant performance gains on the BLL task for three language pairs on two benchmark test sets, outperforming both linguistic-only BLL models based on three different types of state-of-the-art bilingual word embeddings and visual-only BLL models.
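To illustrate the kind of fusion the abstract describes, the Python sketch below combines a linguistic word vector and a visual feature vector by weighted concatenation of L2-normalized vectors, then ranks translation candidates by cosine similarity. This is a minimal sketch under stated assumptions, not the paper's actual implementation: the function names, the dimensionalities, and the weight parameter are all illustrative.

import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length; leave all-zero vectors unchanged.
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse(linguistic_vec, visual_vec, weight=0.5):
    # Weighted concatenation of the normalized linguistic and visual vectors.
    # 'weight' trades off the two modalities (an illustrative hyperparameter,
    # not a value taken from the paper).
    return np.concatenate([
        weight * l2_normalize(linguistic_vec),
        (1.0 - weight) * l2_normalize(visual_vec),
    ])

def best_translation(source_vec, target_vocab):
    # Rank target-language candidates by cosine similarity to the fused
    # source vector and return the top-ranked word.
    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(target_vocab.items(), key=lambda kv: cosine(source_vec, kv[1]))[0]

# Toy usage: random vectors stand in for real word embeddings (e.g. 300-d)
# and CNN image features (e.g. 4096-d); the Italian words are placeholders.
rng = np.random.default_rng(0)
en_cat = fuse(rng.standard_normal(300), rng.standard_normal(4096))
it_vocab = {w: fuse(rng.standard_normal(300), rng.standard_normal(4096))
            for w in ["gatto", "cane"]}
print(best_translation(en_cat, it_vocab))

Concatenating normalized per-modality vectors keeps either modality from dominating the similarity score purely through vector magnitude, which is one common way such multi-modal representations are combined.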
Sponsorship
This work is supported by ERC Consolidator Grant LEXICAL (648909) and KU Leuven Grant PDMK/14/117. SC is supported by ERC Starting Grant DisCoTex (306920).
Identifiers
External DOI: https://doi.org/10.18653/v1/p16-2031
This record's URL: https://www.repository.cam.ac.uk/handle/1810/267177
Rights
Licence:
http://www.rioxx.net/licenses/all-rights-reserved