Multi-Modal Representations for Improved Bilingual Lexicon Learning

Published version
Peer-reviewed

Type

Conference Object

Authors

Vulić, I 
Kiela, D 
Clark, S 
Moens, MF 

Abstract

Recent work has revealed the potential of using visual representations for bilingual lexicon learning (BLL). Such image-based BLL methods, however, still fall short of linguistic approaches. In this paper, we propose a simple yet effective multi-modal approach that learns bilingual semantic representations that fuse linguistic and visual input. These new bilingual multi-modal embeddings show significant performance gains on the BLL task for three language pairs on two benchmarking test sets, outperforming linguistic-only BLL models based on three different types of state-of-the-art bilingual word embeddings, as well as visual-only BLL models.
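
The fusion idea the abstract describes can be pictured with a minimal sketch (Python/NumPy): L2-normalise a word's linguistic and visual embeddings, combine them by weighted concatenation, and rank translation candidates by cosine similarity in the fused space. This is an illustrative reading of the approach under those assumptions, not the authors' released code; fuse_embeddings, rank_translations, and the mixing weight alpha are hypothetical names.

    import numpy as np

    def fuse_embeddings(ling_vec, vis_vec, alpha=0.5):
        # Weighted concatenation of the L2-normalised linguistic and
        # visual vectors; alpha is an illustrative mixing weight, not
        # a value taken from the paper.
        ling = ling_vec / np.linalg.norm(ling_vec)
        vis = vis_vec / np.linalg.norm(vis_vec)
        return np.concatenate([(1.0 - alpha) * ling, alpha * vis])

    def rank_translations(query_vec, target_embeddings):
        # Rank target-language words by cosine similarity between the
        # fused query embedding and each fused target embedding.
        q = query_vec / np.linalg.norm(query_vec)
        scores = {w: float(q @ (v / np.linalg.norm(v)))
                  for w, v in target_embeddings.items()}
        return sorted(scores, key=scores.get, reverse=True)

Because both halves are normalised before concatenation, alpha directly controls how much the visual signal contributes to the cosine comparison relative to the linguistic one.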

Journal Title

Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics

Conference Name

The 54th Annual Meeting of the Association for Computational Linguistics

Publisher

Association for Computational Linguistics

Sponsorship

This work is supported by ERC Consolidator Grant LEXICAL (648909) and KU Leuven Grant PDMK/14/117. SC is supported by ERC Starting Grant DisCoTex (306920).