Multi-Modal Representations for Improved Bilingual Lexicon Learning
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics
Association for Computational Linguistics
Vulić, I., Kiela, D., Clark, S., & Moens, M.-F. (2016). Multi-Modal Representations for Improved Bilingual Lexicon Learning. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 188-194. http://acl2016.org/
Recent work has revealed the potential of using visual representations for bilingual lexicon learning (BLL). Such image-based BLL methods, however, still fall short of linguistic approaches. In this paper, we propose a simple yet effective multi-modal approach that learns bilingual semantic representations by fusing linguistic and visual input. These new bilingual multi-modal embeddings yield significant performance gains on the BLL task for three language pairs on two benchmark test sets, outperforming both linguistic-only BLL models built on three different types of state-of-the-art bilingual word embeddings and visual-only BLL models.
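To make the fusion idea concrete, the following is a minimal illustrative sketch of one common way to combine modalities: L2-normalise the linguistic and visual embedding of a word, then concatenate them with a modality weight, and rank translation candidates by cosine similarity in the fused space. The function names, the weighted-concatenation scheme, and the toy vectors are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def fuse(linguistic_vec, visual_vec, weight=0.5):
    """Fuse one word's linguistic and visual embeddings into a single
    multi-modal vector: L2-normalise each modality, scale by a modality
    weight, and concatenate. (Illustrative sketch only; the paper's
    fusion method may differ.)"""
    l = linguistic_vec / np.linalg.norm(linguistic_vec)
    v = visual_vec / np.linalg.norm(visual_vec)
    return np.concatenate([weight * l, (1.0 - weight) * v])

def nearest_translation(query_vec, candidate_vecs):
    """Return the target-language candidate whose fused embedding has
    the highest cosine similarity to the fused query embedding.
    (Hypothetical helper for the BLL retrieval step.)"""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidate_vecs, key=lambda w: cos(query_vec, candidate_vecs[w]))
```

A usage example: fusing a source word's two modalities and retrieving its translation from a small candidate dictionary amounts to `nearest_translation(fuse(lv, vv), candidates)`, where `candidates` maps target words to their own fused vectors.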
This work is supported by ERC Consolidator Grant LEXICAL (648909) and KU Leuven Grant PDMK/14/117. SC is supported by ERC Starting Grant DisCoTex (306920).
This record's URL: https://www.repository.cam.ac.uk/handle/1810/267177