Comparing Data Sources and Architectures for Deep Visual Representation Learning in Semantics
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Empirical Methods in Natural Language Processing Conference (EMNLP 2016)
Association for Computational Linguistics
Kiela, D., Verő, A., & Clark, S. (2016). Comparing Data Sources and Architectures for Deep Visual Representation Learning in Semantics. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 447–456). Association for Computational Linguistics. https://doi.org/10.18653/v1/D16-1043
Multi-modal distributional models learn grounded representations for improved performance in semantics. Deep visual representations, learned using convolutional neural networks, have been shown to achieve particularly high performance. In this study, we systematically compare deep visual representation learning techniques, experimenting with three well-known network architectures. In addition, we explore the various data sources that can be used for retrieving relevant images, showing that images from search engines perform as well as, or better than, those from manually crafted resources such as ImageNet. Furthermore, we explore the optimal number of images and the multi-lingual applicability of multi-modal semantics. We hope that these findings can serve as a guide for future research in the field.
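As a concrete illustration of the pipeline the abstract describes, the sketch below shows one common way to build such multi-modal representations: extract deep visual features for a word's retrieved images with a pretrained CNN, aggregate them, and concatenate the result with a textual vector. This is a minimal sketch under stated assumptions, not the authors' implementation; the choice of AlexNet, the use of torchvision, and all helper names here are illustrative.

```python
# Minimal sketch (not the paper's code) of a standard multi-modal pipeline:
# CNN features per image, mean-pooled per word, fused with a textual vector.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Pretrained CNN; AlexNet is an illustrative choice among the architectures
# one might compare. Features are taken from the penultimate (fc7) layer.
cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
cnn.eval()
# Drop the final classification layer so the forward pass returns 4096-d features.
cnn.classifier = torch.nn.Sequential(*list(cnn.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def visual_vector(image_paths):
    """Mean-pool CNN features over the images retrieved for one word."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(cnn(img).squeeze(0).numpy())
    return np.mean(feats, axis=0)

def multimodal_vector(text_vec, image_paths):
    """L2-normalize each modality, then concatenate (a common fusion scheme)."""
    v = visual_vector(image_paths)
    t = np.asarray(text_vec, dtype=np.float32)
    return np.concatenate([t / np.linalg.norm(t), v / np.linalg.norm(v)])
```

In this setup, varying the CNN (the feature extractor), the image source feeding `image_paths`, and the number of images pooled per word corresponds to the experimental dimensions the abstract mentions.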
Anita Verő is supported by the Nuance Foundation Grant: Learning Type-Driven Distributed Representations of Language. Stephen Clark is supported by the ERC Starting Grant: DisCoTex (306920).
External DOI: https://doi.org/10.18653/v1/D16-1043
This record's URL: https://www.repository.cam.ac.uk/handle/1810/263697