XFlow: Cross-Modal Deep Neural Networks for Audiovisual Classification.
Publication Date
2019-11-08
Journal Title
IEEE Trans Neural Netw Learn Syst
ISSN
2162-237X
Publisher
IEEE
Language
eng
Type
Article
This Version
AM
Citation
Cangea, C., Velickovic, P., & Lio, P. (2019). XFlow: Cross-Modal Deep Neural Networks for Audiovisual Classification. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2019.2945992
Abstract
In recent years, there have been numerous developments toward solving multimodal tasks, aiming to learn a stronger representation than is possible through a single modality. Certain aspects of the data can be particularly useful in this case--for example, correlations in the space or time domain across modalities--but should be wisely exploited in order to benefit from their full predictive potential. We propose two deep learning architectures with multimodal cross connections that allow for dataflow between several feature extractors (XFlow). Our models derive more interpretable features and achieve better performance than models that do not exchange representations, usefully exploiting correlations between audio and visual data, which have different dimensionality and are nontrivially exchangeable. This article improves on existing multimodal deep learning algorithms in two essential ways: 1) it presents a novel method for performing cross modality (before features are learned from individual modalities) and 2) it extends the previously proposed cross connections, which only transfer information between streams that process compatible data. Illustrating some of the representations learned by the connections, we analyze their contribution to the increase in discrimination ability and reveal their compatibility with a lip-reading network's intermediate representation. We provide the research community with Digits, a new data set consisting of three data types extracted from videos of people saying the digits 0-9. Results show that both cross-modal architectures outperform their baselines (by up to 11.5%) when evaluated on the AVletters, CUAVE, and Digits data sets, achieving state-of-the-art results.
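The cross connections described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the projection matrices, ReLU nonlinearity, and concatenation scheme are assumptions chosen to show the general idea of exchanging features between two unimodal streams of different dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_connect(visual_feat, audio_feat, w_v2a, w_a2v):
    """Exchange information between two unimodal feature streams.

    visual_feat: (V,) flattened visual feature map
    audio_feat:  (A,) audio feature vector
    w_v2a: (V, A) matrix projecting visual features into the audio space
    w_a2v: (A, V) matrix projecting audio features into the visual space
    """
    # Project each stream into the other's feature space (with ReLU)
    v_to_a = np.maximum(visual_feat @ w_v2a, 0)
    a_to_v = np.maximum(audio_feat @ w_a2v, 0)
    # Each stream continues with its own features plus the cross-modal ones
    visual_out = np.concatenate([visual_feat, a_to_v])
    audio_out = np.concatenate([audio_feat, v_to_a])
    return visual_out, audio_out

# Toy example: an 8x8x4 visual feature map (flattened) and 32-dim audio features
visual = rng.standard_normal(8 * 8 * 4)   # V = 256
audio = rng.standard_normal(32)           # A = 32
w_v2a = rng.standard_normal((256, 32)) * 0.1
w_a2v = rng.standard_normal((32, 256)) * 0.1

v_out, a_out = cross_connect(visual, audio, w_v2a, w_a2v)
print(v_out.shape, a_out.shape)  # (512,) (64,)
```

The key point the sketch illustrates is that the two modalities need not share a dimensionality: each projection maps into the other stream's space before the features are merged, so information flows in both directions while each extractor keeps processing its own modality.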
Sponsorship
NERC (2221169)
European Commission Horizon 2020 (H2020) Industrial Leadership (IL) (634821)
Identifiers
External DOI: https://doi.org/10.1109/TNNLS.2019.2945992
This record's URL: https://www.repository.cam.ac.uk/handle/1810/287785
Rights
Licence:
http://www.rioxx.net/licenses/all-rights-reserved