Sequence classification with human attention
Accepted version
Peer-reviewed
Authors
Barrett, M
Bingel, J
Hollenstein, N
Rei, M
Søgaard, A
Abstract
Learning attention functions requires large volumes of data, but many NLP tasks simulate human behavior, and in this paper, we show that human attention really does provide a good inductive bias on many attention functions in NLP. Specifically, we use estimated human attention derived from eye-tracking corpora to regularize attention functions in recurrent neural networks. We show substantial improvements across a range of tasks, including sentiment analysis, grammatical error detection, and detection of abusive language.
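The core idea in the abstract, using human attention as an inductive bias, can be sketched as an auxiliary regularization term that pulls the model's attention distribution toward a human attention distribution estimated from eye-tracking fixations. The sketch below is illustrative, not the paper's implementation: the function names, the squared-error penalty, and the weight `lam` are assumptions for demonstration.

```python
import numpy as np

def softmax(scores):
    """Turn unnormalized attention scores into a distribution."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def joint_loss(scores, human_attention, task_loss, lam=0.1):
    """Task loss plus a human-attention regularizer (illustrative sketch).

    scores          : unnormalized model attention scores over the tokens
                      of one sentence.
    human_attention : normalized fixation durations from an eye-tracking
                      corpus, a distribution over the same tokens.
    lam             : weight of the auxiliary term (value chosen here
                      for illustration only).
    """
    model_attention = softmax(scores)
    # Squared-error penalty encouraging the model to attend
    # to the tokens that human readers fixate on.
    reg = np.mean((model_attention - human_attention) ** 2)
    return task_loss + lam * reg
```

When the model's attention already matches the human distribution, the penalty vanishes and the joint loss reduces to the task loss alone.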
Journal Title
CoNLL 2018 - 22nd Conference on Computational Natural Language Learning, Proceedings
Conference Name
Proceedings of the 22nd Conference on Computational Natural Language Learning
Publisher
Association for Computational Linguistics
Sponsorship
Cambridge Assessment