Auxiliary Objectives for Neural Error Detection Models
Accepted version
Peer-reviewed
Authors
Rei, M
Giannakoudaki, E
Abstract
We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters as that baseline, the additional objectives allow it to be optimised more efficiently and achieve better performance.
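The joint learning idea described in the abstract, combining a main error-detection cost with a weighted auxiliary cost over parallel labels, can be sketched in miniature as below. This is a generic illustration, not the authors' implementation: the token distributions, the POS auxiliary task, and the `aux_weight` value are all hypothetical.

```python
import math

def cross_entropy(probs, gold):
    # mean negative log-likelihood of the gold labels per token
    return -sum(math.log(p[g]) for p, g in zip(probs, gold)) / len(gold)

# hypothetical per-token distributions from a shared encoder
error_probs = [[0.9, 0.1], [0.2, 0.8]]            # main task: correct vs. error
pos_probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]    # auxiliary task: POS tags
error_gold, pos_gold = [0, 1], [0, 1]             # parallel label sequences

aux_weight = 0.1  # hypothetical weighting of the auxiliary cost
joint_loss = (cross_entropy(error_probs, error_gold)
              + aux_weight * cross_entropy(pos_probs, pos_gold))
```

The auxiliary term only shapes the shared representation during training; at test time the auxiliary output layer can be discarded, which is why the deployed model needs no extra parameters.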
Conference Name
Workshop on Innovative Use of NLP for Building Educational Applications
Sponsorship
Cambridge Assessment (unknown)