Auxiliary Objectives for Neural Error Detection Models
Publication Date
2017-09-08
Conference Name
Workshop on Innovative Use of NLP for Building Educational Applications
Language
English
Type
Conference Object
This Version
AM (Accepted Manuscript)
Citation
Rei, M., & Yannakoudakis, H. (2017). Auxiliary Objectives for Neural Error Detection Models. Workshop on Innovative Use of NLP for Building Educational Applications. https://doi.org/10.17863/CAM.21370
Abstract
We investigate the utility of different auxiliary objectives and training strategies within a neural sequence labeling approach to error detection in learner writing. Auxiliary costs provide the model with additional linguistic information, allowing it to learn general-purpose compositional features that can then be exploited for other objectives. Our experiments show that a joint learning approach trained with parallel labels on in-domain data improves performance over the previous best error detection system. While the resulting model has the same number of parameters, the additional objectives allow it to be optimised more efficiently and achieve better performance.
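The joint learning setup the abstract describes, a primary per-token error-detection loss combined with weighted auxiliary costs over parallel labels, can be sketched minimally as below. The toy token distributions, the single auxiliary task, and the `aux_weight` scaling factor are illustrative assumptions, not the paper's exact architecture or hyperparameters:

```python
import math

def cross_entropy(pred_probs, gold_index):
    # Negative log-likelihood of the gold label under the
    # model's predicted distribution for one token.
    return -math.log(pred_probs[gold_index])

def joint_loss(main_terms, aux_terms, aux_weight=0.1):
    """Combine the primary error-detection loss with a weighted auxiliary loss.

    main_terms / aux_terms: lists of (predicted_probs, gold_index) pairs,
    one per token, for the main task and an auxiliary task (e.g. parallel
    POS or error-type labels; hypothetical example tasks here).
    aux_weight scales the auxiliary objective relative to the main one.
    """
    main = sum(cross_entropy(p, g) for p, g in main_terms)
    aux = sum(cross_entropy(p, g) for p, g in aux_terms)
    return main + aux_weight * aux

# Toy example: two tokens, binary error labels plus one auxiliary label each.
main_terms = [([0.7, 0.3], 0), ([0.2, 0.8], 1)]
aux_terms = [([0.6, 0.4], 0), ([0.5, 0.5], 1)]
loss = joint_loss(main_terms, aux_terms, aux_weight=0.5)
```

Because the auxiliary terms only add extra loss components during training, the model keeps the same parameter count at test time, which matches the abstract's claim that the gains come from more efficient optimisation rather than added capacity.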
Sponsorship
Cambridge Assessment (unknown)
Identifiers
This record's DOI: https://doi.org/10.17863/CAM.21370
This record's URL: https://www.repository.cam.ac.uk/handle/1810/294967
Rights
Licence:
http://www.rioxx.net/licenses/all-rights-reserved