Neural Sequence-Labelling Models for Grammatical Error Correction
Accepted version
Peer-reviewed
Authors
Giannakoudaki, E
Rei, M
Andersen, OE
Yuan, Zheng
Abstract
We propose an approach to N-best list reranking using neural sequence-labelling models. We train a compositional model for error detection that calculates the probability of each token in a sentence being correct or incorrect, utilising the full sentence as context. Using the error detection model, we then rerank the N best hypotheses generated by statistical machine translation systems. Our approach achieves state-of-the-art results on error correction for three different datasets, and it has the additional advantage of only using a small set of easily computed features that require no linguistic input.
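The reranking step described in the abstract can be sketched as follows: each hypothesis receives a detection score (the sum of log-probabilities that its tokens are correct, as judged by the error-detection model), which is interpolated with the SMT system's score. The function names, the `detect_probs` interface, and the interpolation weight below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def rerank(nbest, detect_probs, weight=1.0):
    """Pick the best hypothesis from an SMT N-best list.

    nbest: list of (tokens, smt_score) pairs, where tokens is a list of
        strings and smt_score is the translation model's score (log-domain).
    detect_probs: hypothetical callable mapping a token list to a list of
        P(token is correct) values, one per token, using the full
        sentence as context.
    weight: interpolation weight for the detection score (assumed).
    """
    def detection_score(tokens):
        # Sum of log-probabilities that each token is correct.
        return sum(math.log(p) for p in detect_probs(tokens))

    # Rerank by the interpolated score and return the top hypothesis.
    return max(nbest, key=lambda hyp: hyp[1] + weight * detection_score(hyp[0]))

# Toy usage with a stub detector that flags "go" as likely incorrect.
def stub_detector(tokens):
    return [0.2 if t == "go" else 0.9 for t in tokens]

nbest = [
    (["he", "go", "home"], -1.0),   # higher SMT score, but contains an error
    (["he", "goes", "home"], -1.2), # lower SMT score, error-free
]
best = rerank(nbest, stub_detector)
```

With the stub detector above, the error-free hypothesis overtakes the one the SMT system originally preferred, which is the intended effect of the reranking.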
Journal Title
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Conference Name
Conference on Empirical Methods in Natural Language Processing
Volume Title
D17-1
Publisher
Association for Computational Linguistics
Sponsorship
Cambridge Assessment (unknown)