Neural Sequence-Labelling Models for Grammatical Error Correction
Editors
Palmer, Martha
Hwa, Rebecca
Riedel, Sebastian
Publication Date
2017-09-30
Journal Title
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Conference Name
Conference on Empirical Methods in Natural Language Processing
ISBN
9781945626838
Publisher
Association for Computational Linguistics
Volume
D17-1
Number
D17-1297
Pages
2795-2806
Language
English
Type
Conference Object
This Version
AM (Accepted Manuscript)
Metadata
Citation
Yannakoudakis, H., Rei, M., Andersen, O., & Yuan, Z. (2017). Neural Sequence-Labelling Models for Grammatical Error Correction. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, D17-1 (D17-1297), 2795-2806. https://doi.org/10.18653/v1/D17-1297
Abstract
We propose an approach to N-best list reranking using neural sequence-labelling models. We train a compositional model for error detection that calculates the probability of each token in a sentence being correct or incorrect, utilising the full sentence as context. Using the error detection model, we then re-rank the N best hypotheses generated by statistical machine translation systems. Our approach achieves state-of-the-art results on error correction for three different datasets, and it has the additional advantage of only using a small set of easily computed features that require no linguistic input.
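A minimal sketch of the re-ranking idea described in the abstract: a token-level error-detection model assigns each token a probability of being correct, those probabilities are collapsed into a sentence-level score, and that score is interpolated with the statistical machine translation decoder score to re-rank the N-best list. The function names, the averaging step, and the interpolation weight below are illustrative assumptions, not the authors' implementation.

# Hypothetical N-best re-ranking sketch (Python); all names are assumptions.
from typing import Callable, List, Sequence, Tuple

def sentence_score(token_probs: Sequence[float]) -> float:
    # Collapse per-token probabilities of being correct into one
    # sentence-level score (simple average; an assumed choice).
    return sum(token_probs) / max(len(token_probs), 1)

def rerank(
    hypotheses: List[Tuple[List[str], float]],   # (tokens, SMT decoder score)
    detect: Callable[[List[str]], List[float]],  # per-token P(correct), full-sentence context
    weight: float = 0.5,                         # interpolation weight (assumed)
) -> List[Tuple[List[str], float]]:
    # Interpolate the decoder score with the error-detection score and
    # return the hypotheses sorted best-first.
    rescored = [
        (tokens, (1 - weight) * smt_score + weight * sentence_score(detect(tokens)))
        for tokens, smt_score in hypotheses
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)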
Sponsorship
Cambridge Assessment (unknown)
Embargo Lift Date
2100-01-01
Identifiers
External DOI: https://doi.org/10.18653/v1/D17-1297
This record's URL: https://www.repository.cam.ac.uk/handle/1810/295717