An Operation Sequence Model for Explainable Neural Machine Translation
Conference Name
EMNLP BlackboxNLP workshop 2018
Type
Conference Object
Metadata
Citation
Stahlberg, F., Saunders, D., & Byrne, W. (2018). An Operation Sequence Model for Explainable Neural Machine Translation. EMNLP BlackboxNLP workshop 2018. https://doi.org/10.17863/CAM.30849
Abstract
We propose to achieve explainable neural machine translation (NMT) by
changing the output representation so that it explains itself. We present a
novel approach to NMT which generates the target sentence by monotonically
walking through the source sentence. Word reordering is modeled by operations
which set markers in the target sentence and move a target-side write head
between those markers. In contrast to many modern neural models, our system
emits explicit word alignment information, which is often crucial in practical
machine translation and improves explainability. Under the recent Transformer
architecture, our technique can outperform a plain-text system in terms of
BLEU score on Japanese-English and Portuguese-English, and is within 0.5 BLEU
of it on Spanish-English.
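The marker-and-write-head mechanism in the abstract can be sketched as a small interpreter: an operation sequence mixes ordinary target tokens with control operations that advance a monotone source read head, drop markers into the target buffer, and jump the write head between markers. The operation names and exact jump semantics below are illustrative assumptions, not the authors' exact scheme; they are meant only to show how such a sequence yields both a target sentence and explicit word alignments.

```python
# Hypothetical operation symbols (the paper's inventory may differ).
SET_MARKER, JMP_FWD, JMP_BWD, SRC_POP = "<M>", "<JF>", "<JB>", "<POP>"

def execute(ops, src):
    """Interpret an operation sequence into (target tokens, alignments)."""
    buf = []        # target-side buffer holding tokens and markers
    head = 0        # write head: new material is inserted at buf[head]
    src_pos = 0     # monotone read position in the source sentence
    align = []      # (source index, target token) alignment links
    for op in ops:
        if op == SRC_POP:
            src_pos += 1                     # walk monotonically through src
        elif op == SET_MARKER:
            buf.insert(head, SET_MARKER)     # drop a marker at the head
            head += 1
        elif op == JMP_FWD:
            head = buf.index(SET_MARKER, head) + 1   # move past next marker
        elif op == JMP_BWD:
            # move to just before the closest marker left of the head
            head = max(i for i in range(head) if buf[i] == SET_MARKER)
        else:                                # ordinary target token
            align.append((src_pos, op))      # aligned to current source word
            buf.insert(head, op)
            head += 1
    target = [t for t in buf if t != SET_MARKER]
    return target, align

# Toy reordering example: source "gelacht hat" -> target "has laughed".
# "laughed" is emitted first while reading "gelacht"; a backward jump
# lets "has" be written before it when "hat" is read.
ops = [SET_MARKER, "laughed", SRC_POP, JMP_BWD, "has", SRC_POP]
target, align = execute(ops, ["gelacht", "hat"])
# target -> ["has", "laughed"]; align -> [(0, "laughed"), (1, "has")]
```

Because every emitted token records the source position it was produced at, the alignment falls out of decoding for free, which is the explainability property the abstract emphasizes.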
Sponsorship
EPSRC (1632937)
Engineering and Physical Sciences Research Council (EP/L027623/1)
Identifiers
External DOI: https://doi.org/10.17863/CAM.30849
This record's URL: https://www.repository.cam.ac.uk/handle/1810/283483
Rights
All rights reserved
Licence:
http://www.rioxx.net/licenses/all-rights-reserved