First the worst: Finding better gender translations during beam search
Abstract
Generating machine translations via beam search seeks the most likely output under a model. However, beam search has been shown to amplify demographic biases exhibited by a model. We aim to address this, focusing on gender bias resulting from systematic errors in grammatical gender translation. Almost all prior work on this problem adjusts the training data or the model itself. By contrast, our approach changes only the inference procedure. We explore two techniques: constraining beam search to improve gender diversity in n-best lists, and reranking n-best lists using gender features obtained from the source sentence. Combining these strongly improves WinoMT gender translation accuracy for three language pairs without additional bilingual data or retraining. We also demonstrate our approach's utility for consistently gendering named entities, and its flexibility to handle new gendered language beyond the binary.
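The reranking idea in the abstract can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: the pronoun-based source cue, the tiny target-side gender lexicon, and the fixed agreement bonus are all assumptions introduced purely to show how an n-best list could be reordered using a gender feature extracted from the source sentence.

```python
# Illustrative sketch (not the paper's method): rerank an n-best list so that
# hypotheses whose grammatical gender agrees with a cue from the source
# sentence are preferred over higher-scoring but mis-gendered hypotheses.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    model_score: float  # log-probability assigned by the NMT model


def source_gender_cue(source: str) -> str:
    """Toy source-side feature: infer entity gender from English pronouns."""
    tokens = set(source.lower().split())
    if tokens & {"she", "her", "hers"}:
        return "feminine"
    if tokens & {"he", "him", "his"}:
        return "masculine"
    return "unknown"


def hypothesis_gender(hypothesis: str) -> str:
    """Toy target-side feature: a tiny gendered-word lexicon (Spanish here)."""
    tokens = set(hypothesis.lower().split())
    if tokens & {"la", "doctora", "enfermera"}:
        return "feminine"
    if tokens & {"el", "doctor", "enfermero"}:
        return "masculine"
    return "unknown"


def rerank(source: str, nbest: list[Hypothesis], bonus: float = 2.0) -> list[Hypothesis]:
    """Reorder the n-best list by model score plus a gender-agreement bonus."""
    cue = source_gender_cue(source)

    def adjusted(h: Hypothesis) -> float:
        agrees = cue != "unknown" and hypothesis_gender(h.text) == cue
        return h.model_score + (bonus if agrees else 0.0)

    return sorted(nbest, key=adjusted, reverse=True)


if __name__ == "__main__":
    nbest = [
        Hypothesis("El doctor llegó.", model_score=-1.0),
        Hypothesis("La doctora llegó.", model_score=-1.4),
    ]
    best = rerank("The doctor arrived because she was paged.", nbest)[0]
    print(best.text)  # selects the feminine hypothesis despite its lower model score
```

In this toy example the feminine hypothesis wins only because the source pronoun cue overrides the small model-score gap; a gender-diverse n-best list, such as one produced by the constrained beam search the abstract mentions, is what gives the reranker such a candidate to promote in the first place.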