Future word contexts in neural network language models

Accepted version
Peer-reviewed

Type

Article

Authors

Chen, X 
Liu, X 
Ragni, A 
Wang, Y 
Gales, MJF 

Abstract

Recently, bidirectional recurrent neural network language models (bi-RNNLMs) have been shown to outperform standard, unidirectional, recurrent neural network language models (uni-RNNLMs) on a range of speech recognition tasks. This indicates that future word context information beyond the word history can be useful. However, bi-RNNLMs pose a number of challenges as they make use of the complete previous and future word context information. This impacts both training efficiency and their use within a lattice rescoring framework. In this paper these issues are addressed by proposing a novel neural network structure, succeeding word RNNLMs (su-RNNLMs). Instead of using a recurrent unit to capture the complete future word context, a feedforward unit is used to model a finite number of succeeding words. This model can be trained much more efficiently than bi-RNNLMs and can also be used for lattice rescoring. Experimental results on a meeting transcription task (AMI) show that the proposed model consistently outperforms uni-RNNLMs and yields only a slight degradation compared to bi-RNNLMs in N-best rescoring. Additional performance improvements can be obtained using lattice rescoring and subsequent confusion network decoding.
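To illustrate the structure the abstract describes, the sketch below combines a recurrent encoding of the word history with a feedforward encoding of a fixed window of succeeding words. It is a minimal, hypothetical PyTorch sketch: the class name SuRNNLM, the GRU choice, the layer sizes, and the concatenation of the two representations are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SuRNNLM(nn.Module):
    # Illustrative sketch of a succeeding-word RNNLM; hyperparameters are assumptions.
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, num_future=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Recurrent unit models the complete word history, as in a uni-RNNLM.
        self.history_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Feedforward unit models a fixed window of succeeding words,
        # replacing the backward recurrence of a bi-RNNLM.
        self.future_ff = nn.Linear(num_future * embed_dim, hidden_dim)
        self.output = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, history, future):
        # history: (batch, t) word ids up to the current position
        # future:  (batch, num_future) ids of the succeeding words
        _, h = self.history_rnn(self.embed(history))            # h: (1, batch, hidden)
        f = torch.tanh(self.future_ff(self.embed(future).flatten(1)))
        combined = torch.cat([h.squeeze(0), f], dim=-1)         # (batch, 2 * hidden)
        return self.output(combined)                            # logits over vocabulary

Because the future context is a fixed-size window rather than a full backward recurrence, the model avoids processing complete sentences in both directions, which is what makes it cheaper to train than a bi-RNNLM and usable within lattice rescoring.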

Keywords

Bidirectional recurrent neural network, language model, succeeding words, speech recognition

Journal Title

2017 IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2017 - Proceedings

Volume Title

2018-January

Publisher

IEEE

Sponsorship

Cambridge Assessment (unknown)