Improving lightly supervised training for broadcast transcription


Type

Article

Authors

Long, Y 
Gales, MJF 
Lanchantin, P 
Liu, X 
Seigel, MS 

Abstract

This paper investigates improving lightly supervised acoustic model training for an archive of broadcast data. Standard lightly supervised training uses decoding hypotheses that are automatically derived with a biased language model. However, as the actual speech can deviate significantly from the original programme scripts that are supplied, the quality of these hypotheses can be poor. To address this issue, word-level and segment-level combination approaches are applied between the lightly supervised transcripts and the original programme scripts, yielding improved transcriptions. Experimental results show that systems trained on these improved transcriptions consistently outperform those trained on the original lightly supervised decoding hypotheses alone. This holds for both maximum likelihood and minimum phone error trained systems.
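
The following is a minimal, illustrative sketch of what a word-level combination of this kind could look like, assuming per-word confidence scores from the biased-LM decoding and a simple sequence alignment against the programme script. The alignment method (difflib), the confidence threshold, and all names below are assumptions made for illustration; they are not the combination scheme described in the paper.

```python
# Illustrative word-level combination of a biased-LM decoding hypothesis
# (with per-word confidence scores) and the original programme script.
# Assumptions: difflib alignment, a fixed 0.7 confidence threshold, and the
# function/variable names here are all hypothetical, not the paper's recipe.
from difflib import SequenceMatcher


def combine_word_level(hyp_words, hyp_conf, script_words, conf_threshold=0.7):
    """Return a combined transcription: keep hypothesis words where the two
    sources agree or the decoder is confident; otherwise trust the script."""
    matcher = SequenceMatcher(a=hyp_words, b=script_words, autojunk=False)
    combined = []
    for tag, h1, h2, s1, s2 in matcher.get_opcodes():
        if tag == "equal":
            combined.extend(hyp_words[h1:h2])        # both sources agree
        else:
            hyp_span = hyp_words[h1:h2]
            conf_span = hyp_conf[h1:h2]
            if hyp_span and sum(conf_span) / len(conf_span) >= conf_threshold:
                combined.extend(hyp_span)            # confident decoder output
            else:
                combined.extend(script_words[s1:s2])  # fall back to the script
    return combined


# Example: the decoder hypothesises a low-confidence filler word absent
# from the script, so the script version of that region is preferred.
hyp = ["the", "um", "programme", "starts", "now"]
conf = [0.95, 0.40, 0.90, 0.85, 0.92]
script = ["the", "programme", "starts", "now"]
print(" ".join(combine_word_level(hyp, conf, script)))
```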

Description

Keywords

lightly supervised training, speech recognition, confidence scores

Journal Title

Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH

Conference Name

Journal ISSN

2308-457X
1990-9772

Volume Title

Publisher

ISCA

Sponsorship

The research leading to these results was supported by EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology).