Automatic transcription of multi-genre media archives

Type

Conference Object

Authors

Lanchantin, P 
Bell, PJ 
Gales, MJF 
Hain, T 
Liu, X 

Abstract

This paper describes recent results of our collaborative work on developing a speech recognition system for the automatic transcription of media archives from the British Broadcasting Corporation (BBC). The material includes a wide diversity of shows with associated metadata that vary greatly in completeness, reliability and accuracy. First, we investigate how to improve lightly supervised acoustic training when timestamp information is inaccurate and when the speech deviates significantly from the transcription, and how to perform evaluations when no reference transcripts are available. An automatic timestamp correction method, together with word- and segment-level approaches for combining the lightly supervised transcripts with the original programme scripts, is presented and yields improved metadata. Experimental results show that systems trained using the improved metadata consistently outperform those trained with only the original lightly supervised decoding hypotheses. Secondly, we show that the recognition task may benefit from systems trained on a combination of in-domain and out-of-domain data. Working with tandem HMMs, we describe Multi-level Adaptive Networks, a novel technique for incorporating information from out-of-domain posterior features using deep neural networks. We show that it provides a substantial reduction in word error rate (WER) over other systems, including a PLP-based baseline, in-domain tandem features, and the best out-of-domain tandem features.
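
The second part of the abstract refers to tandem systems, in which standard acoustic features are augmented with neural-network posterior features before HMM training. As a rough illustration of that general idea only (not the authors' Multi-level Adaptive Networks implementation), the sketch below appends decorrelated log-posteriors from a hypothetical out-of-domain DNN to in-domain PLP features; all function names, dimensions and the PCA step are assumptions for the example.

```python
# Minimal sketch of tandem-style feature augmentation with out-of-domain
# posterior features. Illustrative only: names, dimensions and the PCA
# decorrelation step are assumptions, not the paper's exact pipeline.
import numpy as np


def log_posterior_features(posteriors: np.ndarray, floor: float = 1e-10) -> np.ndarray:
    """Convert frame-level phone posteriors (T x P) to log-posterior features."""
    return np.log(np.maximum(posteriors, floor))


def decorrelate(features: np.ndarray, n_components: int) -> np.ndarray:
    """Reduce and decorrelate features with PCA (a stand-in for the
    projection usually applied to posterior features in tandem systems)."""
    centred = features - features.mean(axis=0, keepdims=True)
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]   # leading components
    return centred @ eigvecs[:, order]


def tandem_features(plp: np.ndarray, ood_posteriors: np.ndarray,
                    n_components: int = 26) -> np.ndarray:
    """Append processed out-of-domain posterior features to the PLP stream.

    plp:            T x D in-domain acoustic features (e.g. PLP + derivatives).
    ood_posteriors: T x P posteriors from a DNN trained on out-of-domain data
                    (hypothetical input here).
    """
    assert plp.shape[0] == ood_posteriors.shape[0], "streams must be frame-aligned"
    processed = decorrelate(log_posterior_features(ood_posteriors), n_components)
    return np.concatenate([plp, processed], axis=1)


if __name__ == "__main__":
    # Toy example: 100 frames, 39-dim PLP, 45 phone posteriors.
    rng = np.random.default_rng(0)
    plp = rng.standard_normal((100, 39))
    post = rng.dirichlet(np.ones(45), size=100)
    print(tandem_features(plp, post).shape)  # (100, 65)
```

The augmented features would then be used to train the tandem HMM system in place of the acoustic features alone.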

Journal Title

CEUR Workshop Proceedings

Journal ISSN

1613-0073

Volume Title

1012

Sponsorship
This research was supported by EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology).