Automatic Transcription of Multi-genre Media Archives
Bell, P. J.
Gales, M. J. F.
Seigel, M. S.
Woodland, P. C.
CEUR Workshop Proceedings
Lanchantin, P., Bell, P. J., Gales, M. J. F., Hain, T., Liu, X., Long, Y., Quinnell, J., et al. (2013). Automatic Transcription of Multi-genre Media Archives. CEUR Workshop Proceedings, 1012, 26-31.
This paper was presented at the First Workshop on Speech, Language and Audio in Multimedia, August 22-23, 2013, in Marseille. It was published in CEUR Workshop Proceedings at http://ceur-ws.org/Vol-1012/.
This paper describes recent results of our collaborative work on developing a speech recognition system for the automatic transcription of media archives from the British Broadcasting Corporation (BBC). The material includes a wide diversity of shows with their associated metadata, which vary greatly in completeness, reliability and accuracy. First, we investigate how to improve lightly supervised acoustic training when timestamp information is inaccurate and when speech deviates significantly from the transcription, and how to perform evaluations when no reference transcripts are available. An automatic timestamp correction method, together with word-level and segment-level approaches for combining the lightly supervised transcripts with the original programme scripts, is presented and shown to yield improved metadata. Experimental results show that systems trained using the improved metadata consistently outperform those trained with only the original lightly supervised decoding hypotheses. Second, we show that the recognition task may benefit from systems trained on a combination of in-domain and out-of-domain data. Working with tandem HMMs, we describe Multi-level Adaptive Networks, a novel technique for incorporating information from out-of-domain posterior features using deep neural networks. We show that it provides a substantial reduction in WER over other systems, including a PLP-based baseline, in-domain tandem features, and the best out-of-domain tandem features.
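To illustrate the word-level combination idea mentioned in the abstract, the following is a minimal sketch, not the paper's actual algorithm: it aligns a lightly supervised decoding hypothesis against the original programme script and merges them word by word. The merging policy shown here (trust the script where the sequences agree, trust the recogniser where they disagree, and drop scripted words that were never recognised) is an assumption for illustration only; the function name and the use of `difflib` alignment are likewise hypothetical and not from the paper.

```python
from difflib import SequenceMatcher

def combine_word_level(hypothesis, script):
    """Illustrative merge of a lightly supervised decoding hypothesis
    (list of words) with the original programme script (list of words).
    NOTE: a simplified stand-in for the combination approach described
    in the paper, not a reimplementation of it."""
    matcher = SequenceMatcher(a=hypothesis, b=script, autojunk=False)
    merged = []
    for op, h0, h1, s0, s1 in matcher.get_opcodes():
        if op == "equal":
            # Hypothesis and script agree: take the script words.
            merged.extend(script[s0:s1])
        elif op in ("replace", "delete"):
            # Disagreement, or speech absent from the script:
            # trust what the recogniser heard in the audio.
            merged.extend(hypothesis[h0:h1])
        # "insert": scripted words with no recognised counterpart
        # are assumed unspoken and dropped.
    return merged
```

For example, combining the hypothesis "well the cat sat on mat" with the script "the cat sat on the mat" keeps the unscripted "well" while recovering the scripted words where the two agree.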
This research was supported by EPSRC Programme Grant EP/I031022/1 (Natural Speech Technology).
This record's URL: http://www.dspace.cam.ac.uk/handle/1810/244726
Attribution-NonCommercial 2.0 UK: England & Wales
Licence URL: http://creativecommons.org/licenses/by-nc/2.0/uk/