
dc.contributor.author: Wang, Yu
dc.contributor.author: Chen, X
dc.contributor.author: Gales, Mark
dc.contributor.author: Ragni, Anton
dc.contributor.author: Wong, JHM
dc.date.accessioned: 2018-11-01T14:04:18Z
dc.date.available: 2018-11-01T14:04:18Z
dc.date.issued: 2018-09-10
dc.identifier.isbn: 9781538646588
dc.identifier.issn: 1520-6149
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/284543
dc.description.abstract: State-of-the-art English automatic speech recognition systems typically use phonetic rather than graphemic lexicons. Graphemic systems are known to perform less well for English as the mapping from the written form to the spoken form is complicated. However, in recent years the representational power of deep-learning based acoustic models has improved, raising interest in graphemic acoustic models for English, due to the simplicity of generating the lexicon. In this paper, phonetic and graphemic models are compared for an English Multi-Genre Broadcast transcription task. A range of acoustic models based on lattice-free MMI training are constructed using phonetic and graphemic lexicons. For this task, it is found that having a long-span temporal history reduces the difference in performance between the two forms of models. In addition, system combination is examined, using parameter smoothing and hypothesis combination. As the combination approaches become more complicated the difference between the phonetic and graphemic systems further decreases. Finally, for all configurations examined the combination of phonetic and graphemic systems yields consistent gains.
dc.description.sponsorship: This research was partly funded under the ALTA Institute, University of Cambridge. Thanks to Cambridge English, University of Cambridge, for supporting this research.
dc.publisher: IEEE
dc.title: Phonetic and graphemic systems for multi-genre broadcast transcription
dc.type: Conference Object
prism.endingPage: 5903
prism.publicationDate: 2018
prism.publicationName: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
prism.startingPage: 5899
prism.volume: 2018-April
dc.identifier.doi: 10.17863/CAM.31917
dcterms.dateAccepted: 2018-02-01
rioxxterms.versionofrecord: 10.1109/ICASSP.2018.8462353
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2018-09-10
dc.contributor.orcid: Gales, Mark [0000-0002-5311-8219]
dc.publisher.url: https://ieeexplore.ieee.org/document/8462353
rioxxterms.type: Conference Paper/Proceeding/Abstract
pubs.funder-project-id: Cambridge Assessment (unknown)
dc.identifier.url: https://ieeexplore.ieee.org/document/8462353
pubs.conference-name: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
pubs.conference-start-date: 2018-04-15
pubs.conference-finish-date: 2018-04-20
rioxxterms.freetoread.startdate: 2019-09-10

