Phonetic and graphemic systems for multi-genre broadcast transcription
dc.contributor.author | Wang, Yu | |
dc.contributor.author | Chen, X | |
dc.contributor.author | Gales, Mark | |
dc.contributor.author | Ragni, Anton | |
dc.contributor.author | Wong, JHM | |
dc.date.accessioned | 2018-11-01T14:04:18Z | |
dc.date.available | 2018-11-01T14:04:18Z | |
dc.date.issued | 2018-09-10 | |
dc.identifier.isbn | 9781538646588 | |
dc.identifier.issn | 1520-6149 | |
dc.identifier.uri | https://www.repository.cam.ac.uk/handle/1810/284543 | |
dc.description.abstract | State-of-the-art English automatic speech recognition systems typically use phonetic rather than graphemic lexicons. Graphemic systems are known to perform less well for English as the mapping from the written form to the spoken form is complicated. However, in recent years the representational power of deep-learning based acoustic models has improved, raising interest in graphemic acoustic models for English, due to the simplicity of generating the lexicon. In this paper, phonetic and graphemic models are compared for an English Multi-Genre Broadcast transcription task. A range of acoustic models based on lattice-free MMI training are constructed using phonetic and graphemic lexicons. For this task, it is found that having a long-span temporal history reduces the difference in performance between the two forms of models. In addition, system combination is examined, using parameter smoothing and hypothesis combination. As the combination approaches become more complicated the difference between the phonetic and graphemic systems further decreases. Finally, for all configurations examined the combination of phonetic and graphemic systems yields consistent gains. | |
dc.description.sponsorship | This research was partly funded under the ALTA Institute, University of Cambridge. Thanks to Cambridge English, University of Cambridge, for supporting this research. | |
dc.publisher | IEEE | |
dc.title | Phonetic and graphemic systems for multi-genre broadcast transcription | |
dc.type | Conference Object | |
prism.endingPage | 5903 | |
prism.publicationDate | 2018 | |
prism.publicationName | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | |
prism.startingPage | 5899 | |
prism.volume | 2018-April | |
dc.identifier.doi | 10.17863/CAM.31917 | |
dcterms.dateAccepted | 2018-02-01 | |
rioxxterms.versionofrecord | 10.1109/ICASSP.2018.8462353 | |
rioxxterms.licenseref.uri | http://www.rioxx.net/licenses/all-rights-reserved | |
rioxxterms.licenseref.startdate | 2018-09-10 | |
dc.contributor.orcid | Gales, Mark [0000-0002-5311-8219] | |
dc.publisher.url | https://ieeexplore.ieee.org/document/8462353 | |
rioxxterms.type | Conference Paper/Proceeding/Abstract | |
pubs.funder-project-id | Cambridge Assessment (unknown) | |
dc.identifier.url | https://ieeexplore.ieee.org/document/8462353 | |
pubs.conference-name | 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | |
pubs.conference-start-date | 2018-04-15 | |
pubs.conference-finish-date | 2018-04-20 | |
rioxxterms.freetoread.startdate | 2019-09-10 |
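The abstract's central contrast — graphemic lexicons are trivial to generate while phonetic ones require expert (or grapheme-to-phoneme) knowledge — can be illustrated with a minimal sketch. This is not code from the paper; the CMUdict-style phone labels and the `graphemic_entry` helper are illustrative assumptions.

```python
def graphemic_entry(word: str) -> list[str]:
    """A graphemic lexicon maps a word directly to its letter sequence.

    This is why graphemic systems are simple to build: the lexicon
    can be generated mechanically from the word list alone.
    """
    return list(word.lower())


# A phonetic lexicon, by contrast, needs a hand-crafted or
# G2P-predicted mapping from spelling to sound. Phone labels here
# follow a CMUdict-style convention (illustrative, not from the paper).
phonetic_lexicon = {
    "speech": ["S", "P", "IY", "CH"],
}

# English spelling-to-sound is irregular, which is why graphemic
# models historically lag phonetic ones: the same grapheme sequence
# "ee" maps to IY here, but to other phones in other words.
print(graphemic_entry("speech"))   # letters, derived mechanically
print(phonetic_lexicon["speech"])  # phones, requiring expert knowledge
```

The paper's finding is that acoustic models with long-span temporal history (trained with lattice-free MMI) narrow the gap between these two lexicon types, since the model itself can learn much of the spelling-to-sound irregularity that the phonetic lexicon encodes by hand.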