
dc.contributor.author: Yoshioka, Takuya
dc.contributor.author: Chen, Xie
dc.contributor.author: Gales, Mark
dc.date.accessioned: 2015-04-21T13:51:40Z
dc.date.available: 2015-04-21T13:51:40Z
dc.date.issued: 2014-05-04
dc.identifier.citation: Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, Issue Date: 4-9 May 2014, Written by: Yoshioka, T.; Xie Chen; Gales, M.J.F.
dc.identifier.issn: 1520-6149
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/247409
dc.description.abstract: Over the past few decades, a range of front-end techniques have been proposed to improve the robustness of automatic speech recognition systems against environmental distortion. While these techniques are effective for small tasks consisting of carefully designed data sets, especially when used with a classical acoustic model, there has been limited evidence that they are useful for a state-of-the-art system with large-scale realistic data. This paper focuses on reverberation as a type of distortion and investigates the degree to which dereverberation processing can improve the performance of various forms of acoustic models based on deep neural networks (DNNs) in a challenging meeting transcription task using a single distant microphone. Experimental results show that dereverberation improves the recognition performance regardless of the acoustic model structure and the type of the feature vectors input into the neural networks, providing additional relative improvements of 4.7% and 4.1% to our best configured speaker-independent and speaker-adaptive DNN-based systems, respectively.
dc.description.sponsorship: Xie Chen was funded by Toshiba Research Europe Ltd, Cambridge Research Lab.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE
dc.title: Impact of single-microphone dereverberation on DNN-based meeting transcription systems
dc.type: Conference Object
dc.description.version: This is the accepted manuscript of a paper published in the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4-9 May 2014.
prism.endingPage: 5531
prism.publicationDate: 2014
prism.publicationName: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
prism.startingPage: 5527
rioxxterms.versionofrecord: 10.1109/ICASSP.2014.6854660
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2014-05-04
dc.contributor.orcid: Gales, Mark [0000-0002-5311-8219]
rioxxterms.type: Conference Paper/Proceeding/Abstract

