Show simple item record

dc.contributor.author: Bance, Manohar
dc.contributor.author: Brochier, Tim
dc.contributor.author: Vickers, Deborah
dc.contributor.author: Goehring, Tobias
dc.contributor.author: Roberts, Iwan
dc.contributor.author: Schlittenlacher, Josef
dc.contributor.author: Jiang, Chen
dc.date.accessioned: 2022-04-11T23:30:58Z
dc.date.available: 2022-04-11T23:30:58Z
dc.date.issued: 2022-11
dc.identifier.issn: 0018-9294
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/336003
dc.description.abstract: Goal: Advances in computational models of biological systems and artificial neural networks enable rapid virtual prototyping of neuroprostheses, accelerating innovation in the field. Here, we present an end-to-end computational model for predicting speech perception with cochlear implants (CI), the most widely-used neuroprosthesis. Methods: The model integrates CI signal processing, a finite element model of the electrically-stimulated cochlea, and an auditory nerve model to predict neural responses to speech stimuli. An automatic speech recognition neural network is then used to extract phoneme-level speech perception from these neural response patterns. Results: Compared to human CI listener data, the model predicts similar patterns of speech perception and misperception, captures between-phoneme differences in perceptibility, and replicates effects of stimulation parameters and noise on speech recognition. Information transmission analysis at different stages along the CI processing chain indicates that the bottleneck of information flow occurs at the electrode-neural interface, corroborating studies in CI listeners. Conclusion: An end-to-end model of CI speech perception replicated phoneme-level CI speech perception patterns, and was used to quantify information degradation through the CI processing chain. Significance: This type of model shows great promise for developing and optimizing new and existing neuroprostheses. Index Terms: neural prostheses, cochlear implants, computational models, automatic speech recognition, signal processing, information transmission, neural networks
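
For orientation, the processing chain summarised in the abstract (CI sound coding, current spread from electrodes to neural sites, auditory-nerve response, phoneme recognition) can be sketched as a minimal pipeline. This is an illustrative sketch only: all function names, parameters, and the simple maths below are placeholder assumptions, not the authors' published models.

```python
"""Illustrative sketch (not the authors' code) of the end-to-end chain:
audio -> CI sound coding -> electrode-to-nerve current spread ->
auditory-nerve response -> phoneme recognition."""
import numpy as np

def ci_sound_coding(audio, fs, n_electrodes=22):
    # Crude envelope-based channel analysis standing in for a real CI coding
    # strategy: split the short-time spectrum into n_electrodes bands and use
    # frame-wise band energies as per-electrode stimulation levels.
    frame = int(0.008 * fs)                              # 8 ms frames
    n_frames = len(audio) // frame
    frames = audio[:n_frames * frame].reshape(n_frames, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    bands = np.array_split(spec, n_electrodes, axis=1)
    return np.stack([b.mean(axis=1) for b in bands], axis=1)   # (frames, electrodes)

def electrode_to_nerve(stim, spread=2.0, n_sites=100):
    # Placeholder for the finite-element current-spread model: each electrode's
    # current decays exponentially with distance along the cochlea.
    n_frames, n_el = stim.shape
    el_pos = np.linspace(0, n_sites - 1, n_el)
    site_pos = np.arange(n_sites)
    weights = np.exp(-np.abs(site_pos[None, :] - el_pos[:, None]) / spread)
    return stim @ weights                                 # (frames, neural sites)

def auditory_nerve(drive, threshold=0.1):
    # Placeholder auditory-nerve model: saturating response to supra-threshold drive.
    return 1.0 - np.exp(-np.maximum(drive - threshold, 0.0))

def recognize_phonemes(neural_response):
    # In the paper this stage is an automatic-speech-recognition neural network
    # trained on neural response patterns; here it is left as a stub.
    raise NotImplementedError("plug in an ASR model over neural_response frames")

if __name__ == "__main__":
    fs = 16000
    audio = np.random.randn(fs)                           # stand-in for a speech stimulus
    stim = ci_sound_coding(audio, fs)
    drive = electrode_to_nerve(stim)
    response = auditory_nerve(drive)
    print(stim.shape, drive.shape, response.shape)
```

The same staged structure is what allows the information transmission analysis mentioned in the abstract: phoneme confusions can be measured after each stage to locate where information is lost, which the paper reports to be the electrode-neural interface.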
dc.description.sponsorship: H.B. Allen Charitable Trust
dc.publisher: Institute of Electrical and Electronics Engineers
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Humans
dc.subject: Cochlear Implants
dc.subject: Speech Perception
dc.subject: Cochlear Implantation
dc.subject: Noise
dc.subject: Cochlear Nerve
dc.title: From Microphone to Phoneme: An End-to-End Computational Neural Model for Predicting Speech Perception with Cochlear Implants
dc.type: Article
dc.publisher.department: Department of Clinical Neurosciences
dc.date.updated: 2022-04-11T12:41:43Z
prism.publicationName: IEEE Transactions on Biomedical Engineering
dc.identifier.doi: 10.17863/CAM.83435
dcterms.dateAccepted: 2022-03-26
rioxxterms.versionofrecord: 10.1109/TBME.2022.3167113
rioxxterms.version: AM
dc.contributor.orcid: Bance, Manohar [0000-0001-8050-3617]
dc.contributor.orcid: Vickers, Deborah [0000-0002-7498-5637]
dc.contributor.orcid: Goehring, Tobias [0000-0002-9038-3310]
dc.contributor.orcid: Roberts, Iwan [0000-0003-0826-4142]
dc.identifier.eissn: 1558-2531
dc.publisher.url: https://ieeexplore.ieee.org/document/9756885
rioxxterms.type: Journal Article/Review
pubs.funder-project-id: H.B. Allen Charitable Trust (Unknown)
pubs.funder-project-id: William Demant Foundation (Case no. 20-0390)
pubs.funder-project-id: National Institute for Health Research (NIHR) (via Guy's and St Thomas' NHS Foundation Trust) (201608)
pubs.funder-project-id: Wellcome Trust (204845/Z/16/Z)
pubs.funder-project-id: Medical Research Council (MR/S002537/1)
cam.issuedOnline: 2022-04-13
cam.orpheus.counter: 1
cam.depositDate: 2022-04-11
pubs.licence-identifier: apollo-deposit-licence-2-1
pubs.licence-display-name: Apollo Repository Deposit Licence Agreement


