Show simple item record

dc.contributor.authorBance, Manohar
dc.contributor.authorBrochier, Tim
dc.contributor.authorVickers, Deborah
dc.contributor.authorGoehring, Tobias
dc.contributor.authorRoberts, Iwan
dc.contributor.authorSchlittenlacher, Josef
dc.contributor.authorJiang, Chen
dc.description.abstractGoal: Advances in computational models of biological systems and artificial neural networks enable rapid virtual prototyping of neuroprostheses, accelerating innovation in the field. Here, we present an end-to-end computational model for predicting speech perception with cochlear implants (CI), the most widely used neuroprosthesis. Methods: The model integrates CI signal processing, a finite element model of the electrically stimulated cochlea, and an auditory nerve model to predict neural responses to speech stimuli. An automatic speech recognition neural network is then used to extract phoneme-level speech perception from these neural response patterns. Results: Compared to human CI listener data, the model predicts similar patterns of speech perception and misperception, captures between-phoneme differences in perceptibility, and replicates effects of stimulation parameters and noise on speech recognition. Information transmission analysis at different stages along the CI processing chain indicates that the bottleneck of information flow occurs at the electrode-neural interface, corroborating studies in CI listeners. Conclusion: An end-to-end model of CI speech perception replicated phoneme-level CI speech perception patterns and was used to quantify information degradation through the CI processing chain. Significance: This type of model shows great promise for developing and optimizing new and existing neuroprostheses. Index Terms: neural prostheses, cochlear implants, computational models, automatic speech recognition, signal processing, information transmission, neural networks
dc.description.sponsorshipHB Allen Trust charity
dc.publisherInstitute of Electrical and Electronics Engineers
dc.rightsAttribution 4.0 International
dc.subjectCochlear Implants
dc.subjectSpeech Perception
dc.subjectCochlear Implantation
dc.subjectCochlear Nerve
dc.titleFrom Microphone to Phoneme: An End-to-End Computational Neural Model for Predicting Speech Perception with Cochlear Implants
dc.publisher.departmentDepartment of Clinical Neurosciences
prism.publicationNameIEEE Transactions on Biomedical Engineering
dc.contributor.orcidBance, Manohar [0000-0001-8050-3617]
dc.contributor.orcidVickers, Deborah [0000-0002-7498-5637]
dc.contributor.orcidGoehring, Tobias [0000-0002-9038-3310]
dc.contributor.orcidRoberts, Iwan [0000-0003-0826-4142]
rioxxterms.typeJournal Article/Review
pubs.funder-project-idH.B. Allen Charitable Trust (Unknown)
pubs.funder-project-idWilliam Demant Foundation (Case no. 20-0390)
pubs.funder-project-idNational Institute for Health Research (NIHR) (via Guy's and St Thomas' NHS Foundation Trust) (201608)
pubs.funder-project-idWellcome Trust (204845/Z/16/Z)
pubs.funder-project-idMedical Research Council (MR/S002537/1)
pubs.licence-display-nameApollo Repository Deposit Licence Agreement

