Evaluation of deep marginal feedback cancellation for hearing aids using speech and music.

Published version
Peer-reviewed

Authors

Xu, Chenyang 
Wang, Meihuang 
Li, Xiaodong 

Abstract

Speech and music both play fundamental roles in daily life: speech is important for communication, while music is important for relaxation and social interaction. Both speech and music have a large dynamic range. This does not pose problems for listeners with normal hearing, but for hearing-impaired listeners, elevated hearing thresholds may make low-level portions of sound inaudible. Hearing aids with frequency-dependent amplification and amplitude compression can partly compensate for this problem. However, the gain required to make low-level portions of sound audible can exceed the maximum stable gain of a hearing aid, leading to acoustic feedback. Feedback control is used to avoid such instability, but it can introduce artifacts, especially when the gain is only just below the maximum stable gain. We previously proposed a deep-learning method called DeepMFC for controlling feedback and reducing artifacts, and showed that, when the sound source was speech, DeepMFC performed much better than traditional approaches. However, its performance with music as the sound source was not assessed, and the way in which it improved performance for speech was not determined. The present paper reveals how DeepMFC addresses feedback problems and evaluates DeepMFC using speech and music as sound sources, with both objective and subjective measures. DeepMFC achieved good performance for both speech and music when it was trained with matched training materials. When combined with an adaptive feedback canceller, it provided over 13 dB of additional stable gain for hearing-impaired listeners.
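
To make the feedback mechanism described above concrete, the following is a minimal closed-loop sketch, not the paper's DeepMFC method: it assumes a short FIR acoustic feedback path from the hearing-aid receiver back to the microphone, a flat forward gain, and a conventional NLMS adaptive feedback canceller (AFC) standing in for feedback control. All parameters (sampling rate, feedback-path taps, step size, gain ramp) are illustrative assumptions chosen only to show why gain above the maximum stable gain causes howling and how cancelling the feedback path restores stability.

# Toy closed-loop feedback demo (illustrative assumptions only; not DeepMFC).
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                       # assumed sampling rate (Hz)
duration = 3 * fs                # 3 s per condition

# Assumed acoustic feedback path: delayed, attenuated FIR impulse response.
fb_path = np.zeros(64)
fb_path[40] = 0.01               # roughly -40 dB receiver-to-microphone coupling
fb_path[45] = -0.005

# Rough maximum stable gain (MSG): the flat forward gain at which the loop
# magnitude first reaches 1, ignoring phase margins.
msg_db = -20.0 * np.log10(np.max(np.abs(np.fft.rfft(fb_path, 512))))

def run_loop(gain_db, use_afc):
    """Simulate the closed loop; return the peak output level (dB) of the last second."""
    # Gain ramps up over the first second (an assumed start-up ramp, so the
    # AFC can converge on the white-noise source before full gain is reached).
    gain_track = np.full(duration, gain_db)
    gain_track[:fs] = np.linspace(gain_db - 20.0, gain_db, fs)
    g = 10.0 ** (gain_track / 20.0)
    x = 0.01 * rng.standard_normal(duration)   # incoming source signal (white noise)
    w = np.zeros_like(fb_path)                 # AFC estimate of fb_path (starts at zero)
    out_buf = np.zeros(len(fb_path))           # most recent loudspeaker samples
    y_hist = np.zeros(duration)
    mu, eps = 0.5, 1e-8                        # NLMS step size and regularisation
    for n in range(duration):
        mic = x[n] + np.dot(fb_path, out_buf)  # microphone = source + acoustic feedback
        err = mic - np.dot(w, out_buf) if use_afc else mic
        if use_afc:                            # NLMS update of the feedback-path estimate
            w += mu * err * out_buf / (np.dot(out_buf, out_buf) + eps)
        y = np.clip(g[n] * err, -10.0, 10.0)   # forward path: flat gain, receiver clipping
        out_buf = np.roll(out_buf, 1)          # shift loudspeaker history by one sample
        out_buf[0] = y
        y_hist[n] = y
    return 20.0 * np.log10(np.max(np.abs(y_hist[-fs:])) + 1e-12)

print(f"Approximate MSG of the assumed feedback path: {msg_db:.1f} dB")
for gain_db in (msg_db - 6.0, msg_db + 6.0):
    for use_afc in (False, True):
        peak = run_loop(gain_db, use_afc)
        print(f"gain {gain_db:5.1f} dB, AFC={use_afc!s:5s}: peak output {peak:6.1f} dB")

Running the sketch prints the approximate maximum stable gain of the assumed feedback path and the peak output level with and without the canceller at gains 6 dB below and above it: without cancellation the loop howls and saturates once the gain exceeds the MSG, whereas with cancellation it stays stable. The additional stable gain reported for DeepMFC in the abstract is, analogously, the extra forward gain that can be applied before instability returns.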

Description

Peer reviewed: True

Keywords

Feedback cancellation; coloration effect; deep learning; howling; speech perception; Humans; Hearing Aids; Speech; Music; Feedback; Acoustic Stimulation; Signal Processing, Computer-Assisted

Journal Title

Trends Hear

Journal ISSN

2331-2165

Volume

27

Publisher

SAGE Publications

Sponsorship

National Key Research & Development (R&D) Program of China (2021YFB3201702)