Sounds of COVID-19: exploring realistic performance of audio-based digital testing
Authors
Han, Jing
Xia, Tong
Spathis, Dimitris
Bondareva, Erika
Brown, Chloë
Chauhan, Jagmohan
Dang, Ting
Grammenos, Andreas
Hasthanasombat, Apinan
Floto, Andres
Mascolo, Cecilia
Publication Date
2022-01-28
Journal Title
npj Digital Medicine
Publisher
Nature Publishing Group UK
Volume
5
Issue
1
Language
en
Type
Article
This Version
VoR
Citation
Han, J., Xia, T., Spathis, D., Bondareva, E., Brown, C., Chauhan, J., Dang, T., et al. (2022). Sounds of COVID-19: exploring realistic performance of audio-based digital testing. npj Digital Medicine, 5 (1) https://doi.org/10.1038/s41746-021-00553-x
Description
Funder: European Research Council Advanced Research Grant Project 833296
Abstract
To identify Coronavirus disease (COVID-19) cases efficiently, affordably, and at scale, recent work has shown how audio-based approaches (including cough, breathing, and voice) can be used for testing. However, there is a lack of exploration of how biases and methodological decisions impact these tools’ performance in practice. In this paper, we explore the realistic performance of audio-based digital testing of COVID-19. To investigate this, we collected a large crowdsourced respiratory audio dataset through a mobile app, alongside symptoms and COVID-19 test results. Within the collected dataset, we selected 5240 samples from 2478 English-speaking participants and split them into participant-independent sets for model development and validation. In addition to controlling the language, we also balanced demographics for model training to avoid potential acoustic bias. We used these audio samples to construct an audio-based COVID-19 prediction model. The unbiased model took features extracted from breathing, cough, and voice signals as predictors and yielded an AUC-ROC of 0.71 (95% CI: 0.65–0.77). We further explored several scenarios with different types of unbalanced data distributions to demonstrate how biases and participant splits affect the performance. With these different, but less appropriate, evaluation strategies, the performance could be overestimated, reaching an AUC of up to 0.90 (95% CI: 0.85–0.95) in some circumstances. We found that an unrealistic experimental setting can result in misleading, sometimes over-optimistic, performance. Instead, we report complete and reliable results on crowdsourced data, which would allow medical professionals and policy makers to accurately assess the value of this technology and facilitate its deployment.
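The participant-independent split described above is the methodological point the abstract stresses: if multiple samples from the same participant land in both the training and validation sets, a model can exploit speaker identity and the reported AUC-ROC is inflated. A minimal sketch of such a split (not the authors' code; synthetic data and scikit-learn's `GroupShuffleSplit` are illustrative assumptions):

```python
# Sketch of participant-independent evaluation: all samples from a given
# participant are kept on one side of the train/validation split, so the
# AUC-ROC cannot be inflated by the model memorising speaker identity.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_participants, samples_each = 200, 3
groups = np.repeat(np.arange(n_participants), samples_each)   # participant IDs
labels = np.repeat(rng.integers(0, 2, n_participants), samples_each)
# Toy "audio features": a weak label signal plus a strong per-participant
# offset that stands in for speaker identity.
features = (labels[:, None] * 0.3
            + rng.normal(size=(n_participants, 8))[groups]
            + rng.normal(size=(len(groups), 8)))

# Grouped split: no participant appears on both sides.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train, test = next(splitter.split(features, labels, groups))
assert set(groups[train]).isdisjoint(set(groups[test]))

clf = LogisticRegression(max_iter=1000).fit(features[train], labels[train])
auc = roc_auc_score(labels[test], clf.predict_proba(features[test])[:, 1])
print(f"participant-independent AUC-ROC: {auc:.2f}")
```

Replacing `GroupShuffleSplit` with a plain sample-level `train_test_split` on data like this typically yields a higher, misleading AUC, which mirrors the overestimation scenario the abstract reports.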
Keywords
Article
Identifiers
s41746-021-00553-x, 553
External DOI: https://doi.org/10.1038/s41746-021-00553-x
This record's URL: https://www.repository.cam.ac.uk/handle/1810/333394
Rights
Licence:
http://creativecommons.org/licenses/by/4.0/