
Speaking to a metronome reduces kinematic variability in typical speakers and people who stutter.

Published version
Peer-reviewed

Abstract

BACKGROUND: Several studies indicate that people who stutter show greater variability in speech movements than people who do not stutter, even when the speech produced is perceptibly fluent. Speaking to the beat of a metronome reliably increases fluency in people who stutter, regardless of the severity of stuttering.

OBJECTIVES: Here, we aimed to test whether metronome-timed speech reduces articulatory variability.

METHOD: We analysed vocal tract MRI data from 24 people who stutter and 16 controls. Participants repeated sentences with and without a metronome. Midsagittal images of the vocal tract from lips to larynx were reconstructed at 33.3 frames per second. Any utterances containing dysfluencies or non-speech movements (e.g. swallowing) were excluded. For each participant, we measured the variability of movements (coefficient of variation) from the alveolar, palatal and velar regions of the vocal tract.

RESULTS: People who stutter showed more variability than control speakers when speaking without a metronome; this variability was reduced to the same level as controls when speaking with the metronome. The velar region contained more variability than the alveolar and palatal regions, which were similar to each other.

CONCLUSIONS: These results demonstrate that kinematic variability during perceptibly fluent speech is increased in people who stutter compared with controls when repeating naturalistic sentences without any alteration or disruption to the speech. This extends our previous findings of greater variability in the movements of people who stutter, compared with controls, when producing perceptibly fluent nonwords. These results also show that, in addition to increasing fluency in people who stutter, metronome-timed speech reduces articulatory variability to the same level as that seen in control speakers.
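The coefficient of variation reported in the abstract is the standard deviation of a movement trace divided by its mean. The snippet below is a minimal illustrative sketch of computing this measure for per-region displacement traces; the array sizes, region names, sampling duration, and simulated values are assumptions for illustration only and do not reflect the authors' actual analysis pipeline.

```python
import numpy as np

def coefficient_of_variation(trace):
    """Coefficient of variation (CV) of a 1-D displacement trace: std / mean."""
    trace = np.asarray(trace, dtype=float)
    return np.std(trace) / np.mean(trace)

# Hypothetical example: displacement traces (arbitrary units) sampled at
# 33.3 frames per second for three vocal tract regions during one utterance.
rng = np.random.default_rng(0)
frames = 100  # roughly 3 s of speech at 33.3 fps
regions = {
    "alveolar": 5 + rng.normal(0, 0.5, frames),
    "palatal":  5 + rng.normal(0, 0.5, frames),
    "velar":    5 + rng.normal(0, 1.0, frames),
}

for name, trace in regions.items():
    print(f"{name}: CV = {coefficient_of_variation(trace):.3f}")
```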

Description

Acknowledgements: The authors would like to thank Juliet Semple, Nicola Aikin, Nicola Filippini, and Stuart Clare for their MRI support, Louisa Needham for her assistance with recruitment, Magdalena Saumweber for her assistance with data processing, and Sam Jones for useful feedback on the statistical analysis. We would like to thank Aivy Nguyen, Timothy Berezhnoy, and Anna Nolan for their assistance in acoustic segmentation. Lastly, but certainly not least, the authors would like to thank all the participants who took part in this study.

Journal Title

PLoS One

Journal ISSN

1932-6203

Volume

19

Publisher

Public Library of Science (PLoS)

Rights and licensing

Except where otherwise noted, this item's license is described as Creative Commons Attribution 4.0 International (CC BY 4.0): http://creativecommons.org/licenses/by/4.0/
Sponsorship
Engineering and Physical Science Research Council UK (EP/N509711/1)
Economic and Social Research Council UK (ES/J500112/1)
European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Marie Skłodowska-Curie grant agreement (754388)
Royal Academy of Engineering (RF201617\16\23)
Medical Research Council (MR/N025539/1)
Wellcome Trust (203139/Z/16/Z)
NIHR Oxford Biomedical Research Centre (NIHR203316)
Wellcome Trust (203139/A/16/Z)