
A data-efficient and easy-to-use lip language interface based on wearable motion capture and speech movement reconstruction.

Published version
Peer-reviewed

Repository DOI



Type

Article

Abstract

Lip language recognition urgently needs wearable, easy-to-use interfaces for interference-free, high-fidelity lip-reading acquisition, together with data-efficient decoder-modeling methods. Existing solutions suffer from unreliable lip reading, are data-hungry, and generalize poorly. Here, we propose a wearable lip language decoding technology that enables interference-free and high-fidelity acquisition of lip movements and data-efficient recognition of fluent lip language, based on wearable motion capture and continuous lip speech movement reconstruction. The method allows us to artificially generate any desired continuous speech dataset from a very limited corpus of word samples from users. By training the decoder on these artificial datasets, we achieve an average accuracy of 92.0% across individuals (n = 7) for actual continuous and fluent lip speech recognition over 93 English sentences, with no training burden on users because all training datasets are artificially generated. Our method greatly minimizes users' training/learning load and presents a data-efficient and easy-to-use paradigm for lip language recognition.
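The dataset-synthesis idea in the abstract (generating continuous sentence data from a small corpus of per-word samples) could, in spirit, be sketched as stitching word-level motion sequences together with a short cross-fade at the boundaries. This is a minimal illustrative sketch, not the authors' actual pipeline; the word bank, frame counts, `blend`, and `synthesize_sentence` are all hypothetical names and assumptions.

```python
import numpy as np

def blend(a, b, n=5):
    """Cross-fade the last n frames of sequence a into the first n frames of b."""
    w = np.linspace(0.0, 1.0, n)[:, None]          # ramp weights, shape (n, 1)
    joint = (1 - w) * a[-n:] + w * b[:n]           # blended boundary frames
    return np.concatenate([a[:-n], joint, b[n:]])

def synthesize_sentence(word_bank, words, n=5):
    """Stitch per-word motion sequences into one continuous 'sentence' sequence."""
    seq = word_bank[words[0]]
    for w in words[1:]:
        seq = blend(seq, word_bank[w], n)
    return seq

# Hypothetical word bank: each entry is (frames, channels) of lip-motion data.
rng = np.random.default_rng(0)
bank = {w: rng.standard_normal((20, 3)) for w in ["open", "the", "door"]}
sent = synthesize_sentence(bank, ["open", "the", "door"])
# Each blend merges n boundary frames, so three 20-frame words yield 50 frames.
```

Many such synthetic sentences, each labeled by its word sequence, could then serve as decoder training data without requiring the user to record continuous speech.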

Description

Keywords

Humans, Wearable Electronic Devices, Speech, Language, Lip, Movement, Male, Female, Adult, Lipreading, Motion Capture

Journal Title

Sci Adv

Conference Name

Journal ISSN

2375-2548

Volume Title

10

Publisher

American Association for the Advancement of Science (AAAS)
Sponsorship
EPSRC (EP/W017091/1)
EPSRC (EP/S023046/1)