Automatic Assessment of Conversational Speaking Tests


Type
Conference Object
Authors
McKnight, Simon Webster 
Civelekoglu, Arda 
Gales, Mark JF 
Bannò, Stefano 
Liusie, Adian 
Abstract

Many speaking tests are conversational, dialogic in form, with an interlocutor talking to one or more candidates. This paper investigates how to automatically assess such a test. State-of-the-art approaches are used in a multi-stage pipeline: diarization and speaker assignment, to detect who is speaking and when; automatic speech recognition (ASR), to produce a transcript; and finally assessment. Each stage presents challenges, which are investigated in the paper. Advanced foundation-model-based auto-markers are examined: an ensemble of Longformer-based models that operates on the ASR output text, and a wav2vec2-based system that works directly on the audio. The two are combined to yield the final score. This fully automated system is evaluated in terms of ASR performance and the related impact of candidate assignment, as well as prediction of the candidate mark, on data from the Occupational English Test (OET), a conversational speaking test for L2 English healthcare professionals.
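
The pipeline described in the abstract can be summarised as three stages followed by score fusion. The following is a minimal Python sketch of that structure; the function names, interfaces, and the equal-weight averaging used to combine the two auto-marker scores are illustrative assumptions, not the paper's implementation.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Segment:
        speaker: str   # diarized speaker label, e.g. "candidate" or "interlocutor"
        start: float   # segment start time in seconds
        end: float     # segment end time in seconds

    def diarize(audio: bytes) -> List[Segment]:
        """Stage 1 (hypothetical): detect who is speaking and when."""
        raise NotImplementedError

    def transcribe(audio: bytes, segments: List[Segment]) -> str:
        """Stage 2 (hypothetical): ASR over the candidate's segments."""
        raise NotImplementedError

    def grade_text(transcript: str) -> float:
        """Stage 3a (hypothetical): ensemble of Longformer-based graders on ASR text."""
        raise NotImplementedError

    def grade_audio(audio: bytes, segments: List[Segment]) -> float:
        """Stage 3b (hypothetical): wav2vec2-based grader working directly on audio."""
        raise NotImplementedError

    def assess(audio: bytes) -> float:
        """Run the full pipeline; the 50/50 fusion weighting is an assumption."""
        segments = diarize(audio)
        candidate = [s for s in segments if s.speaker == "candidate"]
        transcript = transcribe(audio, candidate)
        return 0.5 * grade_text(transcript) + 0.5 * grade_audio(audio, candidate)
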

Conference Name
Workshop on Speech and Language Technology in Education (SLaTE)
Sponsorship
Cambridge Assessment (unknown)
This paper reports on research supported by Cambridge University Press & Assessment (CUP&A), a department of The Chancellor, Masters, and Scholars of the University of Cambridge. The OET data was provided by Cambridge Boxhill Language Assessment Trust (CBLA).