
Could this be next for corpus linguistics? Methods of semi-automatic data annotation with contextualized word embeddings

Published version
Peer-reviewed


Abstract

This paper explores how linguistic data annotation can be made (semi-)automatic by means of machine learning. More specifically, we focus on the use of “contextualized word embeddings” (i.e. vectorized representations of the meaning of word tokens based on the sentential context in which they appear) extracted by large language models (LLMs). In three example case studies, we assess how the contextualized embeddings generated by LLMs can be combined with different machine learning approaches to serve as a flexible, adaptable semi-automated data annotation tool for corpus linguists. Subsequently, to evaluate which approach is most reliable across the different case studies, we use a Bayesian framework for model comparison, which estimates the probability that the performance of a given classification approach is stronger than that of an alternative approach. Our results indicate that combining contextualized word embeddings with metric fine-tuning yields highly accurate automatic annotations.
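
For readers unfamiliar with the general workflow, the following is a minimal sketch (not the authors' code) of how contextualized token embeddings might be extracted with a pretrained encoder and used to train a simple classifier on hand-annotated examples. The model name, the target word "bank", the sense labels, and the example sentences are all assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' implementation): extract contextualized
# embeddings for a target word from a pretrained transformer and fit a
# simple classifier on manually annotated seed examples.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-uncased"  # assumed model; any encoder works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed_target(sentence: str, target: str) -> torch.Tensor:
    """Return the contextualized embedding of `target` within `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (n_tokens, dim)
    # Average the sub-word pieces that belong to the target word.
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(target_ids) + 1):
        if ids[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    raise ValueError(f"'{target}' not found in sentence")

# Hand-annotated seed examples (hypothetical sense labels for "bank").
train = [
    ("She sat on the bank of the river.", "bank", "GROUND"),
    ("He deposited the cheque at the bank.", "bank", "FINANCE"),
    ("The bank raised its interest rates.", "bank", "FINANCE"),
    ("Reeds grew along the muddy bank.", "bank", "GROUND"),
]
X = torch.stack([embed_target(s, w) for s, w, _ in train]).numpy()
y = [label for _, _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Semi-automatic annotation of a new token.
new_vec = embed_target("They walked along the bank at dusk.", "bank").numpy()
print(clf.predict([new_vec]))
```

The paper itself compares several such classification approaches (including metric fine-tuning of the embeddings) and evaluates them with a Bayesian model-comparison framework; the sketch above only illustrates the basic embedding-plus-classifier pipeline.
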

Description

Peer reviewed: True


Publication status: Published


Funder: Platform Digital Infrastructure Social Sciences and Humanities (PDI-SSH)

Journal Title

Linguistics Vanguard

Journal ISSN

2199-174X

Volume

10

Publisher

Walter de Gruyter GmbH

Rights and licensing

Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0