Attending to characters in neural sequence labeling models

Published version
Peer-reviewed

Abstract

Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from a word- or character-level component. We evaluated different architectures on a range of sequence labeling datasets, and character-level extensions were found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivered the best results even with a smaller number of trainable parameters.
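
For illustration, the gating idea described in the abstract can be sketched as follows. This is a minimal sketch assuming a PyTorch-style implementation; the module and parameter names are hypothetical and not taken from the authors' released code. A gate vector z, computed from both representations, decides per dimension how much of the final token representation comes from the word embedding and how much from the character-level component.

    import torch
    import torch.nn as nn

    class GatedWordCharCombiner(nn.Module):
        """Combine a word embedding and a character-level vector with a
        learned sigmoid gate (hypothetical module name)."""

        def __init__(self, dim: int):
            super().__init__()
            # Both inputs are assumed to share the same dimensionality `dim`.
            self.word_proj = nn.Linear(dim, dim, bias=False)
            self.char_proj = nn.Linear(dim, dim, bias=False)
            self.gate_proj = nn.Linear(dim, dim, bias=False)

        def forward(self, word_vec: torch.Tensor, char_vec: torch.Tensor) -> torch.Tensor:
            # z in (0, 1)^dim: how much to trust the word-level component
            # for each feature of this token.
            z = torch.sigmoid(
                self.gate_proj(torch.tanh(self.word_proj(word_vec) + self.char_proj(char_vec)))
            )
            # Convex per-dimension mixture of the two representations.
            return z * word_vec + (1.0 - z) * char_vec

Under this formulation, a rare or unseen word whose embedding carries little signal can have its gate pushed toward 0 during training, so the character-level component dominates, while frequent words can keep z close to 1.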

Journal Title

Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Conference Name

The International Conference on Computational Linguistics (COLING)

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0).

Sponsorship

Cambridge Assessment (unknown)