Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks

Published version
Peer-reviewed

Type

Conference Object

Change log

Authors

Xu, W 
Auli, M 
Clark, S 

Abstract

We present expected F-measure training for shift-reduce parsing with recurrent neural networks (RNNs), which enables learning a global parsing model optimized for sentence-level F1. We apply the model to CCG parsing, where it improves over a strong greedy RNN baseline by 1.47% F1, yielding state-of-the-art results for shift-reduce CCG parsing.
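The core idea in the abstract is risk-style training: rather than optimizing per-action decisions, the model is trained on the F1 expected under its distribution over candidate parses. As a minimal sketch (not the paper's implementation; the candidate list, scores, and per-candidate F1 values here are assumed inputs), the objective for one sentence can be written as a softmax over global parse scores weighted by each candidate's sentence-level F1:

```python
import math

def expected_f1(scores, f1_scores):
    """Expected sentence-level F1 of a beam of candidate parses.

    scores    : global model scores, one per candidate parse (assumed given)
    f1_scores : sentence-level F1 of each candidate against the gold parse
    Training would maximize this quantity (equivalently, minimize its
    negation), pushing probability mass toward high-F1 parses.
    """
    # Numerically stable softmax over candidate scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Expectation of F1 under the model's distribution over candidates.
    return sum(p * f for p, f in zip(probs, f1_scores))
```

For example, raising the score of a high-F1 candidate increases the expectation, which is exactly the gradient signal a global model trained this way receives.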

Description

Keywords

Journal Title

Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Conference Name

2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Journal ISSN

Volume Title

Publisher

Association for Computational Linguistics

Sponsorship

Xu acknowledges the Carnegie Trust for the Universities of Scotland and the Cambridge Trusts for funding. Clark is supported by ERC Starting Grant DisCoTex (306920) and EPSRC grant EP/I037512/1.