
A pragmatic guide to geoparsing evaluation

Published version
Peer-reviewed


Type

Article

Authors

Pilehvar, Mohammad Taher 
Collier, Nigel 

Abstract

Empirical methods in geoparsing have thus far lacked a standard evaluation framework describing the task, metrics and data used to compare state-of-the-art systems. Evaluation is made further inconsistent, and even unrepresentative of real-world usage, by the lack of distinction between the different types of toponyms, which necessitates new guidelines, a consolidation of metrics and a detailed toponym taxonomy with implications for Named Entity Recognition (NER) and beyond. To address these deficiencies, our manuscript introduces a new framework in three parts. (Part 1) Task definition: clarified via corpus-linguistic analysis proposing a fine-grained Pragmatic Taxonomy of Toponyms. (Part 2) Metrics: discussed and reviewed for a rigorous evaluation, including recommendations for NER/geoparsing practitioners. (Part 3) Evaluation data: shared via a new dataset called GeoWebNews, which provides test/train examples and enables immediate use of our contributions. In addition to fine-grained geotagging and toponym resolution (geocoding), this dataset is also suitable for prototyping and evaluating machine-learning NLP models.
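
For readers unfamiliar with toponym resolution (geocoding) evaluation, the sketch below illustrates two commonly reported distance-based metrics over matched gold/predicted coordinates. It is a minimal illustration only: the function names, example coordinates and the 161 km accuracy threshold are assumptions for demonstration, not the paper's reference implementation.

# Illustrative sketch of common toponym-resolution metrics (not the paper's code):
# mean error distance and accuracy@threshold over (gold, predicted) coordinate pairs.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # Earth radius ~6371 km

def resolution_metrics(gold, predicted, threshold_km=161.0):
    # Mean error distance and accuracy@threshold for matched toponym pairs.
    errors = [haversine_km(g[0], g[1], p[0], p[1]) for g, p in zip(gold, predicted)]
    mean_error = sum(errors) / len(errors)
    acc_at_k = sum(e <= threshold_km for e in errors) / len(errors)
    return {"mean_error_km": mean_error, "accuracy@%dkm" % int(threshold_km): acc_at_k}

# Hypothetical example: one near-miss resolution and one wrong-continent resolution.
gold = [(52.2053, 0.1218), (48.8566, 2.3522)]
pred = [(52.2000, 0.1200), (48.8566, -122.3522)]
print(resolution_metrics(gold, pred))
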

Keywords

Original Paper, Geoparsing, Toponym resolution, Geotagging, Geocoding, Named Entity Recognition, Machine learning, Evaluation framework, Geonames, Toponyms, Natural language understanding, Pragmatics

Journal Title

Language Resources and Evaluation

Journal ISSN

1574-020X (print)
1574-0218 (electronic)

Volume

54

Publisher

Springer Netherlands

Sponsorship

Natural Environment Research Council (NE/M009009/1)
Medical Research Council (MR/M025160/1)
Engineering and Physical Sciences Research Council (EP/M005089/1)