
Sparsity and Sentence Structure in Encoder-Decoder Attention of Summarization Systems

Accepted version
Peer-reviewed

Type

Conference Object

Change log

Authors

Gales, MJF 

Abstract

Transformer models have achieved state-of-the-art results in a wide range of NLP tasks including summarization. Training and inference using large transformer models can be computationally expensive. Previous work has focused on one important bottleneck, the quadratic self-attention mechanism in the encoder. Modified encoder architectures such as LED or LoBART use local attention patterns to address this problem for summarization. In contrast, this work focuses on the transformer's encoder-decoder attention mechanism. The cost of this attention becomes more significant in inference or training approaches that require model-generated histories. First, we examine the complexity of the encoder-decoder attention. We demonstrate empirically that there is a sparse sentence structure in document summarization that can be exploited by constraining the attention mechanism to a subset of input sentences, whilst maintaining system performance. Second, we propose a modified architecture that selects the subset of sentences to constrain the encoder-decoder attention. Experiments are carried out on abstractive summarization tasks, including CNN/DailyMail, XSum, Spotify Podcast, and arXiv.
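The abstract's central idea, constraining the encoder-decoder (cross) attention to a subset of input sentences, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, tensor shapes, and the hard token-level masking scheme below are assumptions made purely for illustration, showing a single-head cross-attention restricted to tokens of selected source sentences.

```python
import torch
import torch.nn.functional as F

def sentence_constrained_cross_attention(query, key, value,
                                          token_sentence_ids, selected_sentences):
    """Single-head cross-attention restricted to tokens of selected input sentences.

    query: (tgt_len, d)            decoder hidden states
    key, value: (src_len, d)       encoder hidden states
    token_sentence_ids: (src_len,) sentence index of each source token
    selected_sentences: (k,)       indices of sentences the decoder may attend to
    """
    # Boolean mask over source tokens: True if the token belongs to a selected sentence.
    allowed = torch.isin(token_sentence_ids, selected_sentences)     # (src_len,)
    scores = query @ key.transpose(0, 1) / key.shape[-1] ** 0.5      # (tgt_len, src_len)
    scores = scores.masked_fill(~allowed, float("-inf"))             # block non-selected sentences
    weights = F.softmax(scores, dim=-1)
    return weights @ value                                           # (tgt_len, d)

# Toy usage: 10 source tokens from 3 sentences; attend only to sentences 0 and 2.
d = 8
query = torch.randn(4, d)
key, value = torch.randn(10, d), torch.randn(10, d)
token_sentence_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
selected = torch.tensor([0, 2])
out = sentence_constrained_cross_attention(query, key, value, token_sentence_ids, selected)
```

Because attention scores are only computed over tokens of the selected sentences (the rest are masked to -inf before the softmax), the cost of the cross-attention scales with the size of that subset rather than the full input length, which is the sparsity the abstract describes.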

Description

Keywords

cs.CL

Journal Title

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)

Conference Name

The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Journal ISSN

Volume Title

Publisher

Sponsorship
Cambridge Assessment (Unknown)