
Natural Language Understanding and Generation for Task-Oriented Dialogue


Type

Thesis

Authors

Tseng, Bo-Hsiang 

Abstract

The success of deep learning methods has stimulated rapid development across many NLP research areas. Still, task-oriented dialogue modelling remains challenging, owing both to the inherent complexity of human language and to the difficulty of the tasks themselves. Moreover, building such systems usually relies on large amounts of data with fine-grained annotations, which in many situations are difficult to obtain. It is thus important for dialogue systems to learn efficiently in low-resource scenarios so that models can still fulfil their tasks effectively. This thesis aims to provide novel methods to tackle these difficulties in dialogue modelling.

To communicate, a dialogue system most commonly converts a semantic representation (e.g., a dialogue act) into natural language, a process known as Natural Language Generation (NLG). A tree-based NLG model is proposed and shown to adapt to unseen domains more easily than other models. This desirable property arises because modelling semantic structure facilitates knowledge sharing between source and target domains. We also show that NLG can be learned jointly with its dual task, natural language understanding (NLU), which maps natural language utterances to their semantic counterparts. Our approach is a stochastic generative model with a latent variable shared between the two tasks, and it can be trained with significantly less data than the individual components require.
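
As a rough illustration of the shared-latent-variable idea, the sketch below (in PyTorch; all module names, dimensions, and the overall parameterisation are our assumptions, not the thesis's actual architecture) lets one latent code z drive both an NLU head that predicts a dialogue act and an NLG decoder that reconstructs the utterance, so supervision for either task shapes the shared posterior:

    import torch
    import torch.nn as nn

    class DualLatentModel(nn.Module):
        """One latent z feeds both NLU (z -> dialogue act) and NLG
        (z -> utterance tokens); a hypothetical sketch, not the thesis model."""

        def __init__(self, vocab, n_acts, hid=256, zdim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, hid)
            self.enc = nn.GRU(hid, hid, batch_first=True)
            self.to_mu = nn.Linear(hid, zdim)
            self.to_logvar = nn.Linear(hid, zdim)
            self.nlu_head = nn.Linear(zdim, n_acts)        # NLU: classify dialogue act
            self.dec = nn.GRU(hid, zdim, batch_first=True) # NLG: decode tokens from z
            self.out = nn.Linear(zdim, vocab)

        def forward(self, utt):                        # utt: (batch, seq) token ids
            emb = self.embed(utt)
            _, h = self.enc(emb)                       # h: (1, batch, hid)
            mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
            act_logits = self.nlu_head(z)              # shared z -> semantics
            dec_out, _ = self.dec(emb, z.unsqueeze(0)) # teacher-forced generation
            return act_logits, self.out(dec_out), mu, logvar

    model = DualLatentModel(vocab=1000, n_acts=20)
    acts, tokens, mu, logvar = model(torch.randint(0, 1000, (8, 12)))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE regulariser

Under this kind of parameterisation, labelled data for either task updates the same encoder and latent space, which is one way such joint training can need less data than training NLU and NLG separately.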

The focus then shifts to a more general setup: end-to-end dialogue modelling, in which a system consumes user utterances and learns to generate responses directly, with intermediate dialogue acts typically serving as auxiliary learning signals for model optimisation. We show that semi-supervised methods originally proposed for computer vision tasks also benefit dialogue modelling. We further address the problem of developing dialogue systems when little training data is available. To this end, we propose a learning framework in which a user model and a dialogue model are jointly optimised: the data generated by their interaction is used to further optimise the two models, improving their performance. Like the approaches above, this reduces the amount of data needed for end-to-end dialogue modelling in low-resource domains.
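
The joint user-system optimisation can be pictured as the self-training loop below. This is a toy sketch under our own assumptions: ToyAgent, interact, and the "bye()" success test stand in for the learned models and whatever success criterion the thesis actually uses to filter generated dialogues.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Dialogue:
        turns: list = field(default_factory=list)
        task_success: bool = False

    class ToyAgent:
        """Stand-in for a learned user or system model (illustrative only)."""
        def fit(self, dialogues):
            pass                                    # a real model would train here
        def respond(self, history):
            return random.choice(["request(area)", "inform(area=centre)", "bye()"])

    def interact(user, system, max_turns=6):
        """Let the two agents converse and record whether the task succeeded."""
        d = Dialogue()
        for _ in range(max_turns):
            d.turns.append(user.respond(d.turns))
            d.turns.append(system.respond(d.turns))
            if d.turns[-1] == "bye()":              # toy notion of task success
                d.task_success = True
                break
        return d

    def joint_self_training(user, system, seed_data, rounds=3, n=100):
        pool = list(seed_data)
        for _ in range(rounds):
            user.fit(pool)                          # 1) optimise both models
            system.fit(pool)
            generated = [interact(user, system) for _ in range(n)]
            pool += [d for d in generated if d.task_success]  # 2) keep successes
        return user, system

    user, system = joint_self_training(ToyAgent(), ToyAgent(), seed_data=[])

The key design point is that each round enlarges the training pool with successful interactions, so both models improve from data neither could have produced alone.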

Lastly, an understanding model is proposed to address the prevalent phenomena of coreference and ellipsis in dialogue. The model first performs coreference resolution and then rewrites the input user utterance into a complete sentence in which coreferent mentions are resolved and omitted information is restored. As a side contribution, the data collected for training the model is released to the research community.
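
As a toy illustration of what the rewriting step produces (the regex heuristic, entity names, and example dialogue below are ours; the thesis model is learned, not rule-based):

    import re

    def rewrite(entities, utterance):
        """Crude stand-in for coreference-aware rewriting: substitute the most
        recently mentioned entity for pronouns (illustrative heuristic only)."""
        antecedent = entities[-1]                  # naive antecedent choice
        return re.sub(r"\b(?:its|it|there|one)\b", antecedent, utterance)

    # Dialogue so far: "I want a Thai restaurant." / "Bangkok City is in the centre."
    print(rewrite(["Bangkok City"], "What is its address?"))
    # -> "What is Bangkok City address?"  (a learned rewriter would be fluent)

The rewritten, self-contained utterance can then be passed to downstream understanding components that need not model dialogue history themselves.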

Date

2021-09-01

Advisors

Byrne, William

Keywords

task-oriented dialogue, machine learning, natural language processing, natural language understanding, natural language generation

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge