
Continual Learning for Affective Robotics






Recent advancements in Artificial Intelligence (AI) and Human-Robot Interaction (HRI) have enabled robots to be integrated into daily human life. Operating in human-centred environments, these robots need to actively participate in the human ‘affective loop’, sensing and interpreting human socio-emotional behaviours while also learning to respond in a manner that fosters human social and emotional wellbeing. Equipping affective robots with learning mechanisms that enable such a robust understanding of human behaviour, as well as of their own role in an interaction, forms the central focus of affective robotics research. Current Machine Learning (ML)-based solutions for realising affective capabilities in robots, be it robust affect perception or behaviour generation, are geared towards generalisation across applications. Pre-trained on large volumes of data, these solutions, although enabling a wide variety of applications, are static and unable to adapt sufficiently to the dynamics of real-world interactions. Affective robots, on the other hand, need personalised interaction capabilities: sensitive to an individual’s socio-emotional behaviour, they must adapt their affective interactions to that individual, expanding their learning on the go to incorporate novel information while ensuring past knowledge is preserved.

Addressing these challenges, this dissertation proposes the novel application of the Continual Learning (CL) paradigm for affective robotics, enabling continual and lifelong adaptation capabilities in robots. It provides the foundational formulations that translate key principles of CL-based adaptation for affective learning in robots. Furthermore, investigating learning at every stage of the ‘affective loop’, it reflects upon the key desiderata for affective robots in terms of continual and personalised affect perception and context-appropriate behaviour generation.

Starting with affect perception, this dissertation presents the first extensive benchmark on Continual Facial Expression Recognition (ConFER), evaluating CL-based approaches for continually learning facial expression classes under different learning settings. Although ConFER enables incremental learning, it does not address personalised affect perception, another key requirement for affective robots. To address this, a novel framework, Continual Learning with Imagination for Facial Expression Recognition (CLIFER), is proposed. Inspired by cognitive processes in the human brain underpinning memory-based learning and mental imagery, CLIFER incrementally learns facial expression classes while personalising towards individual affective expression, using imagination to augment person-specific learning. CLIFER is shown to achieve state-of-the-art (SOTA) results across different benchmark evaluations.

Learning under such dynamic conditions, affective robots need to remain fair and equitable, ensuring no individual (or group) is disadvantaged and the robots’ perception is bias-free. Exploring different domain groups based on gender and race attributes, this dissertation proposes and evaluates CL as an effective strategy to ensure fairness in Facial Expression Recognition (FER) systems, guarding against biases arising from imbalances in data distributions. Benchmark comparisons against SOTA ML-based approaches highlight the superior bias-mitigation capabilities of CL-based methods.

Finally, this dissertation explores sensing and adapting to human affective behaviour during wellbeing coaching sessions as an application scenario for continually learning affective robots. To enable personalised human-robot interactions, the Pepper robot is embedded with CLIFER-based facial affect perception, allowing it to personalise its learning towards individual affective behaviour and dynamically adapt the interaction flow, generating naturalistic responses sensitive to each participant’s affective state. To evaluate this continual personalisation ability in Pepper, a user study with 20 participants is conducted, demonstrating that CL-based personalisation significantly improves participants’ subjective experience of interacting with the robot.

The theoretical formulations, benchmarks, and frameworks presented in this dissertation initiate a novel field of enquiry exploring the benefits of CL-based learning for affective robots. This dissertation aims to serve as a stepping stone for affective robotics research to adopt a continual and personalised learning approach to building fully autonomous and adaptive robots that are purposeful and engaging in their interactions with human users.





Supervisor

Gunes, Hatice


Keywords

Affective Computing, Affective Robotics, Continual Learning, Deep Learning, Facial Expression Recognition, Human Behaviour Understanding, Human-Robot Interaction, Neural Networks, Robotics, Social Robotics


Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

EPSRC (2107412)