Latent Generative Replay for Resource-Efficient Continual Learning of Facial Expressions

Authors
Stoychev, Samuil 
Gunes, Hatice 

Type
Conference Object
Abstract

Real-world Facial Expression Recognition (FER) systems require models to constantly learn and adapt to novel data. Traditional Machine Learning (ML) approaches struggle with such dynamics, as models need to be re-trained from scratch on a combination of both old and new data. Replay-based Continual Learning (CL) provides a solution to this problem, either by storing previously seen data samples in memory and interleaving them with novel data (rehearsal), or by using a generative model to generate pseudo-samples that replay past knowledge (pseudo-rehearsal). Yet, the high memory footprint of rehearsal and the high computational cost of pseudo-rehearsal limit the real-world application of such methods, especially on resource-constrained devices. To address this, we propose Latent Generative Replay (LGR), pseudo-rehearsal of low-dimensional latent features, to mitigate forgetting in a resource-efficient manner. We adapt popular CL strategies to use LGR instead of generating raw pseudo-samples and evaluate them on the CK+, RAF-DB and AffectNet FER benchmarks, where LGR significantly reduces the memory and resource consumption of replay-based CL without compromising model performance.
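
The core idea, informally: instead of replaying full-resolution pseudo-images, a small generator is trained over the compact latent features produced by a frozen feature extractor, and the generated latents are interleaved with new-task latents when updating the classifier head. The sketch below is a minimal PyTorch illustration under stated assumptions; the names (LatentVAE, latent_replay_step), dimensions and the pseudo-labelling choice are illustrative and not taken from the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 128   # size of the frozen extractor's feature vector (assumed)
Z_DIM = 32         # VAE bottleneck size (assumed)
NUM_CLASSES = 7    # e.g. basic facial expression classes

class LatentVAE(nn.Module):
    """Small VAE modelling the distribution of latent features from past tasks."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, Z_DIM)
        self.logvar = nn.Linear(64, Z_DIM)
        self.dec = nn.Sequential(nn.Linear(Z_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, LATENT_DIM))

    def forward(self, h):
        e = self.enc(h)
        mu, logvar = self.mu(e), self.logvar(e)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        # Draw pseudo-latents by decoding noise sampled from the prior.
        return self.dec(torch.randn(n, Z_DIM))

def latent_replay_step(head, opt, h_new, y_new, old_vae, old_head, n_replay=32):
    """One classifier update mixing real new-task latents with pseudo-latents."""
    h_replay = old_vae.sample(n_replay)
    with torch.no_grad():
        # Label pseudo-latents with the previous head (one common choice in
        # generative replay; a conditional generator could supply labels instead).
        y_replay = old_head(h_replay).argmax(dim=1)
    h = torch.cat([h_new, h_replay])
    y = torch.cat([y_new, y_replay])
    loss = F.cross_entropy(head(h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random tensors stand in for the frozen extractor's output
# on a batch of new-task images.
head = nn.Linear(LATENT_DIM, NUM_CLASSES)
old_vae, old_head = LatentVAE(), nn.Linear(LATENT_DIM, NUM_CLASSES)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
h_new = torch.randn(16, LATENT_DIM)
y_new = torch.randint(0, NUM_CLASSES, (16,))
print(latent_replay_step(head, opt, h_new, y_new, old_vae, old_head))
```

Because the generator operates on low-dimensional latents rather than raw images, both its parameter count and the cost of sampling replay data stay small, which is where the resource savings described in the abstract come from.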

Acceptance Date
2022-09-11
Sponsorship
EPSRC (2107412)
Engineering and Physical Sciences Research Council (EP/R030782/1)
EPSRC