Investigating Bias and Fairness in Facial Expression Recognition.
Abstract
Recognition of expressions of emotions and affect from facial images is a well-studied research problem in the fields of affective computing and computer vision, with a large number of datasets available containing facial images and corresponding expression labels. However, virtually none of these datasets have been acquired with consideration of fair distribution across the human population. Therefore, in this work, we undertake a systematic investigation of bias and fairness in facial expression recognition by comparing three different approaches, namely a baseline, an attribute-aware and a disentangled approach, on two well-known datasets, RAF-DB and CelebA. Our results indicate that: (i) data augmentation improves the accuracy of the baseline model, but this alone is unable to mitigate the bias effect; (ii) both the attribute-aware and the disentangled approaches, when equipped with data augmentation, outperform the baseline approach in terms of both accuracy and fairness; (iii) the disentangled approach is the best for mitigating demographic bias; and (iv) the bias mitigation strategies are most beneficial when the attribute distribution is uneven or the subgroup data are imbalanced.
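As a rough illustration of how fairness across demographic subgroups can be quantified in this setting, the minimal sketch below computes per-subgroup expression-recognition accuracy and the gap between the best- and worst-performing subgroup. The function name, the toy data, and the specific gap measure are assumptions for illustration only and are not taken from the paper's evaluation protocol.

```python
import numpy as np

def subgroup_accuracy_and_gap(y_true, y_pred, attribute):
    """Per-subgroup accuracy and the accuracy gap between the best- and
    worst-performing demographic subgroup (e.g. a gender or age label).

    Hypothetical helper for illustration; the paper's own fairness
    metrics may differ.
    """
    accs = {}
    for group in np.unique(attribute):
        mask = attribute == group
        accs[int(group)] = float(np.mean(y_true[mask] == y_pred[mask]))
    fairness_gap = max(accs.values()) - min(accs.values())
    return accs, fairness_gap

# Toy usage with random labels standing in for model predictions
# on a dataset such as RAF-DB or CelebA.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 7, size=1000)     # 7 basic expression classes
y_pred = rng.integers(0, 7, size=1000)
attribute = rng.integers(0, 2, size=1000)  # binary protected attribute
print(subgroup_accuracy_and_gap(y_true, y_pred, attribute))
```

A smaller gap at comparable overall accuracy would indicate better bias mitigation, which is the sense in which the attribute-aware and disentangled approaches are compared against the baseline in the abstract.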