“It’s not Fair!” – Fairness for a Small Dataset of Multi-modal Dyadic Mental Well-being Coaching
Accepted version
Peer-reviewed
Abstract
In recent years, the affective computing research community has put ethics at the centre of its research agenda. However, many of the currently available datasets for affective computing are ‘small’, which makes bias and debiasing analysis challenging. This paper presents the first work to explore bias analysis and mitigation on a small temporal multi-modal dataset for mental well-being by adopting different data augmentation techniques. The contributions of this proof-of-concept work include: i) introducing a novel small temporal multi-modal dataset of dyadic interactions during mental well-being coaching; ii) providing multi-modal and feature importance analyses evaluated via modelling performance and fairness metrics across both high- and low-level features; and iii) proposing a simple and effective data augmentation strategy (MixFeat) to debias the small dataset presented in this paper. We conduct extensive experiments and analyses to compare our proposed method against other baseline data augmentation methods across various uni-modal and multi-modal setups. Our results indicate that, regardless of the dimensionality of the dataset at hand, including a bias analysis section in conference papers is viable. This paper is therefore a call to the community to include a bias analysis section in ACII conference submissions, similar to the ablation studies conducted in papers submitted to major machine learning conferences.
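The abstract names MixFeat only at a high level and does not describe its formulation. As a rough illustration of what a feature-space mixing augmentation for rebalancing a small dataset can look like, the Python sketch below uses a generic Mixup-style convex combination of feature vectors; the function name mix_features, the Beta(alpha, alpha) weighting, and the minority-group oversampling example are illustrative assumptions, not the paper’s exact method.

```python
import numpy as np


def mix_features(x_a, x_b, y_a, y_b, alpha=0.2, rng=None):
    """Mixup-style feature-space augmentation.

    Convexly combines two feature vectors (and their labels) with a
    Beta(alpha, alpha)-distributed weight. This is a generic sketch,
    not the paper's exact MixFeat formulation.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix


# Example (hypothetical data): oversample an under-represented demographic
# group by mixing pairs of its own feature vectors, one plausible way to
# rebalance a small dataset before training.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    minority_feats = rng.normal(size=(10, 128))          # 10 samples, 128-dim features
    minority_labels = rng.integers(0, 2, size=10).astype(float)
    i, j = rng.choice(10, size=2, replace=False)          # pick a pair to mix
    x_new, y_new = mix_features(minority_feats[i], minority_feats[j],
                                minority_labels[i], minority_labels[j], rng=rng)
    print(x_new.shape, y_new)
```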
Journal ISSN
2156-8111
Sponsorship
Alan Turing Institute (ATIPO000004438)

