Towards Gender Fairness for Mental Health Prediction

Accepted version
Peer-reviewed

Type

Conference Object

Change log

Authors

Cheong, J 
Kuzucu, S 
Kalkan, S 

Abstract

Mental health is becoming an increasingly prominent health challenge. Despite a plethora of studies analysing and mitigating bias for a variety of tasks such as face recognition and credit scoring, research on machine learning (ML) fairness for mental health has been sparse to date. In this work, we focus on gender bias in mental health and make the following contributions. First, we examine whether bias exists in existing mental health datasets and algorithms. Our experiments were conducted using the Depresjon, Psykose and D-Vlog datasets. We identify that both data and algorithmic bias exist. Second, we analyse strategies that can be deployed at the pre-processing, in-processing and post-processing stages to mitigate bias, and evaluate their effectiveness. Third, we investigate factors that impact the efficacy of existing bias mitigation strategies and outline recommendations to achieve greater gender fairness for mental health. Upon obtaining counter-intuitive results on the D-Vlog dataset, we undertake further experiments and analyses, and provide practical suggestions to avoid hampering bias mitigation efforts in ML for mental health.

Description

Keywords

Journal Title

IJCAI International Joint Conference on Artificial Intelligence

Conference Name

The 32nd International Joint Conference on Artificial Intelligence

Journal ISSN

1045-0823

Volume Title

Publisher

International Joint Conferences on Artificial Intelligence Organization

Sponsorship

Engineering and Physical Sciences Research Council (EP/R030782/1)
Alan Turing Institute (ATIPO000004438)
Alan Turing Institute PhD Studentship