A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices
Publication Date: 2018-07-19
Journal Title: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Publisher: ACM
Type: Article
This Version: AM
Citation
Speicher, T., Heidari, H., Grgic-Hlaca, N., Gummadi, K., Singla, A., Weller, A., & Zafar, M. (2018). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining https://doi.org/10.1145/3219819.3220046
Abstract
Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population. Our work offers a justified and general framework to compare and contrast the (un)fairness of algorithmic predictors. This unifying approach enables us to quantify unfairness both at the individual and the group level. Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component. Earlier methods are typically designed to tackle only between-group unfairness, which may be justified for legal or other reasons. However, we demonstrate that minimizing exclusively the between-group component may, in fact, increase the within-group component, and hence the overall unfairness. We characterize and illustrate the tradeoffs between our measures of (un)fairness and the prediction accuracy.
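The between-group/within-group decomposition described in the abstract can be sketched with the generalized entropy index, a standard inequality index from economics of the kind the paper builds on. This is a minimal illustration, not the paper's implementation; the benefit values and group labels below are hypothetical.

```python
import numpy as np

def generalized_entropy(b, alpha=2):
    """Generalized entropy index GE(alpha) of a benefit vector b (alpha not in {0, 1})."""
    b = np.asarray(b, dtype=float)
    mu = b.mean()
    return ((b / mu) ** alpha - 1).sum() / (len(b) * alpha * (alpha - 1))

def decompose(b, groups, alpha=2):
    """Split overall GE(alpha) into a between-group and a within-group component.

    The standard identity is GE = GE_between + sum_g (n_g/n) * (mu_g/mu)**alpha * GE_g,
    where GE_between is the index computed as if each individual received
    their group's mean benefit.
    """
    b = np.asarray(b, dtype=float)
    groups = np.asarray(groups)
    n, mu = len(b), b.mean()
    between, within = 0.0, 0.0
    for g in np.unique(groups):
        bg = b[groups == g]
        ng, mug = len(bg), bg.mean()
        between += (ng / n) * ((mug / mu) ** alpha - 1)
        within += (ng / n) * (mug / mu) ** alpha * generalized_entropy(bg, alpha)
    between /= alpha * (alpha - 1)
    return between, within

# Hypothetical per-individual benefits and group membership.
b = np.array([1.0, 2.0, 0.5, 1.5, 2.0, 1.0])
g = np.array([0, 0, 0, 1, 1, 1])

total = generalized_entropy(b)
between, within = decompose(b, g)
assert np.isclose(total, between + within)  # the decomposition is exact
```

The final assertion reflects the tradeoff the abstract highlights: because the total is the sum of the two components, driving the between-group term to zero says nothing about (and can come at the expense of) the within-group term.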
Sponsorship
Leverhulme Trust (RC-2015-067)
Alan Turing Institute (unknown)
Identifiers
External DOI: https://doi.org/10.1145/3219819.3220046
This record's URL: https://www.repository.cam.ac.uk/handle/1810/288102
Rights
Licence: http://www.rioxx.net/licenses/all-rights-reserved