A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices
Authors
Speicher, T
Heidari, H
Grgic-Hlaca, N
Gummadi, KP
Singla, A
Weller, A
Zafar, MB
Publication Date
2018-07-02
Journal Title
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Publisher
ACM
Type
Article
This Version
AM (Accepted Manuscript)
Citation
Speicher, T., Heidari, H., Grgic-Hlaca, N., Gummadi, K., Singla, A., Weller, A., & Zafar, M. (2018). A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining https://doi.org/10.1145/3219819.3220046
Abstract
Discrimination via algorithmic decision making has received considerable
attention. Prior work largely focuses on defining conditions for fairness, but
does not define satisfactory measures of algorithmic unfairness. In this paper,
we focus on the following question: Given two unfair algorithms, how should we
determine which of the two is more unfair? Our core idea is to use existing
inequality indices from economics to measure how unequally the outcomes of an
algorithm benefit different individuals or groups in a population. Our work
offers a justified and general framework to compare and contrast the
(un)fairness of algorithmic predictors. This unifying approach enables us to
quantify unfairness both at the individual and the group level. Further, our
work reveals overlooked tradeoffs between different fairness notions: using our
proposed measures, the overall individual-level unfairness of an algorithm can
be decomposed into a between-group and a within-group component. Earlier
methods are typically designed to tackle only between-group unfairness, which
may be justified for legal or other reasons. However, we demonstrate that
minimizing only the between-group component may in fact increase the
within-group component, and hence the overall unfairness. We characterize and
illustrate the tradeoffs between our measures of (un)fairness and prediction
accuracy.
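The between/within decomposition described in the abstract can be made concrete with a small sketch. The paper's framework rests on generalized entropy indices computed over per-individual benefits; the benefit definition b_i = yhat_i - y_i + 1, the choice alpha = 2, and all variable names below are illustrative assumptions, not details given in this record.

import numpy as np

def generalized_entropy(b, alpha=2.0):
    # GE_alpha(b) = (1 / (n * alpha * (alpha - 1))) * sum_i ((b_i / mu)^alpha - 1)
    # for alpha not in {0, 1}; larger values mean benefits are spread more unequally.
    b = np.asarray(b, dtype=float)
    mu = b.mean()
    return ((b / mu) ** alpha - 1).mean() / (alpha * (alpha - 1))

def between_within(b, groups, alpha=2.0):
    # Additive decomposition of GE_alpha: a between-group term (each benefit
    # replaced by its group mean) plus a weighted sum of within-group indices.
    # The two terms sum exactly to the overall index.
    b, groups = np.asarray(b, dtype=float), np.asarray(groups)
    n, mu = len(b), b.mean()
    labels = np.unique(groups)
    group_mean = {g: b[groups == g].mean() for g in labels}
    between = generalized_entropy([group_mean[g] for g in groups], alpha)
    within = sum(
        (np.sum(groups == g) / n) * (group_mean[g] / mu) ** alpha
        * generalized_entropy(b[groups == g], alpha)
        for g in labels
    )
    return between, within

# Toy usage: benefits from binary labels y and predictions yhat,
# using the (assumed) benefit function b_i = yhat_i - y_i + 1.
y    = np.array([1, 0, 1, 1, 0, 0])
yhat = np.array([1, 1, 0, 1, 0, 1])
b    = yhat - y + 1
grp  = np.array(["A", "A", "A", "B", "B", "B"])
bet, wit = between_within(b, grp)
assert np.isclose(generalized_entropy(b), bet + wit)

Note how the sketch makes the abstract's warning tangible: the between-group term can be driven to zero while the within-group term, and hence the total index, stays large or even grows.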
Keywords
cs.LG, cs.CY, stat.ML
Sponsorship
Leverhulme Trust (RC-2015-067)
Alan Turing Institute (unknown)
Identifiers
External DOI: https://doi.org/10.1145/3219819.3220046
This record's URL: https://www.repository.cam.ac.uk/handle/1810/288102
Rights
Licence: http://www.rioxx.net/licenses/all-rights-reserved