
Age-related bias and artificial intelligence: a scoping review

Published version
Peer-reviewed

Authors

Chu, Charlene H 
Donato-Woodger, Simon 
Nyrup, Rune 

Abstract

There are widespread concerns about bias and discriminatory output related to artificial intelligence (AI), which may propagate social biases and disparities. Digital ageism refers to ageism reflected in the design, development, and implementation of AI systems and technologies, and in their resultant data. Currently, the prevalence of digital ageism and the sources of AI bias are unknown. A scoping review informed by the Arksey and O'Malley methodology was undertaken to explore age-related bias in AI systems; identify how AI systems encode, produce, or reinforce age-related bias; establish what is known about digital ageism; and examine the social, ethical, and legal implications of age-related bias. A comprehensive search strategy covering five electronic databases and grey literature sources, including legal sources, was conducted. A framework of machine learning biases spanning from data to user by Mehrabi et al. (2021) is used to present the findings. The academic search yielded 7595 articles that were screened against the inclusion criteria; 307 proceeded to full-text screening, and 49 were included in this review. The grey literature search yielded 2639 documents; 235 proceeded to full-text screening, and 25 were found to be relevant to the research questions pertaining to age and AI. In total, 74 documents were included in this review. The results show that the most common AI applications that intersected with age were age recognition and facial recognition systems. The most frequently used machine learning algorithms were convolutional neural networks and support vector machines. Bias was most frequently introduced in the early 'data to algorithm' phase and in the 'algorithm to user' phase, specifically through representation bias (n = 33) and evaluation bias (n = 29), respectively (Mehrabi et al. 2021).
The review concludes with a discussion of the ethical implications for the field of AI and recommendations for future research.

Journal Title

Humanities and Social Sciences Communications

Journal ISSN

2662-9992

Volume Title

10

Publisher

Springer Science and Business Media LLC