Risk and the future of AI: algorithmic bias, data colonialism, and marginalization
Abstract
Artificial Intelligence (AI), and discussions surrounding its potential uses, have escalated into polarized debates about the future, capturing a wide range of utopian and dystopian imaginations (Bove, 2023; Bucknall & Dori-Hacohen, 2022). In this editorial, we focus on the duality of risk, encompassing both potential benefit and potential harm (Roff, 2019), associated with the future of AI. This is of critical importance in developing our understanding of AI as an emerging technology, one whose uses and effects are still uncertain and have yet to stabilize around a recognizable set of patterns (Bailey, Faraj, Hinds, Leonardi, & von Krogh, 2022). A particular focus is on algorithmic bias, which is a direct function of the quality of the data that AI algorithms are trained on; such algorithms are effective only for those populations for which training data are available. As we discuss further below, algorithmic bias has important implications for reshaping diversity, inclusion, and marginalization. Additionally, we highlight the potential harm experienced by the largely invisible workers in the Global South who clean data and refine algorithms for the benefit of those using them in the Global North. This emerging trend of data colonialism (Couldry & Mejias, 2019) is of particular significance given the fundamental reliance of AI technologies on data, and because organizations gain access to data, control it, and develop algorithms in ways that not only accentuate the digital divide but also increase data inequality (Zheng & Walsham, 2021). Based on these reflections, we propose a relational risk perspective as a useful lens for studying the dark side of AI, namely algorithmic bias, data colonialism, and increasing marginalization, which remains largely under-represented in the literature.