
The Limits of Value Transparency in Machine Learning

Accepted version
Peer-reviewed

Type

Article

Abstract

Transparency has been proposed as a way of handling value-ladenness in machine learning (ML). This article highlights limits to this strategy. I distinguish three kinds of transparency: epistemic transparency, retrospective value transparency, and prospective value transparency. These correspond to different approaches to transparency in ML, including so-called explainable artificial intelligence and governance based on disclosing information about the design process. I discuss three sources of value-ladenness in ML: problem formulation, inductive risk, and specification gaming. I argue that retrospective value transparency is well-suited only for dealing with the first, while the third raises serious challenges even for prospective value transparency.

Keywords

5003 Philosophy, 50 Philosophy and Religious Studies, 5002 History and Philosophy Of Specific Fields, Data Science, Machine Learning and Artificial Intelligence, Networking and Information Technology R&D (NITRD)

Journal Title

PHILOSOPHY OF SCIENCE

Journal ISSN

0031-8248
1539-767X

Publisher

Cambridge University Press (CUP)

Sponsorship

Wellcome Trust (213660/Z/18/Z)
Leverhulme Trust (RC-2015-067)
This research was funded in whole, or in part, by the Wellcome Trust [Grant number 213660/Z/18/Z] and the Leverhulme Trust, through the Leverhulme Centre for the Future of Intelligence.