'Explaining' machine learning reveals policy challenges
Accepted version
Peer-reviewed
Authors
Coyle, Diane https://orcid.org/0000-0001-7243-1641
Weller, Adrian https://orcid.org/0000-0003-1915-7158
Abstract
There is a growing demand to be able to “explain” machine learning (ML) systems’ decisions and actions to human users, particularly when used in contexts where decisions have substantial implications for those affected and where there is a requirement for political accountability or legal compliance. Explainability is often discussed as a technical challenge in designing ML systems and decision procedures, to improve understanding of what is typically a “black box” phenomenon. But some of the most difficult challenges are non-technical and raise questions about the broader accountability of organizations using ML in their decision-making.
Keywords
46 Information and Computing Sciences, 4407 Policy and Administration, 4408 Political Science, 44 Human Society, Machine Learning and Artificial Intelligence, Clinical Research, 4 Quality Education
Journal Title
Science
Journal ISSN
0036-8075
1095-9203
Volume Title
368
Publisher
AAAS
Sponsorship
Leverhulme Trust (RC-2015-067)
Alan Turing Institute (Unknown)
David MacKay Newton research fellowship at Darwin College
Leverhulme Trust via CFI