'Explaining' machine learning reveals policy challenges

Accepted version
Peer-reviewed

Type

Article

Abstract

There is a growing demand to be able to “explain” machine learning (ML) systems’ decisions and actions to human users, particularly when used in contexts where decisions have substantial implications for those affected and where there is a requirement for political accountability or legal compliance. Explainability is often discussed as a technical challenge in designing ML systems and decision procedures, to improve understanding of what is typically a “black box” phenomenon. But some of the most difficult challenges are non-technical and raise questions about the broader accountability of organizations using ML in their decision-making.

Keywords

46 Information and Computing Sciences, 4407 Policy and Administration, 4408 Political Science, 44 Human Society, Machine Learning and Artificial Intelligence, Clinical Research, 4 Quality Education

Journal Title

Science

Journal ISSN

0036-8075
1095-9203

Volume Title

368

Publisher

AAAS

Sponsorship

Leverhulme Trust (RC-2015-067)
Alan Turing Institute (Unknown)
David MacKay Newton research fellowship at Darwin College
Leverhulme Trust via CFI