Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Accepted version
Peer-reviewed

Type

Article

Authors

Zerilli, J
Knott, A
Maclaurin, J
Gavaghan, C

Abstract

We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better than human decision-making, and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim for a standard any higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance,” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.

Keywords

5003 Philosophy, 50 Philosophy and Religious Studies, 16 Peace, Justice and Strong Institutions

Journal Title

Philosophy and Technology

Journal ISSN

2210-5433 (print)
2210-5441 (online)

Volume

32

Publisher

Springer Science and Business Media LLC

Rights

All rights reserved