Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?
Philosophy and Technology
Springer Science and Business Media LLC
Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy and Technology, 32(4), 661–683. https://doi.org/10.1007/s13347-018-0330-6
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better, and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim for a standard any higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance,” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
This record's URL: https://www.repository.cam.ac.uk/handle/1810/299973
All rights reserved