
Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque

Published version
Peer-reviewed

Abstract

Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and that, since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.

Description

Funder: Rheinische Friedrich-Wilhelms-Universität Bonn (1040)

Journal Title

AI and Ethics

Journal ISSN

2730-5953
2730-5961

Volume Title

3

Publisher

Springer Science and Business Media LLC

Rights and licensing

Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by/4.0/