Explainable Machine Learning, Patient Autonomy, and Clinical Reasoning
Accepted version
Peer-reviewed
Abstract
Clinical decision support systems based on complex machine learning models render opaque the rationale and value commitments that underpin diagnoses and suggested treatments. This creates a tension with the prevailing view in medical ethics, which emphasises patients making autonomous decisions based on an understanding of relevant medical evidence alongside their beliefs and values. Calls for algorithmic explainability in clinical settings are partly motivated by this tension. The question is what needs to be explained, to whom, in what way, and when, to integrate machine learning systems into the clinical process in a way that is consistent with patient-centred decision-making. In this chapter, we review the tension and argue that answers to these questions depend on more fundamental issues in the philosophy of medicine regarding the logic of clinical reasoning. We outline and defend a broadly Peircean account which, we argue, captures ethically salient aspects of the interplay between clinicians and decision support systems, and use it to shed light on the particulars of the explainability challenge.
Sponsorship
Leverhulme Trust (RC-2015-067)