Explainable Machine Learning, Patient Autonomy, and Clinical Reasoning

Accepted version
Peer-reviewed

Type

Book chapter

Change log

Authors

Keeling, Geoff 

Abstract

Clinical decision support systems based on complex machine learning models render opaque the rationale and value commitments that underpin diagnoses and suggested treatments. This creates a tension with the prevailing view in medical ethics, which emphasises patients making autonomous decisions based on an understanding of relevant medical evidence alongside their beliefs and values. Calls for algorithmic explainability in clinical settings are partly motivated by this tension. The question is what needs to be explained, to whom, in what way, and when, to integrate machine learning systems into the clinical process in a way that is consistent with patient-centred decision-making. In this chapter, we review the tension and argue that answers to these questions depend on more fundamental issues in the philosophy of medicine regarding the logic of clinical reasoning. We outline and defend a broadly Peircean account which, we argue, captures ethically salient aspects of the interplay between clinicians and decision support systems, and use it to shed light on the particulars of the explainability challenge.

Description

Title

Explainable Machine Learning, Patient Autonomy, and Clinical Reasoning

Keywords

Abduction, Algorithmic Opacity, C.S. Peirce, Clinical Reasoning, Explainability, Informed Consent, Patient Autonomy, Shared Decision Making

Is Part Of

Oxford Handbook of Digital Ethics

Book type

Edited volume

Publisher

Oxford University Press

ISBN

9780191890437

Sponsorship

Wellcome Trust (213660/Z/18/Z)
Leverhulme Trust (RC-2015-067)