
Algorithmic Decision-Making and the Control Problem

Published version
Peer-reviewed

Authors

Knott, Alistair 
Maclaurin, James 
Gavaghan, Colin 

Abstract

The danger of human operators devolving responsibility to machines, and failing to detect cases where the machines fail, has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.

Description

Keywords

Article, Control, Artificial intelligence, Human-in-the-loop, Human–machine systems, Human–computer interaction, Human factors, Ironies of automation, Machine learning

Journal Title

Minds and Machines

Conference Name

Journal ISSN

0924-6495
1572-8641

Volume

29

Publisher

Springer Netherlands

Sponsorship

New Zealand Law Foundation (2016/ILP/10)