Show simple item record

dc.contributor.author: Zerilli, John
dc.contributor.author: Knott, Alistair
dc.contributor.author: Maclaurin, James
dc.contributor.author: Gavaghan, Colin
dc.date.accessioned: 2020-12-10T16:20:31Z
dc.date.available: 2020-12-10T16:20:31Z
dc.date.issued: 2019-12-11
dc.date.submitted: 2019-01-23
dc.identifier.issn: 0924-6495
dc.identifier.other: s11023-019-09513-7
dc.identifier.other: 9513
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/314956
dc.description.abstract: The danger of human operators devolving responsibility to machines and failing to detect cases where the machines err has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”: the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. Although the control problem has been investigated for some time, its manifestation in machine learning contexts has so far received little serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding its design and implementation.
dc.language: en
dc.publisher: Springer Netherlands
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Article
dc.subject: Control
dc.subject: Artificial intelligence
dc.subject: Human-in-the-loop
dc.subject: Human–machine systems
dc.subject: Human–computer interaction
dc.subject: Human factors
dc.subject: Ironies of automation
dc.subject: Machine learning
dc.title: Algorithmic Decision-Making and the Control Problem
dc.type: Article
dc.date.updated: 2020-12-10T16:20:30Z
prism.endingPage: 578
prism.issueIdentifier: 4
prism.publicationName: Minds and Machines
prism.startingPage: 555
prism.volume: 29
dc.identifier.doi: 10.17863/CAM.62063
dcterms.dateAccepted: 2019-12-03
rioxxterms.versionofrecord: 10.1007/s11023-019-09513-7
rioxxterms.version: VoR
rioxxterms.licenseref.uri: http://creativecommons.org/licenses/by/4.0/
dc.contributor.orcid: Zerilli, John [0000-0002-7010-2278]
dc.identifier.eissn: 1572-8641
pubs.funder-project-id: New Zealand Law Foundation (2016/ILP/10)



Attribution 4.0 International (CC BY 4.0)
Except where otherwise noted, this item's licence is described as Attribution 4.0 International (CC BY 4.0)