Big bang, low bar – risk assessment in the public arena

Abstract

One of the basic principles of risk management is that we should always keep an eye on ways that things could go badly wrong, even if they seem unlikely. The more disastrous a potential failure, the more improbable it needs to be before we can safely ignore it. This principle may seem obvious, but it is easily overlooked in public discourse about risk, even by well-qualified commentators who should certainly know better. The present piece is prompted by neglect of the principle in recent discussions about the potential existential risks of artificial intelligence. The failing is not peculiar to this case, but recent debates in this area provide particularly stark examples of how easily the principle can be overlooked.
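The principle the abstract states admits a minimal formalization (a sketch added here for clarity, not drawn from the paper itself; the symbols p, D and τ are assumptions): if a potential failure has probability p and damage D, and we treat its expected loss pD as negligible only when it falls below some tolerance τ, then

$$ p \, D < \tau \quad \Longleftrightarrow \quad p < \frac{\tau}{D} $$

so the probability below which a risk may safely be ignored shrinks as the damage D grows. For existential-scale damage the bar is correspondingly low, which is the sense of the article's title.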

Description

Peer reviewed: True


Publication status: Published

Keywords

artificial intelligence, public discourse about science policy, risk management

Journal Title

R Soc Open Sci

Journal ISSN

2054-5703

Volume

11

Publisher

The Royal Society