The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation


Type
Report
Change log
Authors
Brundage, Miles 
Clark, Jack 
Toner, Helen 
Eckersley, Peter 
Abstract

This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.

Description
Keywords
cs.AI, cs.CR, cs.CY
Is Part Of
Sponsorship
February Foundation (unknown)
Silicon Valley Community Foundation (via University of Oxford) (unknown)
Templeton World Charity Foundation (TWCF) (177155)
Future of Humanity Institute, University of Oxford
Centre for the Study of Existential Risk, University of Cambridge
Center for a New American Security
Electronic Frontier Foundation
OpenAI
The Future of Life Institute is acknowledged as a funder.