AI Paradigms and AI Safety: Mapping Artefacts and Techniques to Safety Issues


Type
Conference Object
Authors
Hernández-Orallo, José
Martínez-Plumed, Fernando
Avin, Shahar  https://orcid.org/0000-0001-7859-1507
Whittlestone, Jessica 
Ó hÉigeartaigh, Seán
Abstract

AI safety often analyses a risk or safety issue, such as interruptibility, under a particular AI paradigm, such as reinforcement learning. But what is an AI paradigm and how does it affect the understanding and implications of the safety issue? Is AI safety research covering the most representative paradigms and the right combinations of paradigms with safety issues? Will current research directions in AI safety be able to anticipate more capable and powerful systems yet to come? In this paper we analyse these questions, introducing a distinction between two types of paradigms in AI: artefacts and techniques. We then use experimental data on research and media documents from AI Topics, an official publication of the AAAI, to examine how safety research is distributed across artefacts and techniques. We observe that AI safety research is not sufficiently anticipatory, and is heavily weighted towards certain research paradigms. We identify a need for AI safety to be more explicit about the artefacts and techniques for which a particular issue may be applicable, in order to identify gaps and cover a broader range of issues.

Journal Title
24th European Conference on Artificial Intelligence - ECAI 2020
Conference Name
24th European Conference on Artificial Intelligence (ECAI 2020)
Journal ISSN
0922-6389
1879-8314
Volume Title
325
Publisher
IOS Press