Situational Awareness and Adversarial Machine Learning -- Robots, Manners, and Stress
Accepted version
Peer-reviewed
Abstract
Both humans and animals infer intent -- a dog knows the difference between a kick and a stumble. Over thousands of generations, we have evolved biological and cultural mechanisms to quickly assess the threat posed by another human or an animal, and animals that interact with humans have similar mechanisms. We also have a keen awareness of whether our environment is friendly or hostile. As robots, and other automata that rely on machine learning, become widespread, they will raise similar but more complex questions of signaling and detecting intent. In recent research, we have been exploring how adversarial samples can be detected more easily than they can be blocked, allowing systems to fall back to more cautious modes of operation. The interaction between machine-learning components and service-denial attacks is a fascinating subject that few have studied so far. In short, while classical system resilience may be seen in terms of layered defence and redundancy, the resilience of machine-learning systems may be much more human. Combining the two intelligently could be a new frontier for research, with a focus on situational awareness. We may see new security protocols in which communicating intent matters more than communicating identity, not only for the security and safety of interaction between humans and robots, but also between robots and the wider environment. This gives a new and perhaps more realistic angle on both robot ethics and adversarial machine learning.