On Security of Machine Learning


Type

Thesis

Authors

Shumailov, Ilia 

Abstract

Recent advances in machine learning (ML) have changed the world. Where humans used to dictate the rules, now machines hoard data and make decisions. Although this change has brought real benefits, it has automated a significant amount of human-based interaction, opening it up to manipulation. Research has established that machine-learning models are extremely vulnerable to adversarial perturbations, and particularly to changes to their inputs that are imperceptible to humans but force the models to behave in unexpected ways. In this dissertation we take a rather unorthodox approach to ML security, and look at the current state of machine learning through the lens of computer security. As a result, we find a large number of new attacks and problems lurking at the intersection of systems security and machine learning. In what follows, we describe the current state of the literature, highlight where we are still missing important knowledge, and describe several novel contributions to the field. We find that some characteristics of the field make current security methodology much less applicable, leaving modern ML systems vulnerable to an extremely wide variety of attacks. Our main contribution comes in the form of availability attacks on ML -- attacks that target the latency of inference or model training. We also explain that there are plenty of other intersections with the model's environment that could be exploited by an attacker. One important insight is that the inherent limitations of ML models must be understood, acknowledged, and mitigated by compensating controls in the larger systems that use them as components.

Description

Date

2021-09-30

Advisors

Anderson, Ross

Keywords

computer security, machine learning

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

Bosch Research Foundation (Bosch-Forschungsstiftung im Stifterverband)