Bayesian autoencoders for anomaly detection: Design, uncertainty quantification, and explainability with industrial applications


Type

Thesis

Authors

Yong, Bang Xiang 

Abstract

Anomaly detection is vital to many industrial applications, such as condition monitoring, quality monitoring, and demand forecasting. The increasing capabilities of data storage and processing systems have facilitated the use of powerful models such as autoencoders (AEs), a class of neural networks, to achieve state-of-the-art results in anomaly detection. Nevertheless, there are growing concerns regarding the safety and trustworthiness of AEs: recent studies have reported surprising failures of AEs on seemingly trivial benchmarks, and existing AEs lack the capability to quantify uncertainty or to explain why a prediction is made, eroding trust in their adoption. To address these research gaps, this thesis contributes to the development of Bayesian autoencoders (BAEs) in three ways: (1) formulation and design, (2) uncertainty quantification, and (3) explainability. BAEs ground design and analysis in a well-studied probabilistic foundation and use Bayesian model averaging to improve detection performance. The thesis compares various BAE design choices: the Bernoulli likelihood is found to cause unreliable performance, which alternative likelihoods remedy. In addition, non-bottlenecked architectures improve performance, contradicting the conventional belief that a bottleneck is necessary. Next, the formulation of BAEs is extended to quantify the uncertainty of anomaly detection, capturing both epistemic and aleatoric components. Communicating uncertainty reveals when predictions are doubtful; filtering out uncertain predictions leaves more accurate ones. To improve the explainability of BAEs, two feature attribution methods are developed based on the mean and epistemic uncertainty of log-likelihood estimates. The thesis also proposes the "Coalitional BAE", which improves explainability by reducing misleading explanations stemming from correlated outputs. The BAEs are applied to benchmark datasets and industrial case studies for condition monitoring and quality inspection. The proposed BAEs significantly outperform deterministic AEs for anomaly detection in terms of accuracy, uncertainty quantification, and explainability. Future pilot studies should investigate the limitations and feasibility of deploying the methods in real-world systems.
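
The core idea summarised above, averaging anomaly scores over samples from a posterior on autoencoder parameters and reading their disagreement as epistemic uncertainty, can be illustrated with a brief sketch. This is a hypothetical, minimal illustration rather than the thesis implementation: it assumes an ensemble approximation of the posterior and a Gaussian likelihood with unit variance, and all names (AE, bae_scores, layer sizes) are illustrative choices, not identifiers from the thesis.

```python
# Minimal sketch: ensemble-approximated Bayesian autoencoder for anomaly scoring.
# Assumes a Gaussian likelihood with fixed unit variance; the ensemble stands in
# for samples from the parameter posterior.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        # Non-bottlenecked architecture: hidden layer wider than the input.
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def nll_per_sample(model, x):
    # Gaussian negative log-likelihood up to a constant: squared reconstruction error.
    recon = model(x)
    return ((x - recon) ** 2).sum(dim=1)

def bae_scores(models, x):
    # Stack per-model NLL estimates: shape (n_models, n_samples).
    nll = torch.stack([nll_per_sample(m, x) for m in models])
    mean_nll = nll.mean(dim=0)   # Bayesian model average -> anomaly score
    epistemic = nll.var(dim=0)   # disagreement across posterior samples
    return mean_nll, epistemic

# Hypothetical usage: train each member on normal data, flag samples with high
# mean NLL, and treat samples with high epistemic uncertainty as doubtful.
if __name__ == "__main__":
    torch.manual_seed(0)
    x_train = torch.randn(256, 10)
    models = [AE(n_features=10) for _ in range(5)]
    for m in models:
        opt = torch.optim.Adam(m.parameters(), lr=1e-3)
        for _ in range(200):
            opt.zero_grad()
            loss = nll_per_sample(m, x_train).mean()
            loss.backward()
            opt.step()
    x_test = torch.cat([torch.randn(8, 10), torch.randn(8, 10) + 4.0])
    score, uncertainty = bae_scores(models, x_test)
    print(score, uncertainty)
```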

Date

2022-08-01

Advisors

Brintrup, Alexandra

Keywords

anomaly detection, bayesian autoencoder, explainable ai, uncertainty quantification, unsupervised neural networks

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

European Association of Metrology Institutes (EURAMET) (17IND12)
Public Service Department Malaysia