Coalitional Bayesian autoencoders: Towards explainable unsupervised deep learning with applications to condition monitoring under covariate shift
Publication Date
2022
Journal Title
Applied Soft Computing
ISSN
1568-4946
Publisher
Elsevier BV
Type
Article
This Version
AM
Citation
Yong, B., & Brintrup, A. (2022). Coalitional Bayesian autoencoders: Towards explainable unsupervised deep learning with applications to condition monitoring under covariate shift. Applied Soft Computing. https://doi.org/10.1016/j.asoc.2022.108912
Abstract
This paper aims to improve the explainability of autoencoder (AE) predictions by proposing two novel explanation methods based on the mean and epistemic uncertainty of log-likelihood estimates, which arise naturally from the probabilistic formulation of the AE, the Bayesian autoencoder (BAE). These formulations contrast with conventional post-hoc explanation methods for AEs, which incur additional modelling effort and implementations. We further extend the methods to sensor-based explanations, aggregating the explanations at the sensor level instead of the lower feature level.
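The two explanation methods described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the ensemble here is a hypothetical stand-in for a trained BAE (whose members approximate posterior samples over AE parameters), the Gaussian likelihood with unit variance is an assumption for illustration, and the sensor-to-feature mapping is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained BAE: M ensemble members, each giving
# a slightly different reconstruction of the input (posterior samples).
M, n_features = 5, 6

def reconstruct(x, m):
    # Placeholder for ensemble member m's reconstruction.
    return x + 0.1 * rng.standard_normal(x.shape)

x = rng.standard_normal(n_features)

# Per-feature negative log-likelihood for each ensemble member, assuming an
# isotropic Gaussian likelihood with unit variance (constants dropped).
nll = np.stack([0.5 * (x - reconstruct(x, m)) ** 2 for m in range(M)])

# Explanation 1: mean of the per-feature NLL across the ensemble.
mean_expl = nll.mean(axis=0)

# Explanation 2: epistemic uncertainty, taken here as the variance of the
# per-feature NLL across ensemble members.
var_expl = nll.var(axis=0)

# Sensor-level aggregation: sum feature attributions per sensor, assuming a
# hypothetical grouping of features 0-2 under sensor A and 3-5 under sensor B.
sensors = {"A": [0, 1, 2], "B": [3, 4, 5]}
sensor_mean = {s: mean_expl[idx].sum() for s, idx in sensors.items()}
sensor_var = {s: var_expl[idx].sum() for s, idx in sensors.items()}
```

High values of `mean_expl` flag features the BAE reconstructs poorly on average, while high `var_expl` flags features the ensemble disagrees on; the sensor-level sums attribute an anomaly to whole sensors rather than individual features.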
Embargo Lift Date
2023-04-01
Identifiers
External DOI: https://doi.org/10.1016/j.asoc.2022.108912
This record's URL: https://www.repository.cam.ac.uk/handle/1810/336630
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Licence URL: https://creativecommons.org/licenses/by-nc-nd/4.0/