
Getting a CLUE: A Method for Explaining Uncertainty Estimates

Accepted version
Peer-reviewed

Type

Conference Object

Authors

Antorán, J 
Bhatt, U 
Adel, T 
Hernández-Lobato, JM 

Abstract

Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems. However, there is little work at the intersection of these two areas. We address this gap by proposing a novel method for interpreting uncertainty estimates from differentiable probabilistic models, such as Bayesian Neural Networks (BNNs). Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold, such that a BNN becomes more confident about the input's prediction. We validate CLUE through 1) a novel framework for evaluating counterfactual explanations of uncertainty, 2) a series of ablation experiments, and 3) a user study. Our experiments show that CLUE outperforms baselines and enables practitioners to better understand which input patterns are responsible for predictive uncertainty.
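
The abstract only summarizes the idea at a high level. As an illustration only, not the authors' implementation, the snippet below sketches what a counterfactual search of this kind could look like in PyTorch: it assumes a pretrained VAE-style encoder/decoder used to keep candidates on the data manifold, and a BNN whose predictive entropy is the uncertainty being explained. All names (encoder, decoder, predictive_entropy, distance_weight) are hypothetical placeholders.

import torch

def latent_uncertainty_counterfactual(x, encoder, decoder, predictive_entropy,
                                       distance_weight=1.0, steps=200, lr=0.1):
    # Hypothetical sketch: search a generative model's latent space for a
    # nearby input on which the predictive model is more confident.
    # Start the search from the latent code of the original input.
    z = encoder(x).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_cf = decoder(z)  # candidate counterfactual, decoded so it stays near the data manifold
        # Trade off the BNN's predictive uncertainty against distance from the original input.
        loss = predictive_entropy(x_cf).mean() + distance_weight * (x_cf - x).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The decoded optimum is a more confidently handled input; its difference
    # from x indicates which input patterns drove the uncertainty.
    return decoder(z).detach()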

Journal Title

ICLR 2021 - 9th International Conference on Learning Representations

Conference Name

ICLR 2021: Ninth International Conference on Learning Representations

Rights

All rights reserved