Advances in Bayesian Machine Learning: From Uncertainty to Decision Making
Abstract
Bayesian uncertainty quantification is a key ingredient in many machine learning applications. To this end, approximate inference algorithms have been developed to perform Bayesian inference at relatively low computational cost. However, despite recent advances in scaling approximate inference to “big model” and “big data” settings, a number of important research questions remain open.
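As background (this snippet is not from the thesis itself), the following is a minimal sketch of approximate inference in this spirit: mean-field variational inference on a toy conjugate-Gaussian model, where the exact posterior is known and can be used as a check. The model, hyperparameters, and all names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: theta ~ N(0, 1) prior, y_i | theta ~ N(theta, 1) likelihood.
    # The exact posterior is Gaussian, so the approximation can be verified.
    y = rng.normal(loc=1.5, scale=1.0, size=20)
    n = len(y)

    # Variational family q(theta) = N(mu, sigma^2), fitted by maximizing the
    # ELBO with stochastic gradients and the reparameterization trick.
    mu, log_sigma = 0.0, 0.0
    lr, n_samples = 0.01, 64
    for step in range(2000):
        eps = rng.standard_normal(n_samples)
        sigma = np.exp(log_sigma)
        theta = mu + sigma * eps                 # reparameterized samples
        # d/d theta of log p(y, theta): -theta (prior) + sum_i (y_i - theta).
        dlogp = -theta + np.sum(y[None, :] - theta[:, None], axis=1)
        mu += lr * np.mean(dlogp)                # pathwise gradient for mu
        # Pathwise gradient for log_sigma; the +1 comes from q's entropy term.
        log_sigma += lr * (np.mean(dlogp * eps) * sigma + 1.0)

    # Exact conjugate posterior: N(n * ybar / (n + 1), 1 / (n + 1)).
    print(f"VI:    mu={mu:.3f}, sigma={np.exp(log_sigma):.3f}")
    print(f"Exact: mu={n * y.mean() / (n + 1):.3f}, sigma={(1 / (n + 1)) ** 0.5:.3f}")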
In this work, we propose new research directions and technical contributions towards these research questions. This thesis is organized into two parts (Theme A and Theme B). In Theme A, we consider quantifying model uncertainty in the supervised learning setting. To sidestep some of the difficulties of parameter-space inference, we propose a new research direction called function-space approximate inference: by treating supervised probabilistic models as stochastic processes (i.e., measures over functions), we approximate the true posterior over predictive functions with another class of (simpler) stochastic processes. We present two methodologies for function-space inference and demonstrate that they return better uncertainty estimates, as well as improved empirical performance, on complicated models.
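The thesis's own algorithms are not reproduced here; as a minimal sketch of the function-space view under toy assumptions, the snippet below compares two stochastic processes (a GP and a random-feature Bayesian linear model, both illustrative choices) via the closed-form KL divergence between their Gaussian marginals at a finite set of measurement points. The point being illustrated is that the divergence is measured between distributions over function values rather than over network weights.

    import numpy as np

    rng = np.random.default_rng(1)

    def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
        d = X1[:, None] - X2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def gaussian_kl(cov_q, cov_p):
        """KL( N(0, cov_q) || N(0, cov_p) ): divergence between the two
        processes' finite-dimensional marginals at the measurement points."""
        k = cov_q.shape[0]
        trace_term = np.trace(np.linalg.solve(cov_p, cov_q))
        _, logdet_q = np.linalg.slogdet(cov_q)
        _, logdet_p = np.linalg.slogdet(cov_p)
        return 0.5 * (trace_term - k + logdet_p - logdet_q)

    # "True" process p: a zero-mean GP with an RBF kernel, evaluated at a
    # finite set of measurement points X.
    X = np.linspace(-3, 3, 25)
    jitter = 1e-6 * np.eye(len(X))
    cov_p = rbf_kernel(X, X) + jitter

    # Approximating process q: a Bayesian linear model on random Fourier
    # features, f(x) = w . phi(x) with w ~ N(0, s^2 I). Its marginal at X is
    # N(0, s^2 Phi Phi^T), so the function-space KL is closed-form and we
    # can search over the prior scale s for the best match.
    n_feat = 200
    omega = rng.standard_normal(n_feat)          # matches lengthscale 1.0
    b = rng.uniform(0, 2 * np.pi, n_feat)
    Phi = np.sqrt(2.0 / n_feat) * np.cos(np.outer(X, omega) + b)

    for s in [0.5, 1.0, 2.0]:
        cov_q = s ** 2 * (Phi @ Phi.T) + jitter
        print(f"prior scale s={s}: function-space KL at X = {gaussian_kl(cov_q, cov_p):.2f}")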
In Theme B, we consider quantifying missing-data uncertainty in the unsupervised learning setting. We propose a new approach to quantifying missing-data uncertainty based on deep generative models, which sidesteps the computational burden of traditional methods and performs accurate, scalable missing-data imputation. Furthermore, by utilizing the uncertainty estimates returned by the generative models, we propose an information-theoretic framework for efficient, scalable, and personalized active information acquisition. This allows us to maximally reduce missing-data uncertainty and to make better decisions with the newly acquired information.
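The thesis builds such acquisition on deep generative models of partially observed data; as an illustrative stand-in (not the thesis's method), the sketch below applies the same greedy expected-information-gain idea to a toy Gaussian joint model, where all conditionals, and hence the posterior variance of the target, are exact.

    import numpy as np

    rng = np.random.default_rng(2)

    def conditional_cov(cov, obs_idx, rest_idx):
        """Covariance of the remaining variables given the observed ones
        (Schur complement of a joint Gaussian covariance)."""
        A = cov[np.ix_(rest_idx, rest_idx)]
        B = cov[np.ix_(rest_idx, obs_idx)]
        C = cov[np.ix_(obs_idx, obs_idx)]
        return A - B @ np.linalg.solve(C, B.T)

    # Toy joint model over (x_0, x_1, x_2, y): a correlated Gaussian, so every
    # conditional is closed-form. For Gaussians the posterior variance of y
    # depends only on WHICH variables are observed, not on their values, so
    # expected information gain reduces to greedily picking the feature that
    # shrinks var(y | observed) the most.
    L = rng.standard_normal((4, 4))
    cov = L @ L.T + 0.5 * np.eye(4)
    target, features = 3, [0, 1, 2]

    observed = []
    for _ in features:
        best, best_var = None, np.inf
        for j in (f for f in features if f not in observed):
            obs = observed + [j]
            rest = [k for k in range(4) if k not in obs]
            cc = conditional_cov(cov, obs, rest)
            var_y = cc[rest.index(target), rest.index(target)]
            if var_y < best_var:
                best, best_var = j, var_y
        observed.append(best)
        print(f"acquire x_{best}: var(y | observed) -> {best_var:.3f}")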