Efficient Deterministic Approximate Bayesian Inference for Gaussian Process models
Bui, Thang Duc
Turner, Richard E.
University of Cambridge
Doctor of Philosophy (PhD)
Bui, T. D. (2018). Efficient Deterministic Approximate Bayesian Inference for Gaussian Process models (Doctoral thesis). https://doi.org/10.17863/CAM.20913
Gaussian processes are powerful nonparametric distributions over continuous functions that have become a standard tool in modern probabilistic machine learning. However, the applicability of Gaussian processes in the large-data regime and in hierarchical probabilistic models is severely limited by analytic and computational intractabilities. It is, therefore, important to develop practical approximate inference and learning algorithms that can address these challenges. To this end, this dissertation provides a comprehensive and unifying perspective on pseudo-point-based deterministic approximate Bayesian learning for a wide variety of Gaussian process models, which connects previously disparate strands of the literature, greatly extends them, and allows new state-of-the-art approximations to emerge. We start by building a posterior approximation framework based on Power Expectation Propagation for Gaussian process regression and classification. This framework relies on a structured approximate Gaussian process posterior based on a small number of pseudo-points, which are judiciously chosen to summarise the actual data and enable tractable and efficient inference and hyperparameter learning. Many existing sparse approximations are recovered as special cases of this framework, and can now be understood as performing approximate posterior inference using a common approximate posterior. Critically, extensive empirical evidence suggests that new approximation methods arising from this unifying perspective outperform existing approaches in many real-world regression and classification tasks. We explore extensions of this framework to Gaussian process state-space models, Gaussian process latent variable models and deep Gaussian processes, which also unify many recently developed approximation schemes for these models. Several mean-field and structured approximate posterior families for the hidden variables in these models are studied.
We also discuss several methods for approximate uncertainty propagation in recurrent and deep architectures based on Gaussian projection, linearisation, and simple Monte Carlo. The benefits of the unified inference and learning frameworks for these models are illustrated in a variety of real-world state-space modelling and regression tasks.
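The pseudo-point idea the abstract describes can be illustrated with a minimal numpy sketch of a classic sparse GP predictive mean (a Subset-of-Regressors/DTC-style approximation, one of the special cases such unifying frameworks recover). This is an illustrative sketch, not the thesis's Power-EP algorithm: the function names, kernel, and hyperparameters are assumptions chosen for the example.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_mean(X, y, Z, Xstar, noise=0.1, lengthscale=1.0, jitter=1e-8):
    """Predictive mean of a pseudo-point (SoR/DTC-style) sparse GP regressor.

    X : (N, D) training inputs;  y : (N,) targets
    Z : (M, D) pseudo-point locations, with M << N in practice
    Xstar : (S, D) test inputs
    """
    Kmm = rbf(Z, Z, lengthscale) + jitter * np.eye(len(Z))   # pseudo-point covariance
    Kmn = rbf(Z, X, lengthscale)                             # pseudo-points vs data
    Ksm = rbf(Xstar, Z, lengthscale)                         # test vs pseudo-points
    # Predictions touch the N data points only through the M pseudo-points:
    # cost is O(N M^2) rather than the O(N^3) of exact GP regression.
    A = noise ** 2 * Kmm + Kmn @ Kmn.T
    return Ksm @ np.linalg.solve(A, Kmn @ y)
```

A useful sanity check on this construction: when the pseudo-points are placed exactly at the training inputs (Z = X), the sparse predictive mean reduces to the exact GP regression mean, which is how the pseudo-points "summarise the actual data" in the degenerate full case.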
machine learning, Gaussian process, approximate inference, Bayesian statistics, supervised learning, unsupervised learning
This record's DOI: https://doi.org/10.17863/CAM.20913
Attribution-NonCommercial-ShareAlike 4.0 International
Licence URL: https://creativecommons.org/licenses/by-nc-sa/4.0/