Variational implicit processes
Authors
Ma, C
Li, Y
Hernández-Lobato, JM
Publication Date
2019
Journal Title
36th International Conference on Machine Learning, ICML 2019
ISSN
2640-3498
ISBN
9781510886988
Volume
2019-June
Pages
7464-7482
Language
English
Type
Conference Object
This Version
AM
Citation
Ma, C., Li, Y., & Hernández-Lobato, J. M. (2019). Variational implicit processes. 36th International Conference on Machine Learning, ICML 2019, 2019-June, 7464-7482. https://doi.org/10.17863/CAM.42186
Abstract
We introduce implicit processes (IPs): stochastic processes that place
implicitly defined multivariate distributions over any finite collection of
random variables. IPs are therefore highly flexible implicit priors over
functions, with examples including data simulators, Bayesian neural networks
and non-linear transformations of stochastic processes. A novel and efficient
approximate inference algorithm for IPs, namely the variational implicit
process (VIP), is derived using generalised wake-sleep updates. This method
yields simple update equations and allows scalable hyper-parameter learning
with stochastic optimization. Experiments show that VIPs return better
uncertainty estimates and lower errors than existing inference methods for
challenging models such as Bayesian neural networks and Gaussian processes.
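The abstract's notion of an implicit process can be illustrated with a minimal sketch: a Bayesian neural network whose prior over functions is defined only through a sampling procedure. The network architecture, prior (standard normal weights), and function names below are illustrative assumptions, not the authors' implementation; evaluating sampled functions at a finite set of inputs draws from the implicitly defined multivariate distribution over those outputs.

```python
import numpy as np

def sample_ip_function(x, hidden=50, rng=None):
    """Draw one function sample from a Bayesian neural network prior.

    This is an example implicit process: the distribution over outputs
    is defined only through this sampling procedure, with no tractable
    density. Architecture and priors here are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample weights and biases from a standard normal prior.
    w1 = rng.standard_normal((1, hidden))
    b1 = rng.standard_normal(hidden)
    w2 = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)
    b2 = rng.standard_normal(1)
    h = np.tanh(x @ w1 + b1)   # one hidden layer
    return h @ w2 + b2         # function values at the inputs x

# A finite collection of inputs; each call yields one joint sample of
# the implicitly defined multivariate distribution over their outputs.
x = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
samples = np.stack([sample_ip_function(x) for _ in range(100)])
print(samples.shape)  # (100, 7, 1): 100 function draws at 7 inputs
```

Approximate inference then amounts to matching such samples to data, which is where the wake-sleep-style updates described in the abstract come in.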
Identifiers
External DOI: https://doi.org/10.17863/CAM.42186
This record's URL: https://www.repository.cam.ac.uk/handle/1810/295114
Rights
Licence:
http://www.rioxx.net/licenses/all-rights-reserved