Show simple item record

dc.contributor.author: Dai, Z.
dc.contributor.author: Damianou, A.
dc.contributor.author: González, J.
dc.contributor.author: Lawrence, Neil
dc.date.accessioned: 2020-01-15T00:30:26Z
dc.date.available: 2020-01-15T00:30:26Z
dc.date.issued: 2016-01-01
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/300882
dc.description.abstract: © ICLR 2016: San Juan, Puerto Rico. All Rights Reserved. We develop a scalable deep non-parametric generative model by augmenting deep Gaussian processes with a recognition model. Inference is performed in a novel scalable variational framework where the variational posterior distributions are reparametrized through a multilayer perceptron. The key aspect of this reformulation is that it prevents the proliferation of variational parameters, which otherwise grow linearly in proportion to the sample size. We derive a new formulation of the variational lower bound that allows us to distribute most of the computation in a way that enables us to handle datasets of the size of mainstream deep learning tasks. We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization.
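The amortization idea in the abstract — reparametrizing the variational posteriors through a multilayer perceptron so the parameter count stops growing with the sample size — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the `RecognitionMLP` class, its layer sizes, and the `tanh` nonlinearity are all illustrative assumptions.

```python
import numpy as np

class RecognitionMLP:
    """Toy recognition model: maps each observation x to the mean and
    log-variance of its variational posterior q(h | x). The only
    variational parameters are the MLP weights, so the count is fixed
    no matter how many data points are processed (unlike storing one
    (mu, sigma) pair per point, which grows linearly with N)."""

    def __init__(self, in_dim, hidden_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(in_dim, hidden_dim))
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.normal(scale=0.1, size=(hidden_dim, 2 * latent_dim))
        self.b2 = np.zeros(2 * latent_dim)
        self.latent_dim = latent_dim

    def n_params(self):
        # Total variational parameter count: independent of dataset size.
        return sum(p.size for p in (self.W1, self.b1, self.W2, self.b2))

    def __call__(self, X):
        # One forward pass produces per-point posterior parameters.
        H = np.tanh(X @ self.W1 + self.b1)
        out = H @ self.W2 + self.b2
        mu = out[:, :self.latent_dim]
        log_var = out[:, self.latent_dim:]
        return mu, log_var

net = RecognitionMLP(in_dim=5, hidden_dim=16, latent_dim=2)
mu_small, _ = net(np.random.default_rng(1).normal(size=(10, 5)))
mu_large, _ = net(np.random.default_rng(2).normal(size=(10000, 5)))
# The same fixed set of weights serves 10 points or 10,000 points.
print(net.n_params(), mu_small.shape, mu_large.shape)
```

Because the recognition network is shared across all data points, the variational lower bound can also be evaluated on minibatches, which is what lets the computation be distributed over large datasets as the abstract describes.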
dc.rights: All rights reserved
dc.rights.uri:
dc.title: Variational auto-encoded deep Gaussian processes
dc.type: Conference Object
prism.publicationDate: 2016
prism.publicationName: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings
dc.identifier.doi: 10.17863/CAM.47957
rioxxterms.version: VoR
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2016-01-01
dc.contributor.orcid: Lawrence, Neil [0000-0001-9258-1030]
rioxxterms.type: Conference Paper/Proceeding/Abstract

