Show simple item record

dc.contributor.author: van der Wilk, Mark
dc.date.accessioned: 2019-01-22T09:35:51Z
dc.date.available: 2019-01-22T09:35:51Z
dc.date.issued: 2019-01-26
dc.date.submitted: 2017-04-07
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/288347
dc.description.abstract: Many tasks in machine learning require learning some kind of input-output relation (function), for example, recognising handwritten digits (from image to number) or learning the motion behaviour of a dynamical system like a pendulum (from positions and velocities now to future positions and velocities). We consider this problem in the Bayesian framework, where probability distributions represent the state of uncertainty that a learning agent is in. In particular, we investigate methods which use Gaussian processes to represent distributions over functions. Gaussian process models require approximations in order to be practically useful. This thesis focuses on understanding existing approximations and investigating new ones tailored to specific applications. We first advance the understanding of existing techniques through a thorough review. We propose desiderata for non-parametric basis function model approximations, which we use to assess the existing approximations. Following this, we perform an in-depth empirical investigation of two popular approximations (VFE and FITC). Based on the insights gained, we propose a new inter-domain Gaussian process approximation, which can be used to increase the sparsity of the approximation compared to regular inducing point approximations. This allows GP models to be stored and communicated more compactly. Next, we show that inter-domain approximations can also allow the use of models which would otherwise be impractical, as opposed to merely improving existing approximations. We introduce an inter-domain approximation for the Convolutional Gaussian process – a model that makes Gaussian processes suitable for image inputs, and which has strong connections to convolutional neural networks. The same technique is valuable for approximating Gaussian processes with more general invariance properties.
Finally, we revisit the derivation of the Gaussian process State Space Model, and discuss some subtleties relating to its approximation. We hope that this thesis illustrates some benefits of non-parametric models and their approximation in a non-parametric fashion, and that it provides models and approximations that prove useful for the development of more complex and performant models in the future.
dc.description.sponsorship: EPSRC DTA, Qualcomm Innovation Fellowship
dc.language.iso: en
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: machine learning
dc.subject: gaussian processes
dc.subject: convolutional neural networks
dc.subject: bayes
dc.subject: variational inference
dc.subject: invariance
dc.subject: convolutional gaussian processes
dc.subject: uncertainty
dc.subject: non-parametric
dc.title: Sparse Gaussian Process Approximations and Applications
dc.type: Thesis
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctor of Philosophy (PhD)
dc.publisher.institution: University of Cambridge
dc.publisher.department: Department of Engineering
dc.date.updated: 2018-11-21T11:21:14Z
dc.identifier.doi: 10.17863/CAM.35660
dc.contributor.orcid: van der Wilk, Mark [0000-0001-7947-6682]
dc.publisher.college: Jesus
dc.type.qualificationtitle: PhD in Machine Learning
cam.supervisor: Rasmussen, Carl Edward
cam.thesis.funding: false


