dc.contributor.author Shah, Amar
dc.date.accessioned 2019-08-05T08:46:41Z
dc.date.available 2019-08-05T08:46:41Z
dc.date.issued 2020-07-01
dc.date.submitted 2017-08-25
dc.identifier.uri https://www.repository.cam.ac.uk/handle/1810/295255
dc.description.abstract Optimisation is integral to processes throughout science and economics, and arguably underpins human intelligence itself, honed through millions of years of optimisation, or $\textit{evolution}$. Scarce resources make it crucial to use them efficiently. In this thesis, we consider the task of maximising unknown functions that we can query point-wise. The function is deemed $\textit{costly}$ to evaluate, e.g. incurring long run times or financial expense, so a judicious querying strategy, informed by previous observations, is required. We adopt a probabilistic, Bayesian non-parametric framework for modelling the unknown function. In particular, we focus on the $\textit{Gaussian process}$ (GP), a popular non-parametric Bayesian prior over functions. In the introduction we motivate these choices and give an overview of the Gaussian process and its application to $\textit{Bayesian optimisation}$. A GP's behaviour is intimately controlled by the choice of $\textit{kernel}$, or covariance function, typically chosen to be a parametric function. In chapter 2 we instead place a non-parametric Bayesian prior, the inverse Wishart process prior, over the GP kernel function, and show that it may be marginalised analytically, leading to a $\textit{Student-$t$ process}$ (TP). Furthermore, we explore the larger class of $\textit{elliptical processes}$, show that the TP is the most general elliptical process for which analytic calculation is possible, and apply it successfully to Bayesian optimisation. The remainder of the thesis focuses on various Bayesian optimisation settings. In chapter 3, we consider a setting in which we may evaluate the function at multiple locations in parallel; our approach uses a measure of information, $\textit{entropy}$, to decide which batch of points to evaluate next. We similarly apply information gain to $\textit{multi-objective}$ Bayesian optimisation in chapter 4, where one wishes to find, through sequential evaluation, a $\textit{Pareto frontier}$ of settings that are efficient with respect to several different objectives. Finally, in chapter 5 we exploit the idea that in a multi-objective setting the objectives are $\textit{correlated}$, incorporating this belief into our choice of prior distribution over the multiple objectives.
dc.language.iso en
dc.rights All rights reserved
dc.rights.uri https://www.rioxx.net/licenses/all-rights-reserved/ en
dc.subject machine learning
dc.subject Bayesian optimisation
dc.subject Bayesian
dc.subject optimisation
dc.subject sequential decision
dc.subject single objective
dc.subject multiple objective
dc.subject Gaussian process
dc.title Bayesian single- and multi-objective optimisation with nonparametric priors
dc.type Thesis
dc.type.qualificationlevel Doctoral
dc.type.qualificationname Doctor of Philosophy (PhD)
dc.publisher.institution University of Cambridge
dc.publisher.department Engineering
dc.date.updated 2019-08-02T22:19:50Z
dc.identifier.doi 10.17863/CAM.42311
dc.type.qualificationtitle PhD in Machine Learning
cam.supervisor Ghahramani, Zoubin
cam.thesis.funding false
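The sequential, point-wise querying loop described in the abstract can be sketched in a few lines of Python. This is an illustrative toy only, not the thesis's algorithms: a GP surrogate with a squared-exponential kernel and a naive uncertainty-based acquisition (query where posterior variance is largest) stands in for the entropy-based strategies of chapters 3 and 4. The objective function, kernel hyperparameters, and all names here are assumptions for the sketch.

```python
# Toy sketch of GP-based Bayesian optimisation (illustrative, not the thesis's method).
import numpy as np

def rbf_kernel(a, b, lengthscale=0.3, variance=1.0):
    """Squared-exponential covariance k(a, b) for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean and marginal variance at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss_diag = np.ones(len(x_test))          # prior variance on the diagonal
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = Kss_diag - np.sum(v * v, axis=0)
    return mean, var

def objective(x):
    """A hypothetical 'costly' function we may only query point-wise."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 2.0, size=3)       # initial design
y_obs = objective(x_obs)
grid = np.linspace(-1.0, 2.0, 200)

for _ in range(10):                          # sequential queries
    _, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(var)]            # most uncertain location
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

mean, _ = gp_posterior(x_obs, y_obs, grid)
best = grid[np.argmax(mean)]                 # surrogate's estimated maximiser
```

In the thesis's settings the acquisition would instead measure expected information gain (entropy reduction) about the maximiser, and chapters 4 and 5 extend the idea to batches and to several correlated objectives; the loop structure, however, is the same.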