
• #### Bayesian generalised ensemble Markov chain Monte Carlo

(Microtome Publishing, 2016)
Bayesian generalised ensemble (BayesGE) is a new method that addresses two major drawbacks of standard Markov chain Monte Carlo algorithms for inference in high-dimensional probability models: inapplicability to estimate ...
• #### Bayesian Structured Prediction using Gaussian Processes

(IEEE, 2014-10-31)
We introduce a conceptually novel structured prediction model, GPstruct, which is kernelized, non-parametric and Bayesian, by design. We motivate the model with respect to existing approaches, among others, conditional ...
• #### Efficient Bayesian active learning and matrix modelling

(2014-11-11)
With the advent of the Internet and growth of storage capabilities, large collections of unlabelled data are now available. However, collecting supervised labels can be costly. Active learning addresses this by selecting, ...
• #### Gaussian processes for state space models and change point detection

(2012-02-07)
This thesis details several applications of Gaussian processes (GPs) for enhanced time series modeling. We first cover different approaches for using Gaussian processes in time series problems. These are extended to the ...
• #### A General Framework for Constrained Bayesian Optimization using Information-based Search

(MIT Press, 2016)
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with decoupled constraints, ...
• #### Generalised Bayesian matrix factorisation models

(2011-03-15)
Factor analysis and related models for probabilistic matrix factorisation are of central importance to the unsupervised analysis of data, with a colourful history more than a century long. Probabilistic models for matrix ...
• #### Improving PPM with dynamic parameter updates

(2015-03-25)
This article makes several improvements to the classic PPM algorithm, resulting in a new algorithm with superior compression effectiveness on human text. The key differences of our algorithm to classic PPM are that (A) ...
• #### Improving PPM with dynamic parameter updates

(IEEE, 2015)
This article makes several improvements to the classic PPM algorithm, resulting in a new algorithm with superior compression effectiveness on human text. The key differences of our algorithm to classic PPM are that (A) ...
• #### Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data

(Microtome Publishing, 2015)
Multivariate categorical data occur in many applications of machine learning. One of the main difficulties with these vectors of categorical variables is sparsity. The number of possible observations grows exponentially ...
• #### Linear Dimensionality Reduction: Survey, Insights, and Generalizations

(MIT Press, 2015-12-01)
Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data ...
• #### MCMC for Variationally Sparse Gaussian Processes

(Neural Information Processing Systems Foundation, 2015-12-07)
Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data is ...
• #### The Mondrian Kernel

We introduce the Mondrian kernel, a fast *random feature* approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random ...
• #### Neural Adaptive Sequential Monte Carlo

(Curran Associates, 2015)
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance ...
• #### Particle Gibbs for Infinite Hidden Markov Models

(Curran Associates, 2015-12-18)
Infinite Hidden Markov Models (iHMMs) are an attractive, nonparametric generalization of the classical Hidden Markov Model which can automatically infer the number of hidden states in the system. However, due to the ...
• #### Practical Probabilistic Programming with Monads

(ACM, 2015-07-30)
The machine learning community has recently shown a lot of interest in practical probabilistic programming systems that target the problem of Bayesian inference. Such systems come in different forms, but they all express ...
• #### Predictive Entropy Search for Bayesian Optimization with Unknown Constraints

(JMLR, 2015-06-01)
Unknown constraints arise in many types of expensive black-box optimization problems. Several methods have been proposed recently for performing Bayesian optimization with constraints, based on the expected improvement ...
• #### Probabilistic machine learning and artificial intelligence

(NPG, 2015-05-27)
How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing ...
• #### R/BHC: fast Bayesian hierarchical clustering for microarray data

(2009-08-06)
Abstract Background Although the use of clustering methods has rapidly become one of the standard computational approaches in the literature of microarray gene expression data analysis, little attention has been paid to ...
• #### Scalable Discrete Sampling as a Multi-Armed Bandit Problem

(2016)
Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale ...
• #### Scalable Variational Gaussian Process Classification

(JMLR, 2015-02-21)
Gaussian process classification is a popular method with a number of appealing properties. We show how to scale the model within a variational inducing point framework, outperforming the state of the art on benchmark ...