
• #### Bayesian generalised ensemble Markov chain Monte Carlo

(Microtome Publishing, 2016)
Bayesian generalised ensemble (BayesGE) is a new method that addresses two major drawbacks of standard Markov chain Monte Carlo algorithms for inference in high-dimensional probability models: inapplicability to estimate ...
• #### Bayesian inference on random simple graphs with power law degree distributions

We present a model for random simple graphs with power law (i.e., heavy-tailed) degree distributions. To attain this behavior, the edge probabilities in the graph are constructed from Bertoin–Fujita–Roynette–Yor (BFRY) ...
• #### Bayesian Structured Prediction using Gaussian Processes

(IEEE, 2014-10-31)
• #### A Birth-Death Process for Feature Allocation

(2017)
We propose a Bayesian nonparametric prior over feature allocations for sequential data, the birth-death feature allocation process (BDFP). The BDFP models the evolution of the feature allocation of a set of N objects ...
• #### Deep Bayesian Active Learning with Image Data

Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active ...
• #### Denotational validation of higher-order Bayesian inference

We present a modular semantic account of Bayesian inference algorithms for probabilistic programming languages, as used in data science and machine learning. Sophisticated inference algorithms are often explained in terms ...
• #### A General Framework for Constrained Bayesian Optimization using Information-based Search

(MIT Press, 2016-09-24)
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with decoupled constraints, ...
• #### A General Framework for Constrained Bayesian Optimization using Information-based Search

(Journal of Machine Learning Research, 2016-09-24)
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with *decoupled* ...
• #### Improving PPM with dynamic parameter updates

(IEEE, 2015)
This article makes several improvements to the classic PPM algorithm, resulting in a new algorithm with superior compression effectiveness on human text. The key differences of our algorithm to classic PPM are that (A) ...
• #### Improving PPM with dynamic parameter updates

(2015-03-25)

• #### Latent Gaussian Processes for Distribution Estimation of Multivariate Categorical Data

(Microtome Publishing, 2015)
Multivariate categorical data occur in many applications of machine learning. One of the main difficulties with these vectors of categorical variables is sparsity. The number of possible observations grows exponentially ...
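The exponential blow-up the abstract refers to is easy to make concrete: with D categorical variables of K categories each, the joint sample space has K^D cells, so even modest D leaves almost every cell unobserved. A quick back-of-envelope sketch (illustrative numbers only, not taken from the paper):

```python
def n_cells(n_vars, n_categories):
    """Number of distinct joint observations for n_vars categorical
    variables with n_categories values each: grows exponentially in n_vars."""
    return n_categories ** n_vars

print(n_cells(5, 3))   # 243 cells: easy to cover with data
print(n_cells(20, 3))  # 3486784401 cells: almost all necessarily empty
```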
• #### Linear Dimensionality Reduction: Survey, Insights, and Generalizations

(MIT Press, 2015-12-01)
Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data ...
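As a concrete instance of the family surveyed here, principal component analysis projects data onto top eigenvectors of the sample covariance. A minimal 2-D sketch using power iteration (the function name and toy data are illustrative, not from the survey):

```python
import math

def leading_pc(points, iters=200):
    """Leading principal component of centred 2-D data, found by
    power iteration on the 2x2 sample covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    xs = [p[0] - mx for p in points]
    ys = [p[1] - my for p in points]
    # Covariance entries.
    cxx = sum(x * x for x in xs) / n
    cxy = sum(x * y for x, y in zip(xs, ys)) / n
    cyy = sum(y * y for y in ys) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = math.hypot(*w)
        v = (w[0] / norm, w[1] / norm)
    return v

# Points lying along y = x (plus tiny noise): the leading direction
# recovered should be close to (1/sqrt(2), 1/sqrt(2)).
pts = [(t, t + 0.01 * ((-1) ** i)) for i, t in enumerate(range(10))]
v = leading_pc(pts)
```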
• #### Lost Relatives of the Gumbel Trick

The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a ...
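The perturb-and-maximise idea the snippet describes is, in its simplest form, the Gumbel-max trick: add independent Gumbel(0, 1) noise to each log-probability and take the argmax, which yields an exact sample from the distribution. A minimal sketch (function names are illustrative):

```python
import math
import random
from collections import Counter

def gumbel_max_sample(logits):
    """Sample an index from the categorical distribution with
    (unnormalised) log-probabilities `logits`, by perturbing each logit
    with Gumbel(0, 1) noise and taking the argmax.
    Note: -log(-log(U)) with U ~ Uniform(0, 1) is Gumbel(0, 1)."""
    perturbed = [l - math.log(-math.log(random.random())) for l in logits]
    return max(range(len(perturbed)), key=lambda i: perturbed[i])

# Empirical check: sample frequencies should approach the target probabilities.
random.seed(0)
probs = [0.1, 0.2, 0.7]
logits = [math.log(p) for p in probs]
counts = Counter(gumbel_max_sample(logits) for _ in range(100_000))
```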
• #### MCMC for Variationally Sparse Gaussian Processes

(Neural Information Processing Systems Foundation, 2015-12-07)
Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data is ...
• #### The Mondrian Kernel

(Association for Uncertainty in Artificial Intelligence Press, 2016-06-29)
We introduce the Mondrian kernel, a fast *random feature* approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random ...
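The Mondrian construction itself is not reproduced here, but the underlying idea of random-partition features whose collision probability equals the Laplace kernel can be sketched with classical random binning (a related construction due to Rahimi and Recht, not the paper's Mondrian-process one; all names are illustrative):

```python
import math
import random

def random_binning_features(xs, n_features=20000, lam=1.0, seed=0):
    """Random binning features for 1-D inputs: for each feature, draw a
    grid pitch delta ~ Gamma(2, 1/lam) and a shift u ~ Uniform[0, delta),
    then record each point's bin index. Two points share a bin with
    probability exp(-lam * |x - y|), the Laplace kernel."""
    rng = random.Random(seed)
    bins = []
    for _ in range(n_features):
        delta = rng.gammavariate(2.0, 1.0 / lam)
        u = rng.uniform(0.0, delta)
        bins.append([math.floor((x - u) / delta) for x in xs])
    return bins

def approx_kernel(bins, i, j):
    """Fraction of random grids in which points i and j share a bin."""
    return sum(b[i] == b[j] for b in bins) / len(bins)

xs = [0.0, 0.3]
bins = random_binning_features(xs)
est = approx_kernel(bins, 0, 1)  # close to exp(-0.3)
```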
• #### Neural Adaptive Sequential Monte Carlo

(Curran Associates, 2015)
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance ...
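The sequence-of-intermediate-distributions idea can be illustrated with the simplest SMC instance, a bootstrap particle filter for a toy Gaussian state-space model (the model and all names here are illustrative, not taken from the paper):

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=2000,
                              trans_std=1.0, obs_std=1.0):
    """Bootstrap SMC for the toy model x_t ~ N(x_{t-1}, trans_std^2),
    y_t ~ N(x_t, obs_std^2): propagate particles through the transition,
    weight by the observation likelihood, then resample."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Propagate each particle through the transition kernel.
        particles = [random.gauss(x, trans_std) for x in particles]
        # Importance weights from the observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Filtered posterior mean estimate at this step.
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return means

random.seed(0)
obs = [0.5, 1.0, 1.5, 2.0]
filtered = bootstrap_particle_filter(obs)
```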
• #### Neural adaptive sequential Monte Carlo

(2015-01-01)