
• A General Framework for Constrained Bayesian Optimization using Information-based Search

(Journal of Machine Learning Research, MIT Press, 2016)
We present an information-theoretic framework for solving global black-box optimization problems that also have black-box constraints. Of particular interest to us is to efficiently solve problems with decoupled constraints, ...
• Improving PPM with dynamic parameter updates

(IEEE, 2015-03-25)
This article makes several improvements to the classic PPM algorithm, resulting in a new algorithm with superior compression effectiveness on human text. The key differences between our algorithm and classic PPM are that (A) ...
• Linear Dimensionality Reduction: Survey, Insights, and Generalizations

(MIT Press, 2015-12-01)
• Lost Relatives of the Gumbel Trick

The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a ...
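The perturb-and-maximize recipe this abstract refers to is the classic Gumbel-max trick, which can be sketched in a few lines; the function name and toy distribution below are illustrative, not code from the paper:

```python
import numpy as np

def gumbel_sample(logits, rng):
    """Draw one exact sample from softmax(logits) via the Gumbel-max trick:
    add i.i.d. Gumbel(0, 1) noise to the unnormalized log-probabilities
    and take the index of the maximum."""
    g = rng.gumbel(size=len(logits))
    return int(np.argmax(logits + g))

rng = np.random.default_rng(0)
logits = np.log(np.array([0.1, 0.2, 0.7]))
counts = np.zeros(3)
for _ in range(20000):
    counts[gumbel_sample(logits, rng)] += 1
freqs = counts / counts.sum()  # empirical frequencies approach (0.1, 0.2, 0.7)
```

Repeating the perturbation and averaging the maximum value (rather than the argmax) yields an estimator of the log partition function, which is the second use the abstract mentions.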
• MCMC for Variationally Sparse Gaussian Processes

(Neural Information Processing Systems Foundation, 2015-12-07)
• The Mondrian Kernel

(Association for Uncertainty in Artificial Intelligence Press, 2016-06-29)
We introduce the Mondrian kernel, a fast random feature approximation to the Laplace kernel. It is suitable for both batch and online learning, and admits a fast kernel-width-selection procedure as the random ...
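For context, the standard random-feature baseline for the same Laplace kernel is random Fourier features with frequencies drawn from its Cauchy spectral density; the sketch below shows that baseline, not the Mondrian construction itself, and `laplace_rff` is an illustrative name:

```python
import numpy as np

def laplace_rff(X, num_features, lengthscale, rng):
    """Random Fourier features approximating the Laplace kernel
    k(x, y) = exp(-||x - y||_1 / lengthscale).
    Its spectral density is a product of Cauchy densities, so
    frequencies are drawn i.i.d. from Cauchy(0, 1/lengthscale)."""
    n, d = X.shape
    W = rng.standard_cauchy(size=(d, num_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
Z = laplace_rff(X, 50000, 2.0, rng)
K_approx = Z @ Z.T  # inner products of features approximate the kernel matrix
K_exact = np.exp(-np.abs(X[:, None, :] - X[None, :, :]).sum(-1) / 2.0)
```

Because the features are explicit, kernel methods reduce to linear ones, which is what makes such approximations suitable for large-scale and online settings.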
• Neural Adaptive Sequential Monte Carlo

(Curran Associates, 2015)
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance ...
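The textbook instance of SMC that this line of work builds on is the bootstrap particle filter, sketched below for a toy linear-Gaussian state-space model; the model, parameters, and `bootstrap_filter` are illustrative assumptions, not the paper's neural proposal:

```python
import numpy as np

def bootstrap_filter(ys, num_particles, rng, phi=0.9, q=1.0, r=1.0):
    """Minimal bootstrap particle filter for the toy model
        x_t = phi * x_{t-1} + N(0, q),   y_t = x_t + N(0, r).
    Returns the sequence of filtering-mean estimates."""
    particles = rng.normal(0.0, 1.0, size=num_particles)
    means = []
    for y in ys:
        # Propagate through the transition (the bootstrap proposal).
        particles = phi * particles + rng.normal(0.0, np.sqrt(q), size=num_particles)
        # Reweight by the observation likelihood (log-space for stability).
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choice(particles, size=num_particles, p=w)
    return means

# Simulate data from the model and filter it.
rng = np.random.default_rng(2)
phi, T = 0.9, 200
xs, ys, x = np.zeros(T), np.zeros(T), 0.0
for t in range(T):
    x = phi * x + rng.normal()
    xs[t] = x
    ys[t] = x + rng.normal()
est = bootstrap_filter(ys, 1000, np.random.default_rng(3))
```

Using the transition as the proposal is exactly the inefficiency adaptive-proposal methods target: when observations are informative, most bootstrap particles land in low-likelihood regions and are wasted.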

• Particle Gibbs for Infinite Hidden Markov Models

(Curran Associates, 2015-12-18)
Infinite Hidden Markov Models (iHMMs) are an attractive, nonparametric generalization of the classical Hidden Markov Model which can automatically infer the number of hidden states in the system. However, due to the ...
• Practical Probabilistic Programming with Monads

(2015-07-30)
• Predictive Entropy Search for Bayesian Optimization with Unknown Constraints

(2015-06-01)
• Probabilistic machine learning and artificial intelligence

(2015-05-27)
• R/BHC: fast Bayesian hierarchical clustering for microarray data

(BioMed Central, 2009-08-06)
• Scalable Variational Gaussian Process Classification

(2015-02-21)