
Strong and weak principles of Bayesian machine learning for systems neuroscience


Type

Thesis

Authors

Jensen, Kristopher 

Abstract

Neuroscientists are recording neural activity and behaviour at a rapidly increasing scale. This provides an unprecedented window into the neural underpinnings of behaviour, while also driving the need for new techniques to analyse and model these large-scale datasets. Inspiration for such tools can be found in the Bayesian machine learning literature, which provides a set of principled techniques for performing inference in complex problem settings with large parameter spaces. When applied to neural population recordings, we propose that these approaches can be divided into ‘weak’ and ‘strong’ models of neural data. Weak models are tools for analysing experimental data that build our prior knowledge of neural circuits directly into the analysis pipeline. In contrast, strong Bayesian models of neural dynamics posit that the brain itself performs something akin to Bayesian inference. In this view, we can interpret our Bayesian machine learning models as algorithmic or mechanistic models of the learning processes and computations taking place in the biological brain. In this work, we first provide an overview of Bayesian machine learning and its applications to neuroscience, highlighting how both the strong and weak approaches have improved our understanding of neural computations in recent years. We then develop several new models in this field, which provide insights into neural computations ranging from motor control to navigation and decision making. These models fall into three broad categories. First, we construct a series of new ‘weak’ latent variable models that allow us to infer the dimensionality and topology of neural data in an unsupervised manner. We highlight the utility of such approaches on synthetic data and across several biological circuits involved in motor control and navigation. Second, we propose a new method for Bayesian continual learning and relate it to longitudinal recordings of neural activity as a ‘strong’ model of biological learning and memory. Finally, we develop a new ‘strong’ model of planning and decision making through the lens of reinforcement learning formulated as Bayesian inference. In contrast to previous network models, we explicitly build in the capacity for planning-by-simulation and show that this explains many features of both human behaviour and rodent hippocampal replays, resulting in a new theory of the role of the hippocampus in flexible planning. The new methods developed in this work both expand the Bayesian toolbox available to systems neuroscientists and provide new insights into the neural computations driving natural behaviours.

Date

2023-04-01

Advisors

Hennequin, Guillaume

Keywords

Machine learning, Neuroscience

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

The Gates Cambridge Trust