Recurrent neural networks in cognitive and vision neuroscience



Type

Change log

Abstract

This thesis investigates the development of novel training methodologies for biologically plausible neural networks, with a focus on models that incorporate recurrent dynamics characteristic of cortical circuits. First, we present an innovative approach for training stabilized supralinear networks, which are models of cortical circuits known to exhibit instabilities due to their recurrent excitatory connections and expansive nonlinearities. Second, we address the challenge of training recurrent neural networks on tasks requiring long-term temporal dependencies, which are critical components of cognitive functions such as working memory and decision-making. By introducing specialized skip-connections to promote the emergence of task-relevant dynamics, we enable these networks to effectively learn such tasks without relying on non-biological mechanisms for memory and temporal integration. Lastly, we propose a hybrid architecture that integrates the continuous-time dynamics of recurrent networks with the spatial processing capabilities of convolutional neural networks, creating a unified model that retains biological plausibility while achieving high performance in complex visual tasks. Together, these contributions advance the training of realistic cortical-like networks, providing new frameworks and insights for modeling intricate neural dynamics and behaviors.
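To make the third contribution concrete, the following is a minimal sketch of a hybrid convolutional/continuous-time recurrent model in the spirit the abstract describes; it is not the thesis's actual architecture. The class name, layer sizes, time constant, and the Euler discretization of the dynamics tau * dh/dt = -h + drive are all illustrative assumptions.

import torch
import torch.nn as nn


class ConvCTRNN(nn.Module):
    """Convolutional front end feeding a continuous-time recurrent layer."""

    def __init__(self, in_channels=3, hidden_size=128, tau=10.0, dt=1.0):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # pool to a fixed 4x4 spatial grid
        )
        self.input_proj = nn.Linear(16 * 4 * 4, hidden_size)
        self.recurrent = nn.Linear(hidden_size, hidden_size, bias=False)
        self.alpha = dt / tau  # Euler step relative to the time constant

    def forward(self, frames, h=None):
        # frames: (batch, time, channels, height, width)
        batch, steps = frames.shape[:2]
        if h is None:
            h = frames.new_zeros(batch, self.recurrent.in_features)
        states = []
        for t in range(steps):
            feats = self.conv(frames[:, t]).flatten(1)
            drive = self.input_proj(feats) + self.recurrent(torch.tanh(h))
            # Continuous-time dynamics tau * dh/dt = -h + drive, Euler-discretized
            h = (1 - self.alpha) * h + self.alpha * drive
            states.append(h)
        return torch.stack(states, dim=1)


# Example usage on a random clip: batch of 2, five 32x32 RGB frames
model = ConvCTRNN()
clip = torch.randn(2, 5, 3, 32, 32)
hidden = model(clip)
print(hidden.shape)  # torch.Size([2, 5, 128])

The leaky update in the loop is what keeps the recurrent state continuous-time in character, while the convolutional stage supplies the spatial processing; readout layers, skip-connections, and stability constraints of the kind the abstract mentions would sit on top of a skeleton like this.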

Description

Date

2024-09-30

Advisors

Lengyel, Máté

Keywords

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0)