The computations of neural circuits underlying flexible cognition and how they come to be


Abstract

In a world full of new impressions, well-known sights, and challenging problems, brains process information and decide on the next action in a goal-directed and flexible manner. Presented with new situations, they rapidly construct the best new program to execute, given past knowledge, new information, their goals, and environmental constraints. To do so efficiently, the brain's neural circuits and inner computations need to be tightly intertwined, forming a highly optimised entity. This thesis highlights the circuit dynamics underlying the brain's flexible problem solving in primate prefrontal cortex, and shows how the structure and function of neural circuits can be jointly optimised to achieve efficient information processing. Chapter 1 reviews the history of intelligent systems. Starting with an overview of the mechanisms of flexible cognition in biological systems, I highlight how the development of intelligent artificial systems has transformed our ideas and questions about how flexible cognition can come to be. Chapters 2 and 3 analyse data recorded from primate prefrontal cortex during a rapid learning task and a multi-step inference task, respectively. By analysing the circuit dynamics of multiple regions of the frontal cortex, I characterise how flexible cognition is achieved by a set of stable and flexible codes that take distinct roles in information binding, retrieval, and sustenance. Chapter 4 studies how the computations we typically observe in neural circuits of the frontal cortex are intertwined with the structure of the circuits themselves. By introducing spatially embedded recurrent neural networks as a new theoretical model, I show how various structural and functional features observed across brains can result from the brain's co-optimisation of structure and function to maximise performance while minimising metabolic costs. Specifically, the new model reveals potential optimisation mechanisms which jointly shape how the brain's structure and function develop. Chapter 5 then applies the principles of co-optimisation observed in spatially embedded recurrent neural networks to silicon-based computational circuits, examining whether similar mechanisms can also improve the algorithm-hardware co-optimisation of artificial neural networks and their underlying silicon-based hardware. Chapter 6 concludes by reviewing the new work presented in this thesis and discussing how the brain's circuitry and computations are intertwined to achieve the best task performance while also minimising energy expenditure. It reviews how the brain can achieve complex optimisation goals in a robust fashion by using multiplexed feature sets. Lastly, it outlines how the study of intelligent systems continues to undergo rapid change through technological advancement, and how we can hope to gain a deep understanding of information processing by jointly understanding biological and artificial systems.

Date

2024-03-15

Advisors

Duncan, John

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0)

Sponsorship

MRC (MC_UU_00030/7)
Templeton World Charity Foundation (TWCF) (TWCF-2022-30510)
Gates Cambridge Scholarship, Google DeepMind, Intel Labs