
Computational Principles of Brain Network Development






Brain development can be viewed through many lenses and studied at many scales. However, multiple theoretical perspectives have argued that brain organisation develops dynamically over time through competitive interactions between its constituent units. In this thesis, I focus on modelling these interactions.

In Chapter 1, after providing an historical backdrop to the field of developmental systems neuroscience, I introduce generative network models. This relatively new family of models is capable of simulating probabilistic network development. These models are built on simple sets of wiring rules, operating within imposed biophysical constraints, which steer the developmental trajectory of the network.
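The flavour of such a wiring rule can be sketched in a few lines. This is an illustrative sketch only, not the implementation used in the thesis: the distance exponent `eta`, the topology exponent `gamma`, and the use of shared-neighbour counts as the topological (homophily) term follow common formulations in the generative network modelling literature, and the function name is hypothetical.

```python
import numpy as np

def generative_growth(D, m, eta, gamma, seed=0):
    """Grow a binary network of m edges on n spatially embedded nodes.

    At each step, a new edge (i, j) is sampled with probability
    proportional to D[i, j]**eta * K[i, j]**gamma, where D holds
    pairwise Euclidean distances and K counts shared neighbours
    (a simple homophily term). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    A = np.zeros((n, n), dtype=int)
    iu = np.triu_indices(n, k=1)           # candidate edges (upper triangle)
    for _ in range(m):
        K = (A @ A).astype(float) + 1e-6   # shared-neighbour counts (+eps to avoid 0**gamma)
        P = (D[iu] ** eta) * (K[iu] ** gamma)
        P[A[iu] == 1] = 0.0                # never re-add an existing edge
        P = P / P.sum()
        pick = rng.choice(P.size, p=P)
        i, j = iu[0][pick], iu[1][pick]
        A[i, j] = A[j, i] = 1
    return A
```

With a negative `eta` (penalising long connections) and a positive `gamma` (favouring nodes with shared neighbours), the sampled networks trade off wiring cost against topology, which is the economic negotiation the abstract describes.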

In Chapter 2, I show how applying these models can reveal simple principles that may contribute to our understanding of neurodiversity. In particular, small iterative updates to networks can lead to constrained variability in a child’s macroscopic structural brain organisation, inferred via in vivo diffusion imaging. I highlight how decomposing networks into the generative components used to construct them yields useful lower-dimensional representations of developmental ingredients. This is particularly relevant when aiming to draw associations between genomics, cognition and the brain to answer developmental questions.

Generative network models emphasise the evolving economic context of dynamic, interactive negotiations between brain regions. These regions can be defined at any scale. In Chapter 3, I pivot from studying cross-sectional macroscopic connectomes to modelling the longitudinal microstructural development of in vitro neuronal networks at the cellular scale. I show that current instantiations of a homophily generative model are an effective growth model of in vitro neuronal network development. This simple model can recapitulate the observable local topological organisation of functional networks across species, time, plating densities, cell types and experimental conditions. Together, Chapters 2 and 3 can be considered a test of whether generative network models can simulate biological brain topologies, in an unsupervised fashion, according to intrinsic wiring rules.

The nervous system has evolved, in part, to sustain the organism and ensure its survival. The structural organisation of the brain must therefore be considered with respect to how it directly supports function in order to achieve behavioural goals. However, many current frameworks posit only associations between neural structure and function, rather than direct bidirectional influences. In Chapter 4, I aim to model how the aforementioned economic negotiations may facilitate direct structure-function interactions. I introduce an extension of artificial neural networks which I term spatially-embedded recurrent neural networks (seRNNs). seRNNs incorporate simple biophysical constraints within a regularisation term, changing how connections evolve during optimisation. I show that adding local spatial and communication constraints steers the network towards a convergent solution in which sparsity, homophily generative mechanisms, small-worldness, functional configuration in space and energetic efficiency coalesce as optimal functional trade-offs.
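A minimal numpy sketch of such a regularisation term is given below, assuming an L1-style penalty in which each recurrent weight is taxed by the Euclidean distance between the units it connects and by the communicability it carries. The function names, the series-based matrix exponential, and the exact multiplicative form are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def matrix_exp(M, terms=30):
    """Truncated power series for the matrix exponential."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def se_regulariser(W, D, strength=0.01):
    """Distance- and communicability-weighted penalty on recurrent weights W.

    Each weight is taxed by the distance D between the units it connects
    and by the network communicability it carries (weighted walk counts,
    from the matrix exponential of |W|), so optimisation prunes long,
    well-travelled connections unless the task justifies their cost.
    Illustrative sketch only.
    """
    A = np.abs(W)
    C = matrix_exp(A)              # communicability matrix
    return strength * np.sum(A * D * C)
```

During training, this term would be added to the task loss, so the optimiser negotiates between task performance and the spatial and communication costs of each connection.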

In Chapter 5, I summarise key take-aways and outline what I believe to be promising future avenues for applying computational modelling to developmental systems neuroscience.





Supervisor

Astle, Duncan


Keywords

Brain development, Computational neuroscience, Developmental systems neuroscience, Generative models, Multi-scale connectomics, Network neuroscience, Spatially-embedded RNNs


Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

Medical Research Council Doctoral Training Programme; Cambridge Trust Vice Chancellor’s Award Scholarship