
dc.contributor.author: McClure, Patrick
dc.date.accessioned: 2018-07-13T08:26:35Z
dc.date.available: 2018-07-13T08:26:35Z
dc.date.issued: 2018-07-20
dc.date.submitted: 2017-08-01
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/278073
dc.description.abstract: Deep neural networks (DNNs) have recently been used to solve complex perceptual and decision tasks. In particular, convolutional neural networks (CNNs) have been extremely successful for visual perception. In addition to performing well on the trained object recognition task, these CNNs also model brain data throughout the visual hierarchy better than previous models. However, these DNNs are still far from completely explaining visual perception in the human brain. In this thesis, we investigated two methods with the goal of improving DNNs’ capabilities to model human visual perception: (1) deep representational distance learning (RDL), a method for driving representational spaces in deep nets into alignment with other (e.g. brain) representational spaces, and (2) variational DNNs that use sampling to perform approximate Bayesian inference. In the first investigation, RDL successfully transferred information from a teacher model to a student DNN. This was achieved by driving the student DNN’s representational distance matrix (RDM), which characterises the representational geometry, into alignment with that of the teacher. This led to a significant increase in test accuracy on machine learning benchmarks. In the future, we plan to use this method to simultaneously train DNNs to perform complex tasks and to predict neural data. In the second investigation, we showed that sampling during learning and inference using simple Bernoulli- and Gaussian-based noise improved a CNN’s representation of its own uncertainty for object recognition. We also found that sampling during learning and inference with Gaussian noise improved how well CNNs predict human behavioural data for image classification. While these methods alone do not fully explain human vision, they allow for training CNNs that better model several features of human visual perception.
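The RDM-alignment idea in the abstract can be illustrated with a minimal NumPy sketch: compute pairwise distance matrices over the student's and teacher's representations of the same stimuli, and penalise the difference between them. The function names, the squared-Euclidean distance choice, and the plain MSE penalty are illustrative assumptions for this sketch, not the thesis implementation (which combines such a term with a task loss during training).

```python
import numpy as np

def rdm(features):
    """Pairwise squared-Euclidean distance matrix over representations.

    features: (n_items, n_units) array of activations, one row per stimulus.
    Returns an (n_items, n_items) symmetric matrix with zeros on the diagonal.
    """
    sq = np.sum(features ** 2, axis=1)
    d = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    return np.maximum(d, 0.0)  # clip tiny negative values from rounding

def rdl_loss(student_feats, teacher_feats):
    """Mean squared difference between the student's and teacher's RDMs.

    Assumed sketch of the alignment term: in training this would be added,
    with a weighting, to the task loss, pulling the student's
    representational geometry toward the teacher's.
    """
    return np.mean((rdm(student_feats) - rdm(teacher_feats)) ** 2)

# Toy usage: identical representations give zero loss;
# rescaled representations change the geometry and give a positive loss.
x = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(rdl_loss(x, x))        # 0.0
print(rdl_loss(x, 2.0 * x))  # > 0
```

Note that the loss depends only on the distance matrices, not on the raw activations, so the student and teacher layers may have different numbers of units.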
dc.language.iso: en
dc.rights: All rights reserved
dc.rights.uri: https://www.rioxx.net/licenses/all-rights-reserved/
dc.subject: deep neural networks
dc.subject: visual perception
dc.subject: bayesian neural networks
dc.subject: transfer learning
dc.subject: computational neuroscience
dc.subject: variational inference
dc.title: Adapting deep neural networks as models of human visual perception
dc.type: Thesis
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctor of Philosophy (PhD)
dc.publisher.institution: University of Cambridge
dc.publisher.department: MRC Cognition and Brain Sciences Unit
dc.date.updated: 2018-07-12T18:35:14Z
dc.identifier.doi: 10.17863/CAM.25412
dc.type.qualificationtitle: PhD in Biological Sciences
cam.supervisor: Kriegeskorte, Nikolaus
cam.thesis.funding: false
rioxxterms.freetoread.startdate: 2018-07-12

