
dc.contributor.author: Chitturi, Sathya R
dc.contributor.author: Verpoort, Philipp
dc.contributor.author: Lee, Alpha A
dc.contributor.author: Wales, David
dc.date.accessioned: 2020-02-04T00:31:06Z
dc.date.available: 2020-02-04T00:31:06Z
dc.identifier.issn: 2632-2153
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/301682
dc.description.abstract: We investigate the structure of the loss function landscape for neural networks subject to dataset mislabelling, increased training set diversity, and reduced node connectivity, using various techniques developed for energy landscape exploration. The benchmarking models are classification problems for atomic geometry optimisation and hand-written digit prediction. We consider the effect of varying the size of the atomic configuration space used to generate initial geometries and find that the number of stationary points increases rapidly with the size of the training configuration space. We introduce a measure of node locality to limit network connectivity and perturb permutational weight symmetry, and examine how this parameter affects the resulting landscapes. We find that highly reduced systems have low capacity and exhibit landscapes with very few minima. On the other hand, a small amount of connectivity reduction can enhance network expressibility and can yield more complex landscapes. Investigating the effect of deliberate classification errors in the training data, we find that the variance in testing AUC, computed over a sample of minima, grows significantly with the training error, providing new insight into the role of the bias-variance trade-off when training under noise. Finally, we illustrate how the number of local minima for networks with two and three hidden layers, but a comparable number of variable edge weights, increases significantly with the number of layers, and as the amount of training data decreases. This work helps shed further light on neural network loss landscapes and provides guidance for future work on neural network training and optimisation.
dc.description.sponsorship: EPSRC
dc.rights: All rights reserved
dc.rights.uri:
dc.title: Perspective: new insights from loss function landscapes of neural networks
dc.type: Article
prism.endingPage: 023002
prism.issueIdentifier: 2
prism.publicationName: Machine Learning: Science and Technology
prism.startingPage: 023002
prism.volume: 1
dc.identifier.doi: 10.17863/CAM.48753
dcterms.dateAccepted: 2020-01-31
rioxxterms.versionofrecord: 10.1088/2632-2153/ab7aef
rioxxterms.version: AM
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2020-01-31
dc.contributor.orcid: Verpoort, Philipp [0000-0003-1319-5006]
dc.contributor.orcid: Wales, David [0000-0002-3555-6645]
dc.identifier.eissn: 2632-2153
rioxxterms.type: Journal Article/Review
pubs.funder-project-id: EPSRC (EP/N035003/1)
cam.issuedOnline: 2020-04-09
cam.orpheus.counter: 82
rioxxterms.freetoread.startdate: 2023-02-03