Show simple item record

dc.contributor.author: Gales, Mark [en]
dc.contributor.author: Malinin, Andrey [en]
dc.contributor.author: Mlodozeniec, Bruno [en]
dc.date.accessioned: 2020-02-17T13:42:28Z
dc.date.available: 2020-02-17T13:42:28Z
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/302275
dc.description.abstract: Ensembles of models often yield improvements in system performance. These ensemble approaches have also been empirically shown to yield robust measures of uncertainty, and are capable of distinguishing between different forms of uncertainty. However, ensembles come at a computational and memory cost which may be prohibitive for many applications. There has been significant work done on the distillation of an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve an accuracy comparable to that of an ensemble. However, information about the diversity of the ensemble, which can yield estimates of different forms of uncertainty, is lost. This work considers the novel task of Ensemble Distribution Distillation (EnD²): distilling the distribution of the predictions from an ensemble, rather than just the average prediction, into a single model. EnD² enables a single model to retain both the improved classification performance of ensemble distillation and information about the diversity of the ensemble, which is useful for uncertainty estimation. A solution for EnD² based on Prior Networks, a class of models which allows a single neural network to explicitly model a distribution over output distributions, is proposed in this work. The properties of EnD² are investigated on an artificial dataset and on the CIFAR-10, CIFAR-100 and TinyImageNet datasets, where it is shown that EnD² can approach the classification performance of an ensemble and outperforms both standard DNNs and Ensemble Distillation on the tasks of misclassification and out-of-distribution input detection.
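The abstract describes fitting a Prior Network to the distribution of ensemble predictions, not just their mean. A minimal sketch of that kind of objective, assuming a Dirichlet parameterisation of the distribution over output distributions; the function names (`dirichlet_nll`, `end2_loss`) and the toy numbers are illustrative, not taken from the paper:

```python
import numpy as np
from math import lgamma

def dirichlet_nll(alpha, pi):
    """Negative log-likelihood of a categorical distribution pi
    under a Dirichlet with concentration parameters alpha."""
    ll = lgamma(alpha.sum()) - sum(lgamma(a) for a in alpha)
    ll += ((alpha - 1.0) * np.log(pi)).sum()
    return -ll

def end2_loss(alpha, ensemble_probs):
    """Distribution-distillation objective for one input: the average
    Dirichlet NLL over the ensemble members' predicted distributions.
    The model being trained would output alpha for this input."""
    return np.mean([dirichlet_nll(alpha, pi) for pi in ensemble_probs])

# Toy example: 3 ensemble members, 3 classes.
ensemble_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.8, 0.1, 0.1],
])
# A sharp Dirichlet centred near the ensemble mean fits the members'
# predictions better than a flat one, so its loss is lower.
sharp = np.array([14.0, 4.0, 2.0])
flat = np.array([1.0, 1.0, 1.0])
assert end2_loss(sharp, ensemble_probs) < end2_loss(flat, ensemble_probs)
```

Because the fitted concentration parameters capture how much the members disagree, measures of knowledge (distributional) uncertainty can be read off the single distilled model, which is what plain ensemble distillation loses.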
dc.description.sponsorship: ALTA Institute
dc.title: Ensemble Distribution Distillation [en]
dc.type: Conference Object
dc.identifier.doi: 10.17863/CAM.49348
dcterms.dateAccepted: 2019-12-19 [en]
rioxxterms.version: AM
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved [en]
rioxxterms.licenseref.startdate: 2019-12-19 [en]
dc.contributor.orcid: Gales, Mark [0000-0002-5311-8219]
rioxxterms.type: Conference Paper/Proceeding/Abstract [en]
pubs.funder-project-id: Cambridge Assessment (Unknown)
pubs.conference-name: International Conference on Learning Representations [en]
pubs.conference-start-date: 2020-04-26 [en]
cam.orpheus.counter: 7
rioxxterms.freetoread.startdate: 2100-01-01

