Show simple item record

dc.contributor.author: Handa, Ankur
dc.contributor.author: Patraucean, Viorica
dc.contributor.author: Badrinarayanan, Vijay
dc.contributor.author: Stent, Simon
dc.contributor.author: Cipolla, Roberto
dc.date.accessioned: 2018-09-05T11:05:20Z
dc.date.available: 2018-09-05T11:05:20Z
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/279106
dc.description.abstract: Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction but have also highlighted the need for an enormous quantity of supervised data — performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we show performance comparable to state-of-the-art RGBD systems on the NYUv2 dataset despite using only depth data as input, and set a benchmark for depth-based segmentation on the SUN RGB-D dataset.
dc.subject: cs.CV
dc.title: SceneNet: Understanding Real World Indoor Scenes With Synthetic Data
dc.type: Conference Object
dc.identifier.doi: 10.17863/CAM.26486
dcterms.dateAccepted: 2016-02-29
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2016-02-29
dc.contributor.orcid: Cipolla, Roberto [0000-0002-8999-2151]
rioxxterms.type: Conference Paper/Proceeding/Abstract
rioxxterms.freetoread.startdate: 2019-08-29

