Understanding Real World Indoor Scenes With Synthetic Data

Accepted version
Peer-reviewed

Abstract

Scene understanding is a prerequisite to many high-level tasks for any automated intelligent machine operating in real-world environments. Recent attempts with supervised learning have shown promise in this direction but have also highlighted the need for an enormous quantity of supervised data: performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models, we show performance comparable to state-of-the-art RGBD systems on the NYUv2 dataset despite using only depth data as input, and we set a benchmark for depth-based segmentation on the SUN RGB-D dataset.
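The abstract mentions synthesizing training depth maps with "appropriate noise models" so that rendered data resembles real sensor output. The sketch below illustrates one common way this kind of corruption is done: axial Gaussian noise whose magnitude grows roughly quadratically with distance (as in structured-light sensors such as the Kinect), plus random pixel dropout for missing returns. The function name and all coefficients here are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def simulate_depth_noise(depth_m, rng=None,
                         sigma_base=0.0012, sigma_quad=0.0019,
                         dropout_prob=0.01):
    """Corrupt a clean synthetic depth map (in metres) with sensor-like noise.

    Axial noise std grows quadratically with distance from the sensor;
    a small fraction of pixels is zeroed to mimic missing depth returns.
    Coefficients are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(rng)
    # Distance-dependent noise magnitude (quadratic growth beyond ~0.4 m).
    sigma = sigma_base + sigma_quad * (depth_m - 0.4) ** 2
    noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma
    # Randomly drop pixels to zero, imitating invalid sensor readings.
    drop = rng.random(depth_m.shape) < dropout_prob
    noisy[drop] = 0.0
    return noisy

# Example: a flat wall 2 m away, perturbed with simulated sensor noise.
clean = np.full((480, 640), 2.0)
noisy = simulate_depth_noise(clean, rng=0)
```

In a pipeline like the one the abstract describes, such a corruption step would be applied to every rendered depth map before it is used to train the per-pixel labelling network.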

Conference Name

IEEE Conference on Computer Vision and Pattern Recognition

Rights and licensing

Except where otherwise noted, this item's license is described as http://www.rioxx.net/licenses/all-rights-reserved