
HyperPocket: Generative Point Cloud Completion

Accepted version
Peer-reviewed


Abstract

Scanning real-life scenes with modern registration devices typically gives incomplete point cloud representations, mostly due to the limitations of the scanning process and 3D occlusions. Therefore, completing such partial representations remains a fundamental challenge for many computer vision applications. Most existing approaches aim to solve this problem by learning to reconstruct individual 3D objects in a synthetic setup of an uncluttered environment, which is far from a real-life scenario. In this work, we reformulate the problem of point cloud completion as an object hallucination task. To this end, we introduce a novel autoencoder-based architecture called HyperPocket that disentangles latent representations and, as a result, enables the generation of multiple variants of the completed 3D point clouds. Furthermore, we split point cloud processing into two disjoint data streams and leverage a hypernetwork paradigm to fill the spaces, dubbed pockets, that are left by the missing object parts. As a result, the generated point clouds are smooth, plausible, and geometrically consistent with the scene. Moreover, our method offers performance competitive with other state-of-the-art models, enabling a plethora of novel applications.
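
The abstract mentions a hypernetwork paradigm for filling the missing region ("pocket") from the latent code of the observed part. The sketch below is only an illustration of that general idea, not the authors' implementation: a PointNet-style encoder, the layer sizes, the prior, and the tiny target MLP are all assumptions made for the example.

```python
# Illustrative sketch only (assumed architecture, not the HyperPocket code):
# a hypernetwork maps the latent code of a partial point cloud to the weights
# of a small target MLP, which deforms samples from a simple prior into points
# that fill the missing region ("pocket").
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points):              # points: (B, N, 3)
        feats = self.mlp(points)             # (B, N, latent_dim)
        return feats.max(dim=1).values       # (B, latent_dim)


class HyperNetwork(nn.Module):
    """Produces the weights of a tiny target MLP (3 -> H -> 3) from a latent code."""
    def __init__(self, latent_dim=128, hidden=64):
        super().__init__()
        self.hidden = hidden
        # parameter count of the target network: (3*H + H) + (H*3 + 3)
        n_params = 3 * hidden + hidden + hidden * 3 + 3
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, z):                    # z: (B, latent_dim)
        return self.head(z)                  # flat weight vector per example

    def run_target(self, weights, samples):  # samples: (B, M, 3) drawn from a prior
        B, H = weights.shape[0], self.hidden
        i = 0
        w1 = weights[:, i:i + 3 * H].view(B, 3, H); i += 3 * H
        b1 = weights[:, i:i + H].view(B, 1, H);     i += H
        w2 = weights[:, i:i + H * 3].view(B, H, 3); i += H * 3
        b2 = weights[:, i:i + 3].view(B, 1, 3)
        h = torch.relu(samples @ w1 + b1)    # apply the generated first layer
        return h @ w2 + b2                   # (B, M, 3) points filling the pocket


if __name__ == "__main__":
    partial = torch.randn(2, 1024, 3)        # two partial scans (toy data)
    enc, hyper = PointEncoder(), HyperNetwork()
    z = enc(partial)                         # latent code of the observed part
    weights = hyper(z)                       # weights of the target network
    prior = torch.randn(2, 512, 3)           # samples from a simple prior
    pocket = hyper.run_target(weights, prior)
    completed = torch.cat([partial, pocket], dim=1)
    print(completed.shape)                   # torch.Size([2, 1536, 3])
```

Sampling different prior points (or latent codes for the missing part) would yield multiple plausible completions, which is the multimodal behaviour the abstract describes; the actual model additionally disentangles the latent representations of the existing and missing parts.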

Description

Keywords

Journal Title

Conference Name

The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)

Journal ISSN

Volume Title

Publisher

Publisher DOI

Publisher URL

Rights and licensing

Except where otherwise noted, this item's license is described as All Rights Reserved.
Sponsorship
This research was funded by the Priority Research Area Digiworld under the program Excellence Initiative – Research University at the Jagiellonian University in Kraków.