PCN: Point Completion Network

Wentao Yuan
Tejas Khot
David Held
Christoph Mertz
Martial Hebert
Robotics Institute, Carnegie Mellon University

[Paper]
[Code]
[Video]


Completion results on ShapeNet objects


Abstract

Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. While previous learning-based approaches use voxelized representations such as occupancy grids and distance fields, PCN directly takes a partial point cloud as input and generates a dense, complete point cloud without any voxelization. This enables PCN to produce higher-quality completions with fewer parameters. Without any prior structural assumptions (symmetry, planarity, etc.) or additional annotations (part segmentation, class labels, etc.), our method works on inputs with various levels of incompleteness and is robust against noise. Trained on pairs of partial and complete point clouds from synthetic shapes, our model generalizes to novel incomplete shapes, including cars from real LiDAR scans in KITTI, producing dense, complete point clouds with realistic structures in the missing regions.
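To make the idea concrete, below is a minimal PyTorch sketch of a network that maps a partial point cloud directly to a dense, complete point cloud, trained on partial/complete pairs as described above. The specific layer sizes, the single-stage PointNet-style encoder, the fixed output size, and the Chamfer-distance loss are illustrative assumptions, not the released PCN architecture; see the paper and GitHub repository for the actual model.

```python
# Minimal sketch (not the authors' released code) of point-cloud completion in PyTorch.
import torch
import torch.nn as nn


class PointCompletionSketch(nn.Module):
    def __init__(self, num_output_points=1024, feat_dim=1024):
        super().__init__()
        # Encoder: per-point shared MLP followed by max pooling, so the network
        # consumes an unordered partial point cloud directly (no voxelization).
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, feat_dim, 1),
        )
        # Decoder: regress a fixed-size complete point cloud from the global feature.
        self.num_output_points = num_output_points
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_output_points * 3),
        )

    def forward(self, partial):
        # partial: (B, N, 3) partial point cloud; N may vary between batches
        x = partial.transpose(1, 2)            # (B, 3, N) for Conv1d
        feat = self.encoder(x)                 # (B, feat_dim, N) per-point features
        global_feat = feat.max(dim=2).values   # (B, feat_dim), permutation-invariant
        out = self.decoder(global_feat)        # (B, num_output_points * 3)
        return out.view(-1, self.num_output_points, 3)


def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a: (B, N, 3) and b: (B, M, 3),
    # a common permutation-invariant loss for training on partial/complete pairs.
    d = torch.cdist(a, b)                      # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


if __name__ == "__main__":
    net = PointCompletionSketch()
    partial = torch.rand(2, 500, 3)            # two partial clouds of 500 points each
    complete = net(partial)                    # (2, 1024, 3) completed clouds
    gt = torch.rand(2, 1024, 3)                # placeholder ground-truth complete clouds
    loss = chamfer_distance(complete, gt).mean()
    print(complete.shape, loss.item())
```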


Paper

W. Yuan, T. Khot, D. Held, C. Mertz, M. Hebert.

PCN: Point Completion Network.

3DV, 2018.

[arXiv]     [Bibtex]



Code and Data

[GitHub]         [Google Drive]



Results on KITTI



Acknowledgements

This project is funded in part by Carnegie Mellon University's Mobility21 National University Transportation Center, which is sponsored by the US Department of Transportation. We would also like to thank Adam Harley, Leonid Keselman and Rui Zhu for their helpful comments and suggestions. The website is modified from this template.