Point-Voxel CNN for Efficient 3D Deep Learning

Zhijian Liu1,*, Haotian Tang2,*, Yujun Lin1, Song Han1
1Massachusetts Institute of Technology, 2Shanghai Jiao Tong University

We present Point-Voxel CNN (PVCNN) for efficient, fast 3D deep learning. Previous work processes 3D data using either voxel-based or point-based NN models. However, both approaches are computationally inefficient. The computation cost and memory footprint of voxel-based models grow cubically with the input resolution, making it memory-prohibitive to scale up the resolution. As for point-based networks, up to 80% of the time is wasted on structuring the sparse data, which have rather poor memory locality, not on the actual feature extraction. In this paper, we propose PVCNN, which represents the 3D input data as points to reduce the memory consumption, while performing the convolutions in voxels to reduce the irregular, sparse data access and improve the locality. Our PVCNN model is both memory- and computation-efficient. Evaluated on semantic and part segmentation datasets, it achieves much higher accuracy than the voxel-based baseline with 10x GPU memory reduction; it also outperforms the state-of-the-art point-based models with 7x measured speedup on average. Remarkably, the narrower version of PVCNN achieves 2x speedup over PointNet (an extremely efficient model) on part and scene segmentation benchmarks with much higher accuracy. We validate the general effectiveness of PVCNN on 3D object detection: by replacing the primitives in Frustum PointNet with PVConv, it outperforms Frustum PointNet++ by 2.4% mAP on average with 1.5x measured speedup and GPU memory reduction.

Point-Voxel Convolution

Our paper presents a hardware-efficient primitive for 3D deep learning:

  1. Its point-based branch keeps the input data at high resolution to preserve fine details.
  2. Its voxel-based branch performs convolutions over a coarse, low-resolution voxel grid to aggregate neighborhood information.
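The two branches above can be sketched in NumPy. This is a simplified illustration, not the paper's implementation: the real PVConv uses trilinear interpolation for voxelization/devoxelization, learned 3D convolutions in the voxel branch, and a shared MLP in the point branch; here, nearest-neighbor voxelization with mean pooling and identity stand-ins replace those learned components, and the function name `pv_conv` is hypothetical.

```python
import numpy as np

def pv_conv(points, feats, resolution=4):
    """Hypothetical sketch of one point-voxel convolution step.

    points: (N, 3) coordinates normalized to [0, 1)
    feats:  (N, C) per-point features
    """
    N, C = feats.shape
    r = resolution

    # Voxel-based branch: scatter points into a coarse r x r x r grid
    # (nearest-neighbor voxelization with mean pooling per voxel).
    idx = np.clip((points * r).astype(int), 0, r - 1)
    flat = idx[:, 0] * r * r + idx[:, 1] * r + idx[:, 2]
    grid = np.zeros((r * r * r, C))
    count = np.zeros(r * r * r)
    np.add.at(grid, flat, feats)
    np.add.at(count, flat, 1)
    grid /= np.maximum(count, 1)[:, None]
    # A learned 3D convolution would transform `grid` here; this sketch
    # keeps it as-is, then devoxelizes by gathering back to each point.
    voxel_feats = grid[flat]

    # Point-based branch: a per-point shared MLP in the paper;
    # an identity linear layer stands in for it here.
    point_feats = feats @ np.eye(C)

    # Fuse the coarse neighborhood features with the fine per-point features.
    return voxel_feats + point_feats
```

With `resolution=1` every point falls into a single voxel, so the voxel branch reduces to adding the global mean feature to each point, which makes the fusion easy to check by hand.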

Results on S3DIS

Introduction Video

MIT Driverless


@inproceedings{liu2019pvcnn,
  title={Point-Voxel CNN for Efficient 3D Deep Learning},
  author={Liu, Zhijian and Tang, Haotian and Lin, Yujun and Han, Song},
  booktitle={Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year={2019}
}

Acknowledgments: We sincerely thank MIT Quest for Intelligence, MIT-IBM Watson AI Lab, Samsung, Facebook, and SONY for supporting this research. We also thank AWS Machine Learning Research Awards for providing the computational resources.