Searching Efficient 3D Architectures
with Sparse Point-Voxel Convolution

Haotian Tang* 1 , Zhijian Liu* 1 , Shengyu Zhao 1,2 , Yujun Lin 1 , Ji Lin 1 , Hanrui Wang 1 , Song Han 1
1Massachusetts Institute of Technology, 2IIIS, Tsinghua University
(* indicates equal contribution)

Note: We provide an Open In Colab demo for quick exploration!

Self-driving cars need to understand 3D scenes efficiently and accurately in order to drive safely. Given the limited hardware resources, existing 3D perception models are not able to recognize small instances (e.g., pedestrians, cyclists) very well due to the low-resolution voxelization and aggressive downsampling. To this end, we propose Sparse Point-Voxel Convolution (SPVConv), a lightweight 3D module that equips vanilla sparse convolution with a high-resolution point-based branch. With negligible overhead, this point-based branch is able to preserve fine details even in large outdoor scenes. To explore the spectrum of efficient 3D models, we first define a flexible architecture design space based on SPVConv, and we then present 3D Neural Architecture Search (3D-NAS) to search for the optimal network architecture over this diverse design space efficiently and effectively. Experimental results validate that the resulting SPVNAS model is fast and accurate: it outperforms the state-of-the-art MinkowskiNet by 3.3%, ranking 1st on the competitive SemanticKITTI leaderboard. It also achieves 8x computation reduction and 3x measured speedup over MinkowskiNet with higher accuracy. Finally, we transfer our method to 3D object detection, and it achieves consistent improvements over the one-stage detection baseline on KITTI.

Efficient 3D Module Design: Sparse Point-Voxel Convolution

SPVConv uses a specialized, high-resolution point-based branch to model fine details in large-scale outdoor scenes.
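Conceptually, each SPVConv block fuses a coarse voxel branch with a high-resolution, per-point branch. Below is a minimal, self-contained sketch of that fusion idea in plain PyTorch, using a dense nn.Conv3d and naive scatter/gather voxelization purely for illustration; the actual module instead uses sparse convolution together with efficient GPU-based voxelization and devoxelization, and all class and parameter names below are hypothetical.

# Illustrative sketch of point-voxel fusion (NOT the repository's actual code).
import torch
import torch.nn as nn


class PointVoxelBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, resolution: int = 32):
        super().__init__()
        self.resolution = resolution
        # Coarse voxel branch: aggregates neighborhood context.
        # (The real SPVConv uses sparse convolution here, not a dense Conv3d.)
        self.voxel_branch = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )
        # High-resolution point branch: per-point MLP that preserves fine details.
        self.point_branch = nn.Sequential(
            nn.Linear(in_channels, out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, coords: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalized to [0, 1); feats: (N, C)
        r, c = self.resolution, feats.shape[1]
        idx = (coords * r).long().clamp_(0, r - 1)
        flat = idx[:, 0] * r * r + idx[:, 1] * r + idx[:, 2]  # (N,) flattened voxel index
        # Voxelize: average the features of points that fall into the same voxel.
        grid = feats.new_zeros(r * r * r, c).index_add_(0, flat, feats)
        count = feats.new_zeros(r * r * r).index_add_(0, flat, torch.ones_like(flat, dtype=feats.dtype))
        grid = grid / count.clamp_(min=1).unsqueeze(1)
        grid = grid.t().reshape(1, c, r, r, r)
        # Convolve on the coarse grid, then devoxelize by gathering per point.
        out = self.voxel_branch(grid).reshape(-1, r * r * r).t()[flat]  # (N, C_out)
        # Fuse with high-resolution point features by addition.
        return out + self.point_branch(feats)


# Usage on a toy point cloud:
coords = torch.rand(1024, 3)       # normalized xyz
feats = torch.rand(1024, 4)        # e.g., xyz + intensity
block = PointVoxelBlock(4, 32)
print(block(coords, feats).shape)  # torch.Size([1024, 32])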

Efficient Model Design: 3D Neural Architecture Search
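3D-NAS searches the flexible, SPVConv-based design space described above (e.g., per-stage channel numbers and network depths) for architectures that fit a given resource budget. Below is a simplified, self-contained sketch of how candidate sub-networks could be sampled from such a design space and filtered by a crude compute proxy; the actual 3D-NAS is more involved (it shares weights across candidates and searches under resource constraints), and all names and numbers below are illustrative rather than taken from the paper.

# Illustrative sketch of sampling architectures from a coarse design space.
import random

STAGES = 4
DEPTH_CHOICES = [1, 2, 3]               # residual blocks per stage (illustrative)
WIDTH_CHOICES = [0.25, 0.5, 0.75, 1.0]  # channel multipliers (illustrative)
BASE_CHANNELS = [32, 64, 128, 256]      # full-width channels per stage (illustrative)


def sample_architecture():
    # Randomly sample one sub-network configuration from the design space.
    return {
        "depths": [random.choice(DEPTH_CHOICES) for _ in range(STAGES)],
        "widths": [random.choice(WIDTH_CHOICES) for _ in range(STAGES)],
    }


def compute_proxy(arch):
    # Crude compute proxy: sum over stages of depth * (width * base_channels)^2.
    return sum(
        d * (w * c) ** 2
        for d, w, c in zip(arch["depths"], arch["widths"], BASE_CHANNELS)
    )


budget = 150_000  # arbitrary proxy budget
candidates = [sample_architecture() for _ in range(100)]
feasible = [a for a in candidates if compute_proxy(a) <= budget]
print(f"{len(feasible)} / {len(candidates)} sampled architectures fit the budget")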

Efficient Sparse Computation Library: torchsparse

torchsparse is an efficient 3D sparse computation library; it runs significantly faster than the existing state-of-the-art implementation, MinkowskiEngine.
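Below is a minimal usage sketch of how a small sparse-convolutional network could be assembled with torchsparse. The exact SparseTensor constructor arguments and coordinate layout (e.g., where the batch index goes) differ across torchsparse versions, and a CUDA build is assumed, so treat this as an outline rather than a drop-in example.

# Usage sketch; exact API details vary across torchsparse versions.
import torch
import torch.nn as nn
import torchsparse.nn as spnn
from torchsparse import SparseTensor

# A tiny sparse-convolutional backbone built from torchsparse layers.
model = nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3, stride=1),
    spnn.BatchNorm(32),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=3, stride=2),  # strided sparse conv (downsampling)
    spnn.BatchNorm(64),
    spnn.ReLU(True),
).cuda()

# Random quantized point cloud: integer voxel coordinates plus a batch index,
# and 4-dimensional input features (e.g., xyz + intensity). In practice,
# coordinates should be deduplicated first (e.g., with sparse_quantize).
coords = torch.randint(0, 100, (10000, 3), dtype=torch.int)
batch_idx = torch.zeros(10000, 1, dtype=torch.int)
coords = torch.cat([coords, batch_idx], dim=1).cuda()  # layout is version-dependent
feats = torch.rand(10000, 4).cuda()

inputs = SparseTensor(feats=feats, coords=coords)
outputs = model(inputs)
print(outputs.F.shape)  # per-voxel output features (.feats in some versions)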

Results on SemanticKITTI

Both the better 3D module (SPVConv) and the efficient 3D AutoML framework (3D-NAS) greatly improve the efficiency-accuracy trade-off over MinkowskiNet.

SPVNAS outperforms MinkowskiNet with 7.6x less computation and 2.7x lower measured latency.

SPVNAS achieves a throughput of 9.1 FPS on real autonomous driving scenes.

Citation

@inproceedings{tang2020searching,
    title     = {Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution},
    author    = {Tang, Haotian* and Liu, Zhijian* and Zhao, Shengyu and Lin, Yujun and Lin, Ji and Wang, Hanrui and Han, Song},
    booktitle = {European Conference on Computer Vision},
    year      = {2020}
}

Acknowledgments: We thank MIT Quest for Intelligence, MIT-IBM Watson AI Lab, Xilinx and Samsung for supporting this research. We also thank AWS Machine Learning Research Awards for providing the computational resources.