Three-dimensional (3D) point cloud segmentation plays an important role in autonomous navigation systems, such as mobile robots and autonomous cars. However, segmentation is challenging because of data sparsity, uneven sampling density, irregular format, and the lack of color texture. In this paper, we propose a sparse 3D point cloud segmentation method based on 2D image feature extraction with deep learning. First, we jointly calibrate the camera and lidar to obtain the extrinsic parameters (rotation matrix and translation vector). Then, we apply a Convolutional Neural Network (CNN)-based object detector to generate 2D object region proposals in the RGB image and classify the objects. Finally, using the extrinsic parameters from the joint calibration, we extract the points from a 16-line RS-LiDAR-16 scanner that project into each 2D object region, and further refine the segmentation of the extracted point cloud according to prior knowledge of the classified object. Experiments demonstrate the effectiveness of the proposed sparse point cloud segmentation method.
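The extraction step described above can be sketched as a standard pinhole projection: lidar points are transformed into the camera frame with the calibrated rotation matrix and translation vector, projected through the camera intrinsics, and kept only if they fall inside a detected 2D region. The following is a minimal sketch of that idea; the function name, the use of an intrinsic matrix `K`, and the box format are illustrative assumptions, since the abstract does not give the paper's exact procedure.

```python
import numpy as np

def project_points_to_box(points, R, t, K, box):
    """Hypothetical helper: keep lidar points whose image projection
    falls inside a 2D detection box.

    points: (N, 3) lidar points in the lidar frame
    R: (3, 3) rotation matrix, t: (3,) translation (lidar -> camera),
       i.e. the extrinsic parameters from joint calibration
    K: (3, 3) camera intrinsic matrix (assumed known from calibration)
    box: (x_min, y_min, x_max, y_max) detection region in pixels
    """
    cam = points @ R.T + t              # transform into the camera frame
    in_front = cam[:, 2] > 0            # discard points behind the camera
    cam = cam[in_front]
    uv = cam @ K.T                      # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]         # normalize by depth to get pixels
    x_min, y_min, x_max, y_max = box
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    return points[in_front][inside]     # candidate points for refinement
```

The returned subset would then be the input to the fine segmentation stage, which prunes background points using class-specific priors (e.g. the expected size of a detected car or pedestrian).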