TY - JOUR
T1 - PointMoSeg: Sparse Tensor-Based End-to-End Moving-Obstacle Segmentation in 3-D Lidar Point Clouds for Autonomous Driving
AU - Sun, Yuxiang
AU - Zuo, Weixun
AU - Huang, Huaiyang
AU - Cai, Peide
AU - Liu, Ming
N1 - Funding Information:
Manuscript received August 25, 2020; accepted December 6, 2020. Date of publication December 28, 2020; date of current version January 12, 2021. This letter was recommended for publication by Associate Editor S. Lee and Editor Y. Choi upon evaluation of the Reviewers’ comments. This work was supported in part by the Young Scientists Fund of the National Natural Science Foundation of China under Grant 62003286, and in part by the Start-up Fund of The Hong Kong Polytechnic University under Grant P0034801. (Corresponding author: Yuxiang Sun.) Yuxiang Sun is with the Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong (e-mail: [email protected], [email protected]).
Publisher Copyright:
© 2021 IEEE.
PY - 2021/4
Y1 - 2021/4
N2 - Moving-obstacle segmentation is an essential capability for autonomous driving. For example, it can serve as a fundamental component for motion planning in dynamic traffic environments. Most current 3-D Lidar-based methods use road segmentation to find obstacles and then employ ego-motion compensation to distinguish static from moving obstacles. However, when a road has a slope, the widely used flat-road assumption for road segmentation may be violated. Moreover, due to signal attenuation, GPS-based ego-motion compensation is often unreliable in urban environments. To address these issues, this letter proposes an end-to-end sparse tensor-based deep neural network for moving-obstacle segmentation that uses neither GPS nor the planar-road assumption. The input to our network is merely two consecutive (previous and current) point clouds, and the output is directly the point-wise mask for moving obstacles in the current frame. We train and evaluate our network on the public nuScenes dataset. The experimental results confirm the effectiveness of our network and its superiority over the baselines.
AB - Moving-obstacle segmentation is an essential capability for autonomous driving. For example, it can serve as a fundamental component for motion planning in dynamic traffic environments. Most current 3-D Lidar-based methods use road segmentation to find obstacles and then employ ego-motion compensation to distinguish static from moving obstacles. However, when a road has a slope, the widely used flat-road assumption for road segmentation may be violated. Moreover, due to signal attenuation, GPS-based ego-motion compensation is often unreliable in urban environments. To address these issues, this letter proposes an end-to-end sparse tensor-based deep neural network for moving-obstacle segmentation that uses neither GPS nor the planar-road assumption. The input to our network is merely two consecutive (previous and current) point clouds, and the output is directly the point-wise mask for moving obstacles in the current frame. We train and evaluate our network on the public nuScenes dataset. The experimental results confirm the effectiveness of our network and its superiority over the baselines.
KW - 3-D Lidar
KW - autonomous driving
KW - end-to-end
KW - moving obstacle
KW - point cloud
KW - sparse tensor
UR - http://www.scopus.com/inward/record.url?scp=85099096399&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.3047783
DO - 10.1109/LRA.2020.3047783
M3 - Journal article
AN - SCOPUS:85099096399
SN - 2377-3766
VL - 6
SP - 510
EP - 517
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 9309360
ER -