Abstract
Moving-obstacle segmentation is an essential capability for autonomous driving. For example, it can serve as a fundamental component for motion planning in dynamic traffic environments. Most current 3-D Lidar-based methods use road segmentation to find obstacles, and then employ ego-motion compensation to distinguish between the static and moving states of those obstacles. However, when a road has a slope, the widely used flat-road assumption underlying road segmentation may be violated. Moreover, due to signal attenuation, GPS-based ego-motion compensation is often unreliable in urban environments. To address these issues, this letter proposes an end-to-end sparse tensor-based deep neural network for moving-obstacle segmentation that requires neither GPS nor the planar-road assumption. The inputs to our network are merely two consecutive (previous and current) point clouds, and the output is the point-wise mask of moving obstacles in the current frame. We train and evaluate our network on the public nuScenes dataset. The experimental results confirm the effectiveness of our network and its superiority over the baselines.
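The abstract fixes the network's interface: two consecutive point clouds in, a boolean per-point mask over the current frame out. As a rough illustration of that interface only (not the paper's learned sparse-tensor network), a naive sketch could flag current-frame points that have no nearby previous-frame neighbor; the function name and radius threshold below are assumptions for illustration:

```python
import numpy as np

def naive_moving_mask(prev_pts, curr_pts, radius=0.5):
    """Toy stand-in for the paper's network interface: given the previous
    and current Lidar point clouds of shapes (N, 3) and (M, 3), return a
    boolean mask of length M that is True for points flagged as moving.

    This nearest-neighbor cue is only a placeholder; the actual method is
    an end-to-end sparse tensor-based deep network.
    """
    # Pairwise Euclidean distances between current and previous points, shape (M, N).
    d = np.linalg.norm(curr_pts[:, None, :] - prev_pts[None, :, :], axis=-1)
    # A point is "moving" if no previous-frame point lies within `radius`.
    return d.min(axis=1) > radius

# Example: one static point and one that shifted by 2 m between frames.
prev = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
curr = np.array([[0.0, 0.0, 0.0], [7.0, 0.0, 0.0]])
print(naive_moving_mask(prev, curr))  # → [False  True]
```

Note that such a geometric heuristic still fails under ego-motion, which is exactly the gap the learned, compensation-free network in the letter is meant to close.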
| Original language | English |
|---|---|
| Article number | 9309360 |
| Pages (from-to) | 510-517 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Apr 2021 |
Keywords
- 3-D Lidar
- autonomous driving
- end-to-end
- moving obstacle
- point cloud
- sparse tensor
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence
Article title: PointMoSeg: Sparse Tensor-Based End-to-End Moving-Obstacle Segmentation in 3-D Lidar Point Clouds for Autonomous Driving