TY - JOUR
T1 - Self-Supervised Drivable Area and Road Anomaly Segmentation Using RGB-D Data for Robotic Wheelchairs
AU - Wang, Hengli
AU - Sun, Yuxiang
AU - Liu, Ming
N1 - Funding Information:
Manuscript received May 23, 2019; accepted July 25, 2019. Date of publication August 2, 2019; date of current version August 15, 2019. This letter was recommended for publication by Associate Editor P. Tokekar and Editor D. Popa upon evaluation of the reviewers’ comments. This work was supported in part by the National Natural Science Foundation of China under Grant U1713211, and in part by the Research Grant Council of Hong Kong SAR Government, China, under Projects 11210017 and 21202816. (Corresponding author: Ming Liu.) The authors are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - The segmentation of drivable areas and road anomalies is a critical capability for the autonomous navigation of robotic wheelchairs. Recent progress in semantic segmentation using deep learning techniques has produced effective results. However, acquiring large-scale datasets with hand-labeled ground truth is time-consuming and labor-intensive, which often makes deep learning-based methods hard to implement in practice. We contribute to the solution of this problem for the task of drivable area and road anomaly segmentation by proposing a self-supervised learning approach. We develop a pipeline that can automatically generate segmentation labels for drivable areas and road anomalies. We then train RGB-D data-based semantic segmentation neural networks to obtain predicted labels. Experimental results show that our proposed automatic labeling pipeline achieves an impressive speed-up compared with manual labeling. In addition, our proposed self-supervised approach exhibits more robust and accurate results than both state-of-the-art traditional algorithms and state-of-the-art self-supervised algorithms.
AB - The segmentation of drivable areas and road anomalies is a critical capability for the autonomous navigation of robotic wheelchairs. Recent progress in semantic segmentation using deep learning techniques has produced effective results. However, acquiring large-scale datasets with hand-labeled ground truth is time-consuming and labor-intensive, which often makes deep learning-based methods hard to implement in practice. We contribute to the solution of this problem for the task of drivable area and road anomaly segmentation by proposing a self-supervised learning approach. We develop a pipeline that can automatically generate segmentation labels for drivable areas and road anomalies. We then train RGB-D data-based semantic segmentation neural networks to obtain predicted labels. Experimental results show that our proposed automatic labeling pipeline achieves an impressive speed-up compared with manual labeling. In addition, our proposed self-supervised approach exhibits more robust and accurate results than both state-of-the-art traditional algorithms and state-of-the-art self-supervised algorithms.
KW - Deep learning in robotics and automation
KW - RGB-D perception
KW - Semantic scene understanding
UR - http://www.scopus.com/inward/record.url?scp=85071502158&partnerID=8YFLogxK
U2 - 10.1109/LRA.2019.2932874
DO - 10.1109/LRA.2019.2932874
M3 - Journal article
AN - SCOPUS:85071502158
SN - 2377-3766
VL - 4
SP - 4386
EP - 4393
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 8786197
ER -