TY - JOUR
T1 - See the Future: A Semantic Segmentation Network Predicting Ego-Vehicle Trajectory with a Single Monocular Camera
AU - Sun, Yuxiang
AU - Zuo, Weixun
AU - Liu, Ming
N1 - Funding Information:
Manuscript received August 14, 2019; accepted January 23, 2020. Date of publication February 20, 2020; date of current version March 4, 2020. This letter was recommended for publication by Associate Editor I. Manchester and Editor J. Roberts upon evaluation of the reviewers’ comments. This work was supported in part by the National Natural Science Foundation of China Project No. U1713211, in part by the Guangdong-Hong Kong Cooperation Innovation Platform Project No. 2018B050502009, and in part by the Research Grants Council of Hong Kong Government Projects No. 11210017 and 21202816. (Corresponding author: Ming Liu.) The authors are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/LRA.2020.2975414
Publisher Copyright:
© 2020 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - Ego-vehicle trajectory prediction is important for autonomous vehicles to detect potential collisions and accordingly avoid accidents. Recent approaches employ prior-known or on-line acquired road topology or geometry as motion constraints for their predictive models. However, prior-known information (e.g., pre-built maps) may become unreliable due to, for example, temporal changes caused by road construction, while on-line perception may require high-cost sensors, such as large field-of-view laser scanners, to capture the overall structure of the local environment, making the prediction difficult to afford, especially for driving-assistance systems. In this letter, we therefore provide a solution for ego-vehicle trajectory prediction that uses neither road topology nor geometry. We formulate the problem as a two-class semantic segmentation problem and develop a novel sequence-based deep neural network to predict the trajectory. The only sensor required at runtime is a single front-view monocular camera. The inputs to our network are several consecutive images, and the output is a predicted trajectory mask that can be directly overlaid on the current front-view image. We create our datasets with different prediction horizons from KITTI. The experimental results confirm the effectiveness of our approach and its superiority over the baselines.
AB - Ego-vehicle trajectory prediction is important for autonomous vehicles to detect potential collisions and accordingly avoid accidents. Recent approaches employ prior-known or on-line acquired road topology or geometry as motion constraints for their predictive models. However, prior-known information (e.g., pre-built maps) may become unreliable due to, for example, temporal changes caused by road construction, while on-line perception may require high-cost sensors, such as large field-of-view laser scanners, to capture the overall structure of the local environment, making the prediction difficult to afford, especially for driving-assistance systems. In this letter, we therefore provide a solution for ego-vehicle trajectory prediction that uses neither road topology nor geometry. We formulate the problem as a two-class semantic segmentation problem and develop a novel sequence-based deep neural network to predict the trajectory. The only sensor required at runtime is a single front-view monocular camera. The inputs to our network are several consecutive images, and the output is a predicted trajectory mask that can be directly overlaid on the current front-view image. We create our datasets with different prediction horizons from KITTI. The experimental results confirm the effectiveness of our approach and its superiority over the baselines.
KW - ADAS
KW - autonomous vehicles
KW - ego-vehicle
KW - semantic segmentation
KW - Trajectory prediction
UR - http://www.scopus.com/inward/record.url?scp=85081579414&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.2975414
DO - 10.1109/LRA.2020.2975414
M3 - Journal article
AN - SCOPUS:85081579414
SN - 2377-3766
VL - 5
SP - 3066
EP - 3073
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 9004469
ER -