TY - JOUR
T1 - PointTrackNet: An End-to-End Network for 3-D Object Detection and Tracking from Point Clouds
AU - Wang, Sukai
AU - Sun, Yuxiang
AU - Liu, Chengju
AU - Liu, Ming
N1 - Funding Information:
Manuscript received September 11, 2019; accepted January 24, 2020. Date of publication February 17, 2020; date of current version March 5, 2020. This letter was recommended for publication by Associate Editor M. J. Kim and Editor Y. Choi upon evaluation of the reviewers’ comments. This work was supported in part by the National Natural Science Foundation of China under Grants U1713211 and 61673300, in part by the Basic Research Project of Shanghai Science and Technology Commission under Grant 18DZ1200804, and in part by HKUST ECE Start-up Grant from HKUST for Heterogeneous Navigation System. (Corresponding author: Ming Liu.) Sukai Wang, Yuxiang Sun, and Ming Liu are with the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China (e-mail: [email protected]; [email protected]; [email protected]).
Publisher Copyright:
© 2016 IEEE.
PY - 2020/4
Y1 - 2020/4
N2 - Recent machine learning-based multi-object tracking (MOT) frameworks are becoming popular for 3-D point clouds. Most traditional tracking approaches use filters (e.g., the Kalman filter or the particle filter) to predict object locations in a time sequence; however, they are vulnerable to extreme motion conditions, such as sudden braking and turning. In this letter, we propose PointTrackNet, an end-to-end 3-D object detection and tracking network, to generate foreground masks, 3-D bounding boxes, and point-wise tracking association displacements for each detected object. The network takes only two adjacent point-cloud frames as input. Experimental results on the KITTI tracking dataset show competitive performance against the state of the art, especially in irregularly and rapidly changing scenarios.
AB - Recent machine learning-based multi-object tracking (MOT) frameworks are becoming popular for 3-D point clouds. Most traditional tracking approaches use filters (e.g., the Kalman filter or the particle filter) to predict object locations in a time sequence; however, they are vulnerable to extreme motion conditions, such as sudden braking and turning. In this letter, we propose PointTrackNet, an end-to-end 3-D object detection and tracking network, to generate foreground masks, 3-D bounding boxes, and point-wise tracking association displacements for each detected object. The network takes only two adjacent point-cloud frames as input. Experimental results on the KITTI tracking dataset show competitive performance against the state of the art, especially in irregularly and rapidly changing scenarios.
KW - autonomous vehicles
KW - end-to-end
KW - multiple-object tracking
KW - point cloud
UR - http://www.scopus.com/inward/record.url?scp=85081680591&partnerID=8YFLogxK
U2 - 10.1109/LRA.2020.2974392
DO - 10.1109/LRA.2020.2974392
M3 - Journal article
AN - SCOPUS:85081680591
SN - 2377-3766
VL - 5
SP - 3206
EP - 3212
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
M1 - 9000527
ER -