TY - GEN
T1 - Multiple-object tracking based on monocular camera and 3-D lidar fusion for autonomous vehicles
AU - Chen, Hao
AU - Xue, Chunyue
AU - Liu, Shoubin
AU - Sun, Yuxiang
AU - Chen, Yongquan
N1 - Funding Information:
ACKNOWLEDGEMENT This paper is partially supported by Shenzhen Fundamental Research grant (JCYJ20180508162406177) and the National Natural Science Foundation of China (U1613216) from The Chinese University of Hong Kong, Shenzhen. This paper is also partially supported by funding from Shenzhen Institute of Artificial Intelligence and Robotics for Society.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - This article describes a multi-object tracking method through sensor fusion with a monocular camera and a 3-D Lidar for autonomous vehicles. Specifically, several pairwise costs derived from information such as the locations, movements, and poses of 3-D cues are designed for tracking. These costs complement each other to reduce matching errors during the tracking process. Moreover, they can be computed online efficiently on embedded hardware. We feed the pairwise costs into a data-association framework based on the Hungarian algorithm, and then perform back-end fusion of the tracking results. Experimental results on our autonomous sightseeing car demonstrate that our tracking method achieves accurate and robust results in real-world traffic scenarios.
AB - This article describes a multi-object tracking method through sensor fusion with a monocular camera and a 3-D Lidar for autonomous vehicles. Specifically, several pairwise costs derived from information such as the locations, movements, and poses of 3-D cues are designed for tracking. These costs complement each other to reduce matching errors during the tracking process. Moreover, they can be computed online efficiently on embedded hardware. We feed the pairwise costs into a data-association framework based on the Hungarian algorithm, and then perform back-end fusion of the tracking results. Experimental results on our autonomous sightseeing car demonstrate that our tracking method achieves accurate and robust results in real-world traffic scenarios.
UR - http://www.scopus.com/inward/record.url?scp=85079031427&partnerID=8YFLogxK
U2 - 10.1109/ROBIO49542.2019.8961438
DO - 10.1109/ROBIO49542.2019.8961438
M3 - Conference article published in proceeding or book
AN - SCOPUS:85079031427
T3 - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
SP - 456
EP - 460
BT - IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
PB - IEEE
T2 - 2019 IEEE International Conference on Robotics and Biomimetics, ROBIO 2019
Y2 - 6 December 2019 through 8 December 2019
ER -