TY - GEN
T1 - Monocular Visual Odometry using Learned Repeatability and Description
AU - Huang, Huaiyang
AU - Ye, Haoyang
AU - Sun, Yuxiang
AU - Liu, Ming
N1 - Funding Information:
This work was supported by the National Natural Science Foundation of China under grant No. U1713211, the Shenzhen Science, Technology and Innovation Commission (SZSTI) under grant JCYJ20160428154842603, and the Research Grants Council of Hong Kong SAR Government, China, under Project Nos. 11210017 and 21202816, awarded to Prof. Ming Liu.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - The robustness and accuracy of monocular visual odometry (VO) in challenging environments are of wide concern. In this paper, we present a monocular VO system leveraging learned repeatability and description. In a hybrid scheme, the camera pose is first tracked directly on the predicted repeatability maps and then refined with patch-wise 3D-2D association. The local feature parameterization and the adapted mapping module further enhance different functionalities of the system. Extensive evaluations on challenging public datasets are performed. The competitive performance on camera pose estimation demonstrates the effectiveness of our method. Additional studies on local reconstruction accuracy and running time show that our system maintains a robust and lightweight backend.
AB - The robustness and accuracy of monocular visual odometry (VO) in challenging environments are of wide concern. In this paper, we present a monocular VO system leveraging learned repeatability and description. In a hybrid scheme, the camera pose is first tracked directly on the predicted repeatability maps and then refined with patch-wise 3D-2D association. The local feature parameterization and the adapted mapping module further enhance different functionalities of the system. Extensive evaluations on challenging public datasets are performed. The competitive performance on camera pose estimation demonstrates the effectiveness of our method. Additional studies on local reconstruction accuracy and running time show that our system maintains a robust and lightweight backend.
UR - http://www.scopus.com/inward/record.url?scp=85092715279&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9197406
DO - 10.1109/ICRA40945.2020.9197406
M3 - Conference article published in proceeding or book
AN - SCOPUS:85092715279
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 8913
EP - 8919
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
PB - IEEE
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -