Monocular Visual Odometry using Learned Repeatability and Description

Huaiyang Huang, Haoyang Ye, Yuxiang Sun, Ming Liu

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)


The robustness and accuracy of monocular visual odometry (VO) in challenging environments are of wide concern. In this paper, we present a monocular VO system that leverages learned repeatability and description. In a hybrid scheme, the camera pose is first tracked on the predicted repeatability maps in a direct manner and then refined with patch-wise 3D-2D association. The local feature parameterization and the adapted mapping module further strengthen different functionalities of the system. Extensive evaluations on challenging public datasets are performed; the competitive performance on camera pose estimation demonstrates the effectiveness of our method. Additional studies on local reconstruction accuracy and running time show that our system maintains a robust and lightweight backend.
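To illustrate the kind of direct tracking the abstract describes, here is a minimal, hypothetical sketch (not the authors' implementation): it estimates a 2D translation by gradient ascent on the summed repeatability-map response at shifted keypoints, using a synthetic single-peak map. All names, the translation-only motion model, and the numerical gradient are illustrative simplifications of full SE(3) direct alignment.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img at the float coordinate (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

def track_translation(score_map, pts, t0, iters=200, lr=20.0, eps=0.5):
    """Estimate a 2D translation by gradient ascent on the summed
    score-map response at the shifted keypoints (numerical gradient)."""
    t = np.asarray(t0, dtype=float)

    def score(tt):
        return sum(bilinear(score_map, px + tt[0], py + tt[1])
                   for px, py in pts)

    for _ in range(iters):
        # Central-difference gradient of the summed response w.r.t. t.
        g = np.array([
            (score(t + [eps, 0.0]) - score(t - [eps, 0.0])) / (2 * eps),
            (score(t + [0.0, eps]) - score(t - [0.0, eps])) / (2 * eps),
        ])
        t = t + lr * g
    return t

# Synthetic repeatability map: a single Gaussian peak at (40, 30).
ys, xs = np.mgrid[0:64, 0:64]
score_map = np.exp(-((xs - 40.0) ** 2 + (ys - 30.0) ** 2) / (2 * 8.0 ** 2))

# One reference keypoint; the "true" inter-frame shift is (8, 8).
pts = [(32.0, 22.0)]
t_hat = track_translation(score_map, pts, t0=(4.0, 4.0))
```

Starting inside the basin of attraction, the ascent recovers a shift close to (8, 8); the real system replaces this toy objective with a full camera-pose parameterization and the network's predicted maps.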

Original language: English
Title of host publication: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Number of pages: 7
ISBN (Electronic): 9781728173955
Publication status: Published - May 2020
Externally published: Yes
Event: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020 - Paris, France
Duration: 31 May 2020 - 31 Aug 2020

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729


Conference: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
