GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Models

Huaiyang Huang, Haoyang Ye, Yuxiang Sun, Ming Liu

Research output: Journal article, academic research, peer-reviewed

25 Citations (Scopus)

Abstract

Incorporating prior structure information into visual state estimation generally improves localization performance. In this letter, we aim to address the trade-off between accuracy and efficiency in coupling visual factors with structure constraints. To this end, we present a cross-modality method that tracks a camera in a prior map modelled by a Gaussian Mixture Model (GMM). Starting from an initial pose estimate provided by the front-end, local visual observations are efficiently associated with map components, while the visual structure from triangulation is refined simultaneously. By introducing hybrid structure factors into the joint optimization, the camera poses are bundle-adjusted together with the local visual structure. Evaluations of the complete system, GMMLoc, on a public dataset show that it provides centimeter-level localization accuracy with only trivial computational overhead. In addition, comparative studies with state-of-the-art vision-dominant state estimators demonstrate the competitive performance of our method.
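
To make the association step concrete, the sketch below illustrates one common way such cross-modality matching can be done: gating triangulated 3D landmarks against Gaussian map components by Mahalanobis distance. This is a minimal, hypothetical Python example; the class and function names, the chi-square gate, and the numbers are illustrative assumptions, not GMMLoc's actual implementation or API.

```python
import numpy as np

# Hypothetical sketch of GMM-based data association: each map component
# is a 3D Gaussian, and a triangulated landmark is matched to the
# closest component by squared Mahalanobis distance.

class GaussianComponent:
    def __init__(self, mean, cov, weight):
        self.mean = np.asarray(mean, dtype=float)       # 3D component mean
        self.cov_inv = np.linalg.inv(np.asarray(cov))   # precomputed inverse covariance
        self.weight = weight                            # mixture weight

    def mahalanobis_sq(self, point):
        d = np.asarray(point, dtype=float) - self.mean
        return float(d @ self.cov_inv @ d)

def associate(point, components, gate=9.0):
    """Return the best-matching component for a triangulated point,
    or None if no component passes the chi-square gate
    (9.0 corresponds roughly to 3 sigma for 3 DoF)."""
    best, best_d2 = None, gate
    for c in components:
        d2 = c.mahalanobis_sq(point)
        if d2 < best_d2:
            best, best_d2 = c, d2
    return best

# Illustrative example: two map components, one triangulated landmark.
gmm = [
    GaussianComponent([0.0, 0.0, 1.0], np.diag([0.01, 0.01, 0.04]), 0.6),
    GaussianComponent([2.0, 0.5, 1.2], np.diag([0.02, 0.02, 0.02]), 0.4),
]
match = associate([0.05, -0.02, 1.1], gmm)
print("matched component mean:", None if match is None else match.mean)
```

In a full pipeline of the kind the abstract describes, the matched components would then enter the joint optimization as structure factors constraining both the camera poses and the local visual structure.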

Original language: English
Article number: 9126150
Pages (from-to): 5043-5050
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 5
Issue number: 4
DOIs
Publication status: Published - Oct 2020
Externally published: Yes

Keywords

  • Localization
  • SLAM
  • visual-based navigation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence
