Why-So-Deep: Towards Boosting Previously Trained Models for Visual Place Recognition

M. Usman Maqbool Bhutta, Yuxiang Sun, Darwin Lau, Ming Liu

Research output: Journal article › Academic research › peer-review

5 Citations (Scopus)

Abstract

Deep learning-based image retrieval techniques for loop closure detection demonstrate satisfactory performance. However, it remains challenging to achieve high performance with previously trained models in different geographical regions. This letter addresses the problem of deploying such models with simultaneous localization and mapping (SLAM) systems in new environments. General baseline approaches enhance the recall rate by using additional information, such as GPS or sequential keyframe tracking, or by re-training on the whole environment. We propose a novel approach for improving image retrieval based on previously trained models. We present an intelligent method, MAQBOOL, that amplifies the power of pre-trained models for better image recall, and we demonstrate its application to real-time multiagent SLAM systems. By using spatial information to improve the recall rate, we achieve image retrieval results at a low descriptor dimension (512-D) comparable to those of state-of-the-art methods at a high descriptor dimension (4096-D). Material related to this work is available at https://usmanmaqbool.github.io/why-so-deep.
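The retrieval setting the abstract refers to can be illustrated with a minimal sketch: ranking database images by distance between global descriptors (e.g. 512-D vectors, as in the low-dimension setting mentioned above). The function name, toy data, and plain nearest-neighbor ranking here are illustrative assumptions, not the paper's MAQBOOL method, which additionally exploits spatial information.

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=5):
    """Rank database images by L2 distance between L2-normalized descriptors.

    query_desc: (D,) global image descriptor (e.g. D = 512).
    db_descs:   (N, D) database descriptors.
    Returns the indices of the k nearest database images.
    """
    # Normalize so L2 distance gives the same ranking as cosine similarity.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    dists = np.linalg.norm(db - q, axis=1)
    return np.argsort(dists)[:k]

# Toy example: 100 random 512-D descriptors; entry 42 is made close to the query.
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 512))
query = db[42] + 0.01 * rng.standard_normal(512)
top = retrieve_top_k(query, db, k=5)
print(top[0])  # entry 42 should rank first
```

In a real place-recognition pipeline the descriptors would come from a pre-trained CNN rather than random data, and the top-ranked candidates would then be verified (e.g. geometrically) before being used for loop closure.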

Original language: English
Pages (from-to): 1824-1831
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 2
Publication status: Published - 1 Apr 2022

Keywords

  • Localization
  • Recognition
  • Visual learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence

