Enhanced RGB-D Mapping Method for Detailed 3D Modeling of Large Indoor Environments

Shengjun Tang, Qing Zhu, Wu Chen, Walid Darwish, Bo Wu, Han Hu, Min Chen

Research output: Journal article publication › Conference article › Academic research › peer-review

Abstract

RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to dense 3D mapping of indoor environments. First, they offer only a limited measurement range (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with distance from the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments that combines RGB image-based modeling and depth-based modeling. The scale ambiguity that arises during pose estimation from RGB image sequences is resolved by integrating the depth and visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models. The proposed method is evaluated on two datasets collected in indoor environments; the experimental results demonstrate its feasibility and robustness.
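The abstract does not detail how the rigid transformation between the image-based and depth-based models is recovered. As an illustration only, a standard way to align two 3D models given corresponding points is the SVD-based Kabsch method; the sketch below (the function name `rigid_transform` and the use of NumPy are assumptions, not the authors' implementation) estimates a rotation R and translation t mapping one point set onto the other.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t minimizing ||R @ src_i + t - dst_i|| over corresponding
    3D points (Kabsch/Umeyama alignment without scale).
    src, dst: (N, 3) arrays of corresponding points from the two models."""
    src_c = src.mean(axis=0)                     # centroids
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

In practice such an estimate would be computed inside a robust loop (e.g., RANSAC over candidate correspondences) to tolerate outliers, which is presumably what "robust rigid-transformation recovery" refers to.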
Original language: English
Pages (from-to): 151-158
Number of pages: 8
Journal: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume: 3
DOIs
Publication status: Published - 1 Jun 2016
Event: 23rd International Society for Photogrammetry and Remote Sensing Congress, ISPRS 2016 - Prague, Czech Republic
Duration: 12 Jul 2016 - 19 Jul 2016

Keywords

  • Camera Pose
  • Depth
  • Image
  • Indoor Modeling
  • Registration
  • RGB-D Camera

ASJC Scopus subject areas

  • Earth and Planetary Sciences (miscellaneous)
  • Environmental Science (miscellaneous)
  • Instrumentation
