State-of-the-art visual simultaneous localization and mapping (SLAM) techniques greatly facilitate three-dimensional (3D) mapping and modeling with the use of low-cost red-green-blue-depth (RGB-D) sensors. However, the effective range of such sensors is constrained by the working range of the infrared (IR) camera that provides depth information, which limits their practicality for 3D mapping and modeling. To address this limitation, we present a novel solution for enhanced 3D mapping using a low-cost RGB-D sensor. We carry out state-of-the-art visual SLAM to obtain 3D point clouds within the mapping range of the RGB-D sensor and implement an improved structure-from-motion (SfM) on the collected RGB image sequences with additional constraints from the depth information to produce image-based 3D point clouds. We then develop a feature-based scale-adaptive registration to merge the obtained point clouds and generate enhanced, extended 3D mapping results. We use two challenging test sites to examine the proposed method. At both sites, the coverage of the generated 3D models increases by more than 50% with the proposed solution. Moreover, the proposed solution achieves a geometric accuracy of about 1% over a measurement range of about 20 m. These positive experimental results not only demonstrate the feasibility and practicality of the proposed solution but also show its potential.
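The scale-adaptive registration step must align point clouds whose scales differ, since an image-only SfM reconstruction is determined only up to scale while the RGB-D SLAM cloud is metric. A minimal sketch of how a similarity transform (scale, rotation, translation) can be estimated from matched 3D feature points is shown below using the closed-form Umeyama solution; this is an illustrative assumption, not necessarily the exact registration algorithm used in the paper:

```python
import numpy as np

def similarity_from_matches(src, dst):
    """Estimate s, R, t such that dst ≈ s * R @ src_i + t for matched
    3D points (Umeyama closed-form solution). src, dst are (N, 3) arrays
    of corresponding feature points from the two point clouds."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_src, dst - mu_dst            # centered coordinates
    cov = Y.T @ X / len(src)                     # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # avoid reflections
    R = U @ S @ Vt                               # optimal rotation
    var_src = (X ** 2).sum() / len(src)          # variance of source cloud
    s = np.trace(np.diag(D) @ S) / var_src       # optimal scale
    t = mu_dst - s * R @ mu_src                  # optimal translation
    return s, R, t
```

In a full pipeline, the matched points would come from 3D features shared between the SLAM and SfM clouds (for example, via descriptor matching with outlier rejection), after which the estimated transform rescales the SfM cloud into the metric frame of the RGB-D map before merging.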