High-resolution imagery, with its high-quality imaging, short revisit times, and relatively low cost, is an attractive data source for 3D reconstruction applications. Photogrammetric 3D reconstruction requires reliable and dense image matching. In urban areas, however, image matching is particularly difficult because of the complexity of urban textures and the severe occlusions caused by buildings. This paper presents an integrated image matching and segmentation approach, named SATM+, for 3D reconstruction in urban areas. SATM+ builds on our existing self-adaptive triangulation-constrained matching (SATM) framework and incorporates three novel components to address image matching challenges in urban areas: (1) image segmentation-based occlusion filtering, (2) segment-adaptive similarity measurement to reduce matching ambiguity, and (3) local and regional dense matching propagation to generate reliable, dense matches. We experimentally analyzed two sets of high-resolution urban images and compared the 3D point clouds generated by the proposed SATM+ with airborne light detection and ranging (lidar) data and with point clouds generated by the semi-global matching (SGM) method. The results indicate that SATM+ produces 3D point clouds with a geometric accuracy comparable to that of lidar data but with a much higher point density. SATM+ performs similarly to SGM in relatively flat areas and outperforms it in built-up areas. The proposed approach is therefore a promising option for image-based 3D surface reconstruction in urban areas.