TY - JOUR
T1 - Fusion of aerial, MMS and backpack images and point clouds for optimized 3D mapping in urban areas
AU - Li, Zhaojin
AU - Wu, Bo
AU - Li, Yuan
AU - Chen, Zeyu
N1 - Funding Information:
This work was supported by grants from the Hong Kong Polytechnic University (Project ID P0046112), the Research Grants Council of Hong Kong (Project Nos. PolyU 15210520, PolyU 15219821, and 15215822), and the National Natural Science Foundation of China (Project No. 42201476). The authors would also like to thank the Survey and Mapping Office of the Lands Department of the HKSAR government for providing the experimental datasets.
Publisher Copyright:
© 2023 The Author(s)
PY - 2023/8
Y1 - 2023/8
AB - Photorealistic 3D models are important data sources for digital twin cities and smart city applications. These models are usually generated from data collected separately by aerial or ground-based platforms (e.g., vehicle-based mobile mapping systems (MMSs) and backpack systems). Aerial and ground-based platforms capture overhead and ground-level views, respectively, and thus offer complementary information for better 3D mapping in urban areas. In particular, backpack mapping systems have gained popularity for urban 3D mapping in recent years, as they can flexibly reach regions (e.g., narrow alleys and pedestrian routes) that are inaccessible to vehicle-based MMSs. However, the integration of aerial and ground data for 3D mapping is hampered by difficulties such as tie-point matching among images from different platforms, which differ greatly in perspective, coverage, and scale. The optimal fusion of the results from different platforms is also challenging. Therefore, this paper presents a novel method for the fusion of aerial, MMS, and backpack images and point clouds for optimized 3D mapping in urban areas. A geometry-aware model for feature matching is developed based on the SuperGlue algorithm to obtain sufficient tie-points between aerial and ground images; these tie-points facilitate the integrated bundle adjustment of the images to reduce their geometric inconsistencies, as well as the subsequent dense image matching to generate 3D point clouds from the different image sources. A graph-based method that considers both geometric and texture traits is then developed for the optimal fusion of the point clouds from different sources to generate 3D mesh models of better quality. Experiments conducted on a challenging dataset in Hong Kong demonstrated that the geometry-aware model obtained a sufficient number of accurately matched tie-points among the aerial, MMS, and backpack images, which enabled the integrated bundle adjustment of the three image datasets to generate properly aligned point clouds. Compared with the results obtained from state-of-the-art commercial software, the 3D mesh models generated by the proposed point cloud fusion method exhibited better quality in terms of completeness, consistency, and level of detail.
KW - 3D mapping
KW - Aerial oblique imagery
KW - Backpack
KW - Mobile mapping system (MMS)
UR - http://www.scopus.com/inward/record.url?scp=85164997757&partnerID=8YFLogxK
DO - 10.1016/j.isprsjprs.2023.07.010
M3 - Journal article
AN - SCOPUS:85164997757
SN - 0924-2716
VL - 202
SP - 463
EP - 478
JO - ISPRS Journal of Photogrammetry and Remote Sensing
JF - ISPRS Journal of Photogrammetry and Remote Sensing
ER -