TY - JOUR
T1 - Self-Supervised Depth Estimation Leveraging Global Perception and Geometric Smoothness
AU - Jia, Shaocheng
AU - Pei, Xin
AU - Yao, Wei
AU - Wong, S. C.
N1 - Funding Information:
This work was supported in part by the National Key Research and Development Program of China under Grant 2021YFC3001500; in part by the Key Program of National Natural Science Foundation of China under Grant U21B2089
Publisher Copyright:
© 2000-2011 IEEE.
PY - 2023/2/1
Y1 - 2023/2/1
N2 - Self-supervised depth estimation has drawn much attention in recent years as it does not require labeled data but image sequences. Moreover, it can be conveniently used in various applications, such as autonomous driving, robotics, realistic navigation, and smart cities. However, extracting global contextual information from images and predicting a geometrically natural depth map remain challenging. In this paper, we present DLNet for pixel-wise depth estimation, which simultaneously extracts global and local features with the aid of our depth Linformer block. This block consists of the Linformer and innovative soft split multi-layer perceptron blocks. Moreover, a three-dimensional geometry smoothness loss is proposed to predict a geometrically natural depth map by imposing the second-order smoothness constraint on the predicted three-dimensional point clouds, thereby realizing improved performance as a byproduct. Finally, we explore the multi-scale prediction strategy and propose the maximum margin dual-scale prediction strategy for further performance improvement. In experiments on the KITTI and Make3D benchmarks, the proposed DLNet achieves performance competitive to those of the state-of-the-art methods, reducing time and space complexities by more than 62% and 56% at a resolution of 416 × 128 , respectively. Extensive testing on various real-world situations further demonstrates the strong practicality and generalization capability of the proposed model.
AB - Self-supervised depth estimation has drawn much attention in recent years as it does not require labeled data but image sequences. Moreover, it can be conveniently used in various applications, such as autonomous driving, robotics, realistic navigation, and smart cities. However, extracting global contextual information from images and predicting a geometrically natural depth map remain challenging. In this paper, we present DLNet for pixel-wise depth estimation, which simultaneously extracts global and local features with the aid of our depth Linformer block. This block consists of the Linformer and innovative soft split multi-layer perceptron blocks. Moreover, a three-dimensional geometry smoothness loss is proposed to predict a geometrically natural depth map by imposing the second-order smoothness constraint on the predicted three-dimensional point clouds, thereby realizing improved performance as a byproduct. Finally, we explore the multi-scale prediction strategy and propose the maximum margin dual-scale prediction strategy for further performance improvement. In experiments on the KITTI and Make3D benchmarks, the proposed DLNet achieves performance competitive to those of the state-of-the-art methods, reducing time and space complexities by more than 62% and 56% at a resolution of 416 × 128 , respectively. Extensive testing on various real-world situations further demonstrates the strong practicality and generalization capability of the proposed model.
KW - 3D reconstruction
KW - Depth estimation
KW - Linformer
KW - self-supervised learning
KW - visual odometry
UR - http://www.scopus.com/inward/record.url?scp=85141633662&partnerID=8YFLogxK
U2 - 10.1109/TITS.2022.3219604
DO - 10.1109/TITS.2022.3219604
M3 - Journal article
AN - SCOPUS:85141633662
SN - 1524-9050
VL - 24
SP - 1502
EP - 1517
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 2
ER -