TY - JOUR
T1 - Expanding Sparse LiDAR Depth and Guiding Stereo Matching for Robust Dense Depth Estimation
AU - Xu, Zhenyu
AU - Li, Yuehua
AU - Zhu, Shiqiang
AU - Sun, Yuxiang
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant U21B6001, in part by the National Key Research and Development Program of China under Grant 2018AAA0102700, in part by the Ten Thousand Talents Program of Zhejiang Province under Grant 2019R51010, and in part by the Stable Support Project of the State Administration of Science, Technology and Industry for National Defence, PRC, under Grant HTKJ2019KL502005.
Publisher Copyright:
© 2016 IEEE.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - Dense depth estimation is an important task for applications such as object detection and 3-D reconstruction. Stereo matching, a popular method for dense depth estimation, faces challenges when low texture, occlusions, or domain gaps exist. Stereo-LiDAR fusion has recently become a promising way to deal with these challenges. However, due to the sparsity and uneven distribution of LiDAR depth data, existing stereo-LiDAR fusion methods tend to ignore the LiDAR data when their density is low or when they differ greatly from the depth predicted from the stereo images. To address this problem, we propose a stereo-LiDAR fusion method that first expands the sparse LiDAR depth to a semi-dense depth using the RGB image as a reference. Then, based on the semi-dense depth, a varying-weight Gaussian guiding method is proposed to handle the varying reliability of the guiding signals. A multi-scale feature extraction and fusion method is further used to enhance the network, showing superior performance over traditional sparsity-invariant convolution methods. Experimental results on different public datasets demonstrate the superior accuracy and robustness of our method over the state of the art.
AB - Dense depth estimation is an important task for applications such as object detection and 3-D reconstruction. Stereo matching, a popular method for dense depth estimation, faces challenges when low texture, occlusions, or domain gaps exist. Stereo-LiDAR fusion has recently become a promising way to deal with these challenges. However, due to the sparsity and uneven distribution of LiDAR depth data, existing stereo-LiDAR fusion methods tend to ignore the LiDAR data when their density is low or when they differ greatly from the depth predicted from the stereo images. To address this problem, we propose a stereo-LiDAR fusion method that first expands the sparse LiDAR depth to a semi-dense depth using the RGB image as a reference. Then, based on the semi-dense depth, a varying-weight Gaussian guiding method is proposed to handle the varying reliability of the guiding signals. A multi-scale feature extraction and fusion method is further used to enhance the network, showing superior performance over traditional sparsity-invariant convolution methods. Experimental results on different public datasets demonstrate the superior accuracy and robustness of our method over the state of the art.
KW - AI-based methods
KW - Computer vision for automation
KW - Sensor fusion
UR - http://www.scopus.com/inward/record.url?scp=85148300135&partnerID=8YFLogxK
U2 - 10.1109/LRA.2023.3240093
DO - 10.1109/LRA.2023.3240093
M3 - Journal article
AN - SCOPUS:85148300135
SN - 2377-3766
VL - 8
SP - 1479
EP - 1486
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 3
ER -