TY - JOUR
T1 - InconSeg: Residual-Guided Fusion with Inconsistent Multi-modal Data for Negative and Positive Road Obstacles Segmentation
AU - Feng, Zhen
AU - Guo, Yanning
AU - Navarro-Alarcon, David
AU - Lyu, Yueyong
AU - Sun, Yuxiang
N1 - Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 62003286, in part by Zhejiang Lab under Grant 2021NL0AB01, in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515010116, in part by CCF-Baidu Open Fund under Grant 182215PCK04183, and in part by the Start-up Fund of HK PolyU under Grant P0034801.
Publisher Copyright:
© 2016 IEEE.
PY - 2023/8/1
Y1 - 2023/8/1
N2 - Segmentation of road obstacles, including negative and positive obstacles, is critical to the safe navigation of autonomous vehicles. Recent methods have shown increasing interest in multi-modal data fusion (e.g., RGB and depth/disparity images). Although these methods achieve improved segmentation accuracy, we find that their performance can be easily degraded when the two modalities carry inconsistent information, for example, distant obstacles that are visible in RGB images but not in depth/disparity images. To address this issue, we propose a novel two-encoder-two-decoder RGB-depth/disparity multi-modal network with Residual-Guided Fusion modules. Unlike most existing networks that fuse feature maps in the encoders, we fuse feature maps in the decoders. We also release a large-scale RGB-depth/disparity dataset recorded in both urban and rural environments, with manually labeled ground truth for both negative- and positive-obstacle segmentation. Extensive experimental results demonstrate that our network achieves state-of-the-art performance compared with other networks.
AB - Segmentation of road obstacles, including negative and positive obstacles, is critical to the safe navigation of autonomous vehicles. Recent methods have shown increasing interest in multi-modal data fusion (e.g., RGB and depth/disparity images). Although these methods achieve improved segmentation accuracy, we find that their performance can be easily degraded when the two modalities carry inconsistent information, for example, distant obstacles that are visible in RGB images but not in depth/disparity images. To address this issue, we propose a novel two-encoder-two-decoder RGB-depth/disparity multi-modal network with Residual-Guided Fusion modules. Unlike most existing networks that fuse feature maps in the encoders, we fuse feature maps in the decoders. We also release a large-scale RGB-depth/disparity dataset recorded in both urban and rural environments, with manually labeled ground truth for both negative- and positive-obstacle segmentation. Extensive experimental results demonstrate that our network achieves state-of-the-art performance compared with other networks.
KW - Negative obstacles
KW - autonomous vehicles
KW - multi-modal fusion
KW - road obstacles
KW - semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85159704258&partnerID=8YFLogxK
U2 - 10.1109/LRA.2023.3272517
DO - 10.1109/LRA.2023.3272517
M3 - Journal article
AN - SCOPUS:85159704258
SN - 2377-3766
VL - 8
SP - 4871
EP - 4878
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 8
ER -