TY - CONF
T1 - Deep multi-model fusion for single-image dehazing
AU - Deng, Zijun
AU - Zhu, Lei
AU - Hu, Xiaowei
AU - Fu, Chi-Wing
AU - Xu, Xuemiao
AU - Zhang, Qing
AU - Qin, Jing
AU - Heng, Pheng-Ann
N1 - Funding Information:
Acknowledgments. The work is supported by CUHK Research Committee Funding (Direct Grants) under project code - 4055103, Research Grants Council of the Hong Kong Special Administrative Region (No. CUHK 14201717), Science and Technology Plan Project of Guangzhou (No.201704020141), Shenzhen Science and Technology Program (Project no. JCYJ20170413162617606), NSFC (Grant No. 61772206, U1611461, 61472145), Guangdong R&D key project of China (Grant No. 2018B010107003), Guangdong High-level personnel program (Grant No. 2016TQ03X319), Guangdong NSF (Grant No. 2017A030311027), and Guangzhou key project in industrial technology (Grant No. 201802010027).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - This paper presents a deep multi-model fusion network that attentively integrates multiple models to separate haze layers and boost the performance of single-image dehazing. To do so, we first formulate an attentional feature integration module to maximize the integration of convolutional neural network (CNN) features at different CNN layers and generate the attentional multi-level integrated features (AMLIF). Then, from the AMLIF, we further predict a haze-free result for an atmospheric scattering model, as well as for four haze-layer separation models, and fuse the results to produce the final haze-free image. To evaluate the effectiveness of our method, we compare our network with several state-of-the-art methods on two widely-used dehazing benchmark datasets, as well as on two sets of real-world hazy images. Experimental results demonstrate clear quantitative and qualitative improvements of our method over the state of the art.
UR - http://www.scopus.com/inward/record.url?scp=85081915270&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00254
DO - 10.1109/ICCV.2019.00254
M3 - Conference article published in proceeding or book
AN - SCOPUS:85081915270
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2453
EP - 2462
BT - Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th IEEE/CVF International Conference on Computer Vision, ICCV 2019
Y2 - 27 October 2019 through 2 November 2019
ER -