TY - GEN
T1 - Dual-view Attention Networks for Single Image Super-Resolution
AU - Guo, Jingcai
AU - Ma, Shiheng
AU - Zhang, Jie
AU - Zhou, Qihua
AU - Guo, Song
N1 - Funding Information:
We sincerely thank all reviewers for their efforts in reviewing our manuscript. This work was financially supported by the National Natural Science Foundation of China (Grant No. 61872310).
Publisher Copyright:
© 2020 ACM.
PY - 2020/10/12
Y1 - 2020/10/12
N2 - One non-negligible flaw of convolutional neural network (CNN)-based single image super-resolution (SISR) models is that most of them cannot restore high-resolution (HR) images containing sufficient high-frequency information. Worse still, as the depth of CNNs increases, training easily suffers from vanishing gradients. These problems hinder the effectiveness of CNNs in SISR. In this paper, we propose Dual-view Attention Networks to alleviate these problems for SISR. Specifically, we propose local-aware (LA) and global-aware (GA) attentions to treat low-resolution (LR) features unequally, which can highlight high-frequency components and discriminate each feature of the LR image in the local and global views, respectively. Furthermore, we propose the local attentive residual-dense (LARD) block, which combines LA attention with multiple residual and dense connections to form a deeper yet easy-to-train architecture. Experimental results verify the effectiveness of our model compared with other state-of-the-art methods.
AB - One non-negligible flaw of convolutional neural network (CNN)-based single image super-resolution (SISR) models is that most of them cannot restore high-resolution (HR) images containing sufficient high-frequency information. Worse still, as the depth of CNNs increases, training easily suffers from vanishing gradients. These problems hinder the effectiveness of CNNs in SISR. In this paper, we propose Dual-view Attention Networks to alleviate these problems for SISR. Specifically, we propose local-aware (LA) and global-aware (GA) attentions to treat low-resolution (LR) features unequally, which can highlight high-frequency components and discriminate each feature of the LR image in the local and global views, respectively. Furthermore, we propose the local attentive residual-dense (LARD) block, which combines LA attention with multiple residual and dense connections to form a deeper yet easy-to-train architecture. Experimental results verify the effectiveness of our model compared with other state-of-the-art methods.
KW - convolutional neural networks
KW - dual-view aware attention
KW - highlight
KW - super-resolution
UR - http://www.scopus.com/inward/record.url?scp=85106156183&partnerID=8YFLogxK
U2 - 10.1145/3394171.3413613
DO - 10.1145/3394171.3413613
M3 - Conference article published in proceeding or book
AN - SCOPUS:85106156183
T3 - MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia
SP - 2728
EP - 2736
BT - Proceedings of the 28th ACM International Conference on Multimedia (ACM-MM)
PB - Association for Computing Machinery, Inc
T2 - 28th ACM International Conference on Multimedia, MM 2020
Y2 - 12 October 2020 through 16 October 2020
ER -