Self-feature Learning: An Efficient Deep Lightweight Network for Image Super-resolution

Jun Xiao, Qian Ye, Rui Zhao, Kin Man Lam, Kao Wan

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

1 Citation (Scopus)

Abstract

Deep learning-based models have achieved unprecedented performance in single image super-resolution (SISR). However, existing deep learning-based models usually require high computational complexity to generate high-quality images, which limits their application on edge devices, e.g., mobile phones. To address this issue, we propose a dynamic, channel-agnostic filtering method in this paper. The proposed method not only adaptively generates convolutional kernels based on the local information at each position, but also significantly reduces the cost of computing inter-channel redundancy. Based on this, we further propose a simple, yet effective, deep lightweight model for SISR. Experimental results show that our proposed model outperforms other state-of-the-art deep lightweight SISR models, achieving the best trade-off between performance and the number of model parameters.
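The core idea of dynamic, channel-agnostic filtering can be sketched in a toy, pure-Python form: one kernel is predicted per spatial position from local information and then shared across all channels, instead of predicting a separate kernel per channel. This is only an illustration of the general technique, not the paper's actual model; the function names are hypothetical, and the softmax over the channel-mean patch stands in for the learned kernel-generating branch described in the paper.

```python
import math

def dynamic_channel_agnostic_filter(feats, kernel_fn, k=3):
    """Apply a per-position k x k kernel that is shared by all channels.

    feats: list of C channels, each an H x W nested list of floats.
    kernel_fn: maps the k*k local "information" vector at a position
        to k*k kernel weights (stand-in for a learned kernel branch).
    Uses zero padding at the borders.
    """
    C = len(feats)
    H, W = len(feats[0]), len(feats[0][0])
    r = k // 2

    def patch(ch, y, x):
        # k*k neighbourhood of channel `ch` around (y, x), row-major.
        return [
            feats[ch][y + dy][x + dx]
            if 0 <= y + dy < H and 0 <= x + dx < W else 0.0
            for dy in range(-r, r + 1) for dx in range(-r, r + 1)
        ]

    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for y in range(H):
        for x in range(W):
            # Local information at (y, x): patch of the channel mean.
            mean_patch = [
                sum(patch(c, y, x)[i] for c in range(C)) / C
                for i in range(k * k)
            ]
            kernel = kernel_fn(mean_patch)  # ONE kernel for all channels
            for c in range(C):
                p = patch(c, y, x)
                out[c][y][x] = sum(w * v for w, v in zip(kernel, p))
    return out

def softmax_kernel(local_info):
    # Toy kernel generator: softmax over the local patch values.
    m = max(local_info)
    e = [math.exp(v - m) for v in local_info]
    s = sum(e)
    return [v / s for v in e]
```

Because the generated kernel is reused across every channel, the kernel-prediction cost is independent of the channel count, which is where the savings over per-channel dynamic filtering come from.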

Original language: English
Title of host publication: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 4408-4416
Number of pages: 9
ISBN (Electronic): 9781450386517
DOIs
Publication status: Published - 17 Oct 2021
Event: 29th ACM International Conference on Multimedia, MM 2021 - Virtual, Online, China
Duration: 20 Oct 2021 → 24 Oct 2021

Publication series

Name: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia

Conference

Conference: 29th ACM International Conference on Multimedia, MM 2021
Country/Territory: China
City: Virtual, Online
Period: 20/10/21 → 24/10/21

Keywords

  • image processing
  • single image super-resolution

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Software
  • Computer Graphics and Computer-Aided Design
