EDGAN: motion deblurring algorithm based on enhanced generative adversarial networks

Yong Zhang, Shao Yong Ma, Xi Zhang, Li Li, Wai Hung Ip, Kai Leung Yung

Research output: Journal article (Academic research, peer-reviewed)

2 Citations (Scopus)


Removing motion blur is a long-standing problem in computer vision. Motion blur is caused by relative motion between the camera and the photographed object. In recent years, deep learning algorithms have made notable progress on image deblurring. In this paper, an enhanced generative adversarial network model is proposed. The proposed model uses feature-channel weights to generate sharp images and eliminates checkerboard artefacts, and its mixed loss function enables the network to output high-quality images. The proposed approach is tested on the GOPRO and Lai datasets. On the GOPRO dataset, the peak signal-to-noise ratio of the proposed approach reaches 28.674, compared with 27.454 for DeblurGAN, and the structural similarity measure reaches 0.969, compared with 0.939 for DeblurGAN. Furthermore, images obtained from China’s Chang’e 3 Lander were used to test the new algorithm. Because checkerboard artefacts are eliminated, the deblurred images have a better visual appearance. In the benchmark-dataset experiments, the proposed method achieved higher performance and efficiency in both qualitative and quantitative terms. The results also provide insights for the design and development of the camera pointing system mounted on the Lander to capture images of the moon and the rover during the Chang’e space mission.
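The checkerboard artefacts mentioned above typically arise when a generator upsamples with transposed convolutions whose strides and kernel sizes overlap unevenly; the keyword "resize convolution" refers to the common remedy of upsampling first (e.g. nearest-neighbour) and then applying an ordinary convolution, so every output pixel is produced by the same kernel footprint. The following is a minimal numpy sketch of that idea, not the paper's actual network; the function names and the naive cross-correlation loop are illustrative assumptions.

```python
import numpy as np

def nearest_upsample(x, scale=2):
    # Nearest-neighbour upsampling: repeat rows and columns `scale` times.
    return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)

def conv2d_same(x, k):
    # Naive "same"-padded 2D cross-correlation (no kernel flip),
    # kept deliberately simple for illustration.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def resize_conv(x, k, scale=2):
    # Resize convolution: upsample first, then convolve. Unlike a strided
    # transposed convolution, the kernel covers every output position
    # uniformly, which avoids the checkerboard pattern.
    return conv2d_same(nearest_upsample(x, scale), k)

# A constant input stays (approximately) constant away from the borders:
x = np.ones((4, 4))
k = np.ones((3, 3)) / 9.0  # simple averaging kernel
y = resize_conv(x, k, scale=2)
```

In the paper's setting this replaces the deconvolution layers of the generator; the learned kernel takes the place of the fixed averaging kernel used here.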

Original language: English
Pages (from-to): 8922–8937
Number of pages: 16
Journal: Journal of Supercomputing
Issue number: 11
Publication status: Published - 1 Nov 2020


Keywords

  • Blurred image
  • Camera pointing system
  • Chang’e space mission
  • GANs
  • Resize convolution

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Information Systems
  • Hardware and Architecture
