Mode Information Guided CNN for Quality Enhancement of Screen Content Coding

Ziyin Huang, Yui Lam Chan, Sik Ho Tsang, Kin Man Lam

Research output: Journal article › Academic research › peer-review

1 Citation (Scopus)


Video quality enhancement methods are of great significance in reducing the artifacts of decoded videos in High Efficiency Video Coding (HEVC). However, existing methods mainly focus on improving the quality of natural sequences rather than screen content sequences, which have drawn increasing attention due to the demand for remote desktops and online meetings. Unlike natural sequences, which are encoded by HEVC, screen content sequences are encoded by Screen Content Coding (SCC), an extension of HEVC. Therefore, we propose a Mode Information guided CNN (MICNN) to further improve the quality of screen content sequences at the decoder side. To exploit the characteristics of screen content sequences, we extract the mode information from the bitstream as an input to MICNN. Furthermore, because the number of available screen content sequences is limited, we establish a large-scale dataset to train and validate our MICNN. Experimental results show that our proposed MICNN achieves a 3.41% BD-rate saving on average. In addition, our MICNN method consumes acceptable computational time compared with other video quality enhancement methods.
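The 3.41% figure above is a Bjøntegaard delta-rate (BD-rate), the standard coding-efficiency metric comparing two rate-distortion curves. As a hedged illustration of how such a figure is computed, below is a minimal pure-Python sketch of the common BD-rate procedure (cubic fit of log-bitrate versus PSNR over four RD points, integrated over the overlapping PSNR range); the function names and all sample numbers are hypothetical and not taken from the paper.

```python
import math

def _cubic_fit(xs, ys):
    """Fit a cubic through exactly 4 points by solving the 4x4
    Vandermonde system with Gaussian elimination (partial pivoting)."""
    n = 4
    A = [[x ** j for j in range(n)] for x in xs]
    b = list(ys)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][c] * coef[c]
                              for c in range(i + 1, n))) / A[i][i]
    return coef  # c0 + c1*x + c2*x^2 + c3*x^3

def _integral(coef, lo, hi):
    """Definite integral of the cubic polynomial on [lo, hi]."""
    F = lambda x: sum(c * x ** (j + 1) / (j + 1)
                      for j, c in enumerate(coef))
    return F(hi) - F(lo)

def bd_rate(anchor, test):
    """BD-rate in percent; each argument is a list of four
    (bitrate_kbps, psnr_db) points. Negative means bitrate saving."""
    la = [(p, math.log10(r)) for r, p in anchor]
    lt = [(p, math.log10(r)) for r, p in test]
    ca = _cubic_fit([p for p, _ in la], [lr for _, lr in la])
    ct = _cubic_fit([p for p, _ in lt], [lr for _, lr in lt])
    lo = max(min(p for p, _ in la), min(p for p, _ in lt))
    hi = min(max(p for p, _ in la), max(p for p, _ in lt))
    avg_diff = (_integral(ct, lo, hi) - _integral(ca, lo, hi)) / (hi - lo)
    return (10 ** avg_diff - 1) * 100.0
```

For example, a test codec reaching the same PSNR at 90% of the anchor bitrate at every point yields a BD-rate of -10%, i.e., a 10% bitrate saving.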

Original language: English
Pages (from-to): 24149-24161
Number of pages: 13
Journal: IEEE Access
Publication status: Published - Feb 2023


Keywords

  • Convolutional neural network
  • HEVC
  • SCC
  • deep learning
  • quality enhancement

ASJC Scopus subject areas

  • General Engineering
  • General Materials Science
  • Electrical and Electronic Engineering
  • General Computer Science


