Efficient Intra Bitrate Transcoding for Screen Content Coding Based on Convolutional Neural Network

Wei Kuang, Yui Lam Chan, Sik Ho Tsang

Research output: Journal article publication › Journal article › Academic research › peer-review

3 Citations (Scopus)

Abstract

The Screen Content Coding (SCC) extension of High Efficiency Video Coding (HEVC) is developed to improve the coding efficiency of screen content videos. To meet the diverse network requirements of different clients, bitrate transcoding for SCC is desired. This problem can be solved by a conventional brute-force transcoder (CBFT), which concatenates an original decoder and an original encoder. However, the re-encoding part of CBFT induces high computational complexity. This paper presents a convolutional neural network based bitrate transcoder (CNN-BRT) for SCC. By utilizing information from both the decoder side and the encoder side, CNN-BRT makes a fast prediction for all coding units (CUs) of a coding tree unit (CTU) in a single test. At the decoder side, decoded optimal mode maps that reflect the optimal modes and CU partitions in a CTU are derived. At the encoder side, the raw samples in a CTU are collected. Then, they are fed to CNN-BRT to make a fast prediction. To imitate the optimal mode selection in the original re-encoding part, CNN-BRT involves a loss function that takes both the sub-optimal modes and the final optimal modes into consideration. Compared with the HEVC-SCC reference software SCM-3.0, the proposed CNN-BRT reduces encoding time by 54.86% on average with a negligible Bjontegaard delta bitrate increase of 1.01% under the all-intra configuration.
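The abstract mentions a loss function that considers both the sub-optimal modes and the final optimal mode of each CU. The paper's exact formulation is not given here, so the following is only a minimal sketch of one plausible reading: a cross-entropy loss against a soft target that puts most of its weight on the final optimal mode and spreads the remainder over the sub-optimal modes. The function name, the mixing weight `alpha`, and the three-way mode labels (e.g., intra, intra block copy, palette, which are SCC intra modes) are illustrative assumptions, not the authors' definitions.

```python
import math

def mode_aware_loss(logits, optimal_mode, suboptimal_modes, alpha=0.7):
    """Cross-entropy between predicted mode logits and a soft target.

    The target places weight `alpha` on the final optimal mode and shares
    the remaining (1 - alpha) equally among the sub-optimal modes. This is
    a hypothetical sketch, not the loss used in the paper.
    """
    num_modes = len(logits)
    target = [0.0] * num_modes
    if suboptimal_modes:
        target[optimal_mode] = alpha
        share = (1.0 - alpha) / len(suboptimal_modes)
        for m in suboptimal_modes:
            target[m] += share
    else:
        target[optimal_mode] = 1.0  # no sub-optimal modes: hard label

    # Numerically stable log-softmax over the mode logits.
    peak = max(logits)
    log_sum = math.log(sum(math.exp(x - peak) for x in logits))
    return -sum(t * ((x - peak) - log_sum) for t, x in zip(target, logits))

# Example: three candidate modes; mode 0 is optimal, mode 1 was sub-optimal.
loss_good = mode_aware_loss([5.0, 0.0, 0.0], optimal_mode=0, suboptimal_modes=[1])
loss_bad = mode_aware_loss([0.0, 0.0, 5.0], optimal_mode=0, suboptimal_modes=[1])
```

Under this reading, a prediction concentrated on the optimal mode yields a lower loss than one concentrated on a never-chosen mode, while sub-optimal modes are penalized less than irrelevant ones.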

Original language: English
Article number: 8787831
Pages (from-to): 107211-107224
Number of pages: 14
Journal: IEEE Access
Volume: 7
DOIs
Publication status: Published - 5 Aug 2019

Keywords

  • convolutional neural network
  • fast algorithm
  • screen content coding (SCC)
  • Transcoding

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)
