Efficient and Effective Context-Based Convolutional Entropy Modeling for Image Compression

Mu Li, Kede Ma, Jia You, David Zhang

Research output: Journal article (peer-reviewed)


Precise estimation of the probabilistic structure of natural images plays an essential role in image compression. Despite the recent remarkable success of end-to-end optimized image compression, the latent codes are usually assumed to be fully statistically factorized in order to simplify entropy modeling. However, this assumption generally does not hold true and may hinder compression performance. Here we present context-based convolutional networks (CCNs) for efficient and effective entropy modeling. In particular, a 3D zigzag scanning order and a 3D code dividing technique are introduced to define proper coding contexts for parallel entropy decoding, both of which boil down to placing translation-invariant binary masks on the convolution filters of CCNs. We demonstrate the promise of CCNs for entropy modeling in both lossless and lossy image compression. For the former, we directly apply a CCN to the binarized representation of an image to compute the Bernoulli distribution of each code for entropy estimation. For the latter, the categorical distribution of each code is represented by a discretized mixture of Gaussian distributions, whose parameters are estimated by three CCNs. We then jointly optimize the CCN-based entropy model along with the analysis and synthesis transforms for rate-distortion performance. Experiments on the Kodak and Tecnick datasets show that our methods powered by the proposed CCNs generally achieve compression performance comparable to the state-of-the-art while being much faster.
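The core mechanism described above, defining coding contexts by placing translation-invariant binary masks on convolution filters, can be illustrated with a minimal 2D raster-scan analogue in NumPy. This is a hedged sketch for intuition only: the paper's actual scheme operates on 3D code tensors with a zigzag scanning order and a code dividing technique, and the function names here (`causal_mask`, `masked_conv2d`) are illustrative, not from the paper.

```python
import numpy as np

def causal_mask(k):
    """Binary mask over a k x k filter: 1 at spatial positions that
    precede the center in raster-scan order, 0 at the center and at
    all 'future' positions, so predictions depend only on already
    decoded codes. (Illustrative 2D analogue of the paper's 3D masks.)"""
    mask = np.zeros((k, k))
    c = k // 2
    mask[:c, :] = 1.0   # all rows above the center
    mask[c, :c] = 1.0   # positions to the left of the center
    return mask

def masked_conv2d(x, w, mask):
    """Valid-mode 2D convolution with the binary mask applied to the
    filter weights; each output location therefore only aggregates
    its causal context."""
    k = w.shape[0]
    w = w * mask
    h, wd = x.shape
    out = np.zeros((h - k + 1, wd - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out
```

Because the mask is the same at every spatial location, the operation remains translation-invariant, which is what allows all codes whose contexts are already decoded to be processed in parallel with ordinary convolutions.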
Original language: English
Pages (from-to): 5900–5911
Journal: IEEE Transactions on Image Processing
Publication status: Published - Apr 2020
