A highly efficient compression framework for time-varying 3-D facial expressions

Junhui Hou, Lap Pui Chau, Minqi Zhang, Nadia Magnenat-Thalmann, Ying He

Research output: Journal article › Academic research › peer-reviewed

26 Citations (Scopus)


The rapid development of 3-DTV technology in recent years has led to an increase in studies on mesh-based 3-D scene representation. Compressing 3-D time-varying meshes is critical for the storage and transmission of 3-D content. This paper proposes a highly efficient framework for compressing time-varying 3-D facial expressions. We exploit the near-isometric property of human facial expressions to parameterize the 3-D dynamic faces into an expression-invariant 2-D canonical domain, which naturally generates 2-D geometry videos (GVs). Considering the intrinsic properties of GVs, we apply low-rank and sparse matrix decomposition (LRSMD) separately to the three dimensions of the GVs (namely, X, Y, and Z). Based on high-precision rate and distortion models for GVs, we further compress the components from LRSMD using a video encoder in which the bitrates of all components are assigned optimally according to the target bitrate. Experimental results show that the proposed scheme significantly outperforms state-of-the-art algorithms in rate-distortion performance and visual quality.
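The LRSMD step described in the abstract splits each GV channel into a low-rank part (the smooth, slowly varying face geometry) plus a sparse part (localized expression changes). A common way to compute such a decomposition is robust PCA via inexact augmented Lagrange multipliers; the sketch below is a minimal, generic implementation of that technique, not the authors' exact formulation, and the function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

def lrsmd(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L plus sparse S (robust PCA, inexact ALM).

    Solves min ||L||_* + lam * ||S||_1  subject to  L + S = M, approximately,
    by alternating singular-value thresholding and soft-thresholding.
    """
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # standard robust-PCA default
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)    # penalty weight, grown each iteration
    rho = 1.6
    Y = np.zeros_like(M)                # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise soft-thresholding
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and convergence check on the constraint residual
        Z = M - L - S
        Y += mu * Z
        mu *= rho
        if np.linalg.norm(Z) / norm_M < tol:
            break
    return L, S
```

In the paper's setting, `M` would hold one coordinate channel (X, Y, or Z) of the geometry video with frames stacked as columns; the low-rank and sparse components are then encoded separately.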

Original language: English
Article number: 6778752
Pages (from-to): 1541-1553
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 9
Publication status: Published - 1 Sept 2014
Externally published: Yes


Keywords

  • Geometry video (GV)
  • low-rank and sparse matrix decomposition (LRSMD)
  • optimal bit allocation
  • rate and distortion models
  • time-varying 3-D mesh
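The "optimal bit allocation" keyword refers to distributing the target bitrate across the encoded components according to their rate-distortion models. A textbook illustration of this idea is the equal-slope (Lagrangian) allocation for the classic exponential model D_i(R_i) = σ_i² · 2^(−2R_i); the function below is a generic sketch of that standard result, with negative-rate clipping, and is not the paper's specific allocation scheme.

```python
import numpy as np

def allocate_bits(variances, total_rate):
    """Equal-slope bit allocation for D_i(R_i) = sigma_i^2 * 2^(-2 R_i).

    Optimal rates satisfy R_i = R_avg + 0.5 * log2(sigma_i^2 / geo_mean);
    components whose optimal rate goes negative are dropped (rate 0) and
    the budget is re-solved over the remaining ones.
    """
    variances = np.asarray(variances, dtype=float)
    n = len(variances)
    active = np.ones(n, dtype=bool)
    rates = np.zeros(n)
    while True:
        v = variances[active]
        r_avg = total_rate / active.sum()
        geo = np.exp(np.mean(np.log(v)))  # geometric mean of active variances
        r = r_avg + 0.5 * np.log2(v / geo)
        if (r >= 0).all():
            rates[active] = r
            return rates
        # Clip components with negative optimal rate and redistribute
        neg = np.zeros(n, dtype=bool)
        neg[np.where(active)[0][r < 0]] = True
        active &= ~neg
```

For example, components with variances 16, 4, and 1 and a budget of 6 bits receive 3, 2, and 1 bits respectively: higher-variance (harder to code) components get proportionally more of the budget.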

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering

