Rate-distortion model based bit allocation for 3-D facial compression using geometry video

Junhui Hou, Lap Pui Chau, Ying He, Minqi Zhang, Nadia Magnenat-Thalmann

Research output: Journal article › Academic research › peer-review

9 Citations (Scopus)

Abstract

With the widespread adoption of 3-D multimedia technology, compressing 3-D content has become an important problem, as it enables smooth transmission over bandwidth-constrained networks. In this letter, we propose a new compression framework for dynamic 3-D facial expressions. Exploiting the near-isometric property of human facial expressions, we parameterize the dynamic 3-D faces into an expression-invariant canonical domain, which naturally produces 2-D geometry videos and allows us to apply well-studied video compression techniques. Because geometry videos differ from natural videos, each dimension (i.e., X, Y, and Z) is treated as a separate video sequence and encoded independently. In addition, a model-based joint bit allocation scheme, derived from a detailed analysis of the rate-distortion behavior of geometry videos, distributes the target bitrate among the three dimensions so as to obtain optimal results. Experimental results show that, compared to existing algorithms, bitrate reductions of up to 25% can be achieved.
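The joint bit allocation step can be illustrated with a short sketch. The example below is hypothetical: it assumes a simple hyperbolic distortion model D_i(R_i) = θ_i / R_i for each of the X, Y, and Z sequences, with θ_i fitted from trial encodes; the paper's actual rate-distortion model is not reproduced here. Under that assumption, the Lagrangian solution splits the target bitrate in proportion to √θ_i.

```python
# Minimal sketch of joint bit allocation across the X, Y, Z geometry-video
# components. Assumes a hyperbolic distortion model D_i(R_i) = theta_i / R_i
# per component (an illustrative assumption, not the paper's exact model).

from math import sqrt

def allocate_bits(thetas, total_rate):
    """Minimize sum_i theta_i / R_i subject to sum_i R_i = total_rate.

    Setting the Lagrangian derivative -theta_i / R_i**2 + lam to zero gives
    R_i = sqrt(theta_i / lam); enforcing the rate constraint yields
    R_i = total_rate * sqrt(theta_i) / sum_j sqrt(theta_j).
    """
    roots = [sqrt(t) for t in thetas]
    norm = sum(roots)
    return [total_rate * r / norm for r in roots]

# Hypothetical fitted parameters for the X, Y, and Z sequences.
thetas = {"X": 2.0, "Y": 1.5, "Z": 4.0}
rates = allocate_bits(list(thetas.values()), total_rate=1000.0)  # kbit/s
for dim, r in zip(thetas, rates):
    print(f"{dim}: {r:.1f} kbit/s")
```

Under this model, a component whose distortion is more sensitive to rate (larger θ_i) automatically receives a larger share of the budget, which is the intuition behind allocating bits jointly rather than splitting the target bitrate equally among the three sequences.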

Original language: English
Article number: 6470663
Pages (from-to): 1537-1541
Number of pages: 5
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 23
Issue number: 9
DOIs
Publication status: Published - Feb 2013
Externally published: Yes

Keywords

  • Dynamic 3-D facial expressions
  • geometry video
  • H.264/AVC
  • joint bit allocation
  • mesh compression
  • rate distortion model

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering
