Wavelet-based eigentransformation for face super-resolution

Hui Zhuo, Kin Man Lam

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

4 Citations (Scopus)


In this paper, we propose a new approach to human face hallucination based on eigentransformation. In our algorithm, a face image is decomposed into different frequency bands using the wavelet transform, so that different approaches can be applied to the low-frequency and high-frequency content to increase the resolution. The interpolated low-resolution (LR) image is decomposed by the forward wavelet transform: the approximation (low-frequency) coefficients are taken directly from the interpolated LR image, while the wavelet coefficients of the three high-frequency bands are used to estimate the corresponding coefficients of the high-resolution (HR) image by eigentransformation. The reconstructed image is then synthesized by applying the inverse wavelet transform to all the estimated coefficients.
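The pipeline described in the abstract (decompose the interpolated LR image into one approximation and three detail subbands, keep the approximation band, replace each detail band with an eigentransformation estimate, then invert the transform) can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a hand-rolled one-level Haar transform and the simplest linear-combination form of eigentransformation (express the input band as a least-squares combination of LR training bands, then apply the same weights to the HR training bands). All function names and the training-data layout are hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = LL.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def eigentransform(x, train_lr, train_hr):
    """Estimate an HR detail band from an LR one.

    Expresses the centred input band x as a least-squares linear
    combination of the centred LR training bands, then applies the same
    combination weights to the HR training bands -- the simplest form of
    the eigentransformation idea.
    """
    Xl = train_lr.reshape(len(train_lr), -1).T   # columns = training samples
    Xh = train_hr.reshape(len(train_hr), -1).T
    ml = Xl.mean(axis=1, keepdims=True)
    mh = Xh.mean(axis=1, keepdims=True)
    w, *_ = np.linalg.lstsq(Xl - ml, x.ravel()[:, None] - ml, rcond=None)
    y = (Xh - mh) @ w + mh
    return y.reshape(train_hr.shape[1:])

def hallucinate(lr_interp, train_lr_bands, train_hr_bands):
    """Super-resolve one interpolated LR face image.

    train_lr_bands / train_hr_bands map band name -> stack of training
    subbands for that band (toy stand-ins for a real face training set).
    """
    LL, LH, HL, HH = haar_dwt2(lr_interp)
    bands = {'LH': LH, 'HL': HL, 'HH': HH}
    est = {k: eigentransform(v, train_lr_bands[k], train_hr_bands[k])
           for k, v in bands.items()}
    # Keep LL from the interpolated input; replace the detail bands.
    return haar_idwt2(LL, est['LH'], est['HL'], est['HH'])
```

Note the training subbands of the interpolated LR images and of the true HR images have the same spatial size (both are computed at the target resolution), which is what lets the estimated detail bands be combined directly with the input's approximation band in the inverse transform.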
Original language: English
Title of host publication: Advances in Multimedia Information Processing, PCM 2010 - 11th Pacific Rim Conference on Multimedia, Proceedings
Number of pages: 9
Edition: PART 2
Publication status: Published - 9 Nov 2010
Event: 11th Pacific Rim Conference on Multimedia, PCM 2010 - Shanghai, China
Duration: 21 Sept 2010 – 24 Sept 2010

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 6298 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 11th Pacific Rim Conference on Multimedia, PCM 2010


Keywords

  • face hallucination
  • face super-resolution
  • image magnification
  • wavelet transform

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science

