A novel kernel-based framework for facial-image hallucination

Yu Hu, Kin Man Lam, Tingzhi Shen, Weijiang Wang

Research output: Journal article › Academic research › peer-review

11 Citations (Scopus)

Abstract

In this paper, we present a kernel-based eigentransformation framework for hallucinating the high-resolution (HR) facial image corresponding to a low-resolution (LR) input. The eigentransformation method is a linear subspace approach that represents an image as a linear combination of training samples. Consequently, novel facial appearances not covered by the training samples cannot be super-resolved properly. To address this problem, we devise a kernel-based extension of the eigentransformation method that takes higher-order statistics of the image data into account. To generate HR face images with higher fidelity, the HR face image reconstructed by this kernel-based eigentransformation is treated as an initial estimate of the target HR face. The high-frequency components of this estimate are extracted to form a prior in the maximum a posteriori (MAP) formulation of the super-resolution (SR) problem, from which the final reconstruction is derived. We have evaluated the proposed method with different kernels and configurations, and have compared its performance with that of several current SR algorithms. Experimental results show that our kernel-based framework, with a properly chosen kernel, produces HR facial images of good quality in terms of both visual appearance and reconstruction error.
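
To make the eigentransformation step concrete, the sketch below illustrates the linear (non-kernel) baseline described in the abstract: the LR input is projected onto eigenfaces learned from the LR training set, the resulting combination weights over the training samples are recovered, and the same weights are applied to the HR training faces. The function and variable names (eigentransformation_sr, lr_train, hr_train, n_components) are illustrative assumptions rather than the authors' code; the paper's kernel extension would replace the PCA projection with kernel PCA plus a pre-image estimate, and the MAP refinement stage is not shown.

```python
import numpy as np

def eigentransformation_sr(lr_input, lr_train, hr_train, n_components=50):
    """Hallucinate an HR face from an LR input via (linear) eigentransformation.

    lr_input : (d_lr,)   flattened LR face to be super-resolved
    lr_train : (N, d_lr) flattened LR training faces
    hr_train : (N, d_hr) flattened HR training faces (same sample order)
    """
    mean_lr = lr_train.mean(axis=0)
    X = lr_train - mean_lr                       # centred LR training set, N x d_lr

    # Eigen-decompose the small N x N Gram matrix instead of the d_lr x d_lr
    # covariance matrix (the usual "snapshot" trick for face images).
    gram = X @ X.T
    eigvals, eigvecs = np.linalg.eigh(gram)
    order = np.argsort(eigvals)[::-1][:n_components]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scale = np.sqrt(np.maximum(eigvals, 1e-12))

    # Eigenfaces in pixel space and the projection of the centred LR input.
    eigenfaces = (X.T @ eigvecs) / scale         # d_lr x n_components
    proj = eigenfaces.T @ (lr_input - mean_lr)   # PCA coefficients of the input

    # Recover the combination weights over the training samples implied by the
    # projection: lr_input - mean_lr ~= sum_i w_i * (lr_train[i] - mean_lr).
    weights = eigvecs @ (proj / scale)           # length N

    # Apply the same weights to the HR training faces to form the HR estimate.
    mean_hr = hr_train.mean(axis=0)
    return mean_hr + weights @ (hr_train - mean_hr)
```

For example, with 32x24 LR inputs and 128x96 HR training faces, d_lr = 768 and d_hr = 12288; the returned vector would be reshaped back to the HR image grid and, in the full framework, passed to the MAP-based refinement described above.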
Original language: English
Pages (from-to): 219-229
Number of pages: 11
Journal: Image and Vision Computing
Volume: 29
Issue number: 4
DOIs
Publication status: Published - 1 Jan 2011

Keywords

  • Eigentransformation
  • Face hallucination
  • Image super-resolution
  • Kernel method

ASJC Scopus subject areas

  • Signal Processing
  • Computer Vision and Pattern Recognition
