Pore-scale facial features matching under 3D morphable model constraint

Xianxian Zeng, Dong Li, Yun Zhang, Kin Man Lam

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)

Abstract

Similar to irises and fingerprints, pore-scale facial features are effective features for distinguishing human identities. Recently, local feature extraction based on deep network architectures has been proposed, which requires a large dataset for training. However, no large databases of pore-scale facial features exist. In practice, it is difficult to build a large pore-scale facial-feature dataset, because the images in existing high-resolution face databases are uncalibrated and nonsynchronous, and human faces are nonrigid. To solve this problem, we propose a method for establishing a large pore-to-pore correspondence dataset. We adopt the Pore Scale-Invariant Feature Transform (PSIFT) to extract pore-scale facial features from face images, and use 3D Dense Face Alignment (3DDFA) to obtain a fitted 3D morphable model, which constrains the keypoint matching. Through our experiments, a large pore-to-pore correspondence dataset, comprising 17,136 classes of matched pore-keypoint pairs, is established.
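The matching scheme the abstract describes can be illustrated in miniature: descriptors are matched by nearest neighbour with a ratio test, and a candidate match is kept only if the keypoint in the second image lies close to where the fitted 3D morphable model predicts it should project. The sketch below is hypothetical and uses random toy data; PSIFT descriptors and the 3DDFA fitting are not implemented here, and `match_with_model_constraint`, the noise scales, and the `radius` threshold are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for PSIFT descriptors: 10 pore keypoints in image A,
# and the same keypoints observed with mild noise in image B.
desc_a = rng.normal(size=(10, 32))
desc_b = desc_a + 0.05 * rng.normal(size=(10, 32))

# Toy 2D keypoint locations in image B, plus the location each keypoint
# would have if the fitted 3D morphable model were projected into image B
# (here simulated as the true location plus small noise).
loc_b = rng.uniform(0.0, 100.0, size=(10, 2))
predicted_b = loc_b + rng.normal(scale=0.5, size=(10, 2))

def match_with_model_constraint(da, db, pred, locs, ratio=0.8, radius=5.0):
    """Nearest-neighbour descriptor matching with a ratio test, then
    rejection of matches that land far from the 3D-model prediction."""
    matches = []
    for i, d in enumerate(da):
        dist = np.linalg.norm(db - d, axis=1)
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:             # ratio test
            if np.linalg.norm(locs[best] - pred[i]) <= radius:  # 3DMM gate
                matches.append((i, best))
    return matches

matches = match_with_model_constraint(desc_a, desc_b, predicted_b, loc_b)
print(matches)
```

On this toy data every keypoint survives both tests, so the matcher recovers the identity correspondence; on real uncalibrated face images the geometric gate is what suppresses the ambiguous matches that a pure descriptor test would accept.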
Original language: English
Title of host publication: Computer Vision - 2nd CCF Chinese Conference, CCCV 2017, Proceedings
Publisher: Springer Verlag
Pages: 29-39
Number of pages: 11
ISBN (Print): 9789811073014
DOIs
Publication status: Published - 1 Jan 2017
Event: 2nd Chinese Conference on Computer Vision, CCCV 2017 - Tianjin, China
Duration: 11 Oct 2017 - 14 Oct 2017

Publication series

Name: Communications in Computer and Information Science
Volume: 772
ISSN (Print): 1865-0929

Conference

Conference: 2nd Chinese Conference on Computer Vision, CCCV 2017
Country: China
City: Tianjin
Period: 11/10/17 - 14/10/17

Keywords

  • 3D morphable model
  • 3DDFA
  • Dataset
  • Pore-scale facial features
  • PSIFT

ASJC Scopus subject areas

  • Computer Science(all)
  • Mathematics(all)
