Automated segmentation of iris images using visible wavelength face images

Chun Wei Tan, Ajay Kumar Pathak

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

37 Citations (Scopus)

Abstract

Remote human identification using iris biometrics requires an automated algorithm for the robust segmentation of iris region pixels from visible face images. This paper presents a new automated iris segmentation framework for iris images acquired at-a-distance using visible imaging. The proposed approach achieves the segmentation of iris region pixels in two stages: (i) iris and sclera classification, and (ii) post-classification processing. Unlike traditional edge-based segmentation approaches, the proposed approach simultaneously exploits discriminative color features and localized Zernike moments to perform pixel-based classification. Rigorous experimental results presented in this paper confirm the usefulness of the proposed approach, achieving a 42.4% improvement in average segmentation error on the UBIRIS.v2 dataset as compared to the previous approach.
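The paper itself gives the full formulation of its features and classifier; as a rough illustration of the kind of localized Zernike moment descriptor mentioned in the abstract, the sketch below computes rotation-invariant Zernike moment magnitudes for a small grayscale patch centered on a pixel. The function name, patch size, and choice of moment orders are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np
from math import factorial

def zernike_magnitudes(patch, moments=((0, 0), (1, 1), (2, 0), (2, 2))):
    """Magnitudes |Z_nm| of Zernike moments for a square grayscale patch.

    The patch is mapped onto the unit disk; pixels outside the disk are
    ignored. The magnitudes are rotation-invariant, which is what makes
    them attractive as localized texture features for per-pixel
    classification (moment orders here are arbitrary examples).
    """
    h = patch.shape[0]
    ys, xs = np.mgrid[0:h, 0:h]
    # Map pixel coordinates onto [-1, 1] x [-1, 1].
    x = (2.0 * xs - (h - 1)) / (h - 1)
    y = (2.0 * ys - (h - 1)) / (h - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0

    feats = []
    for n, m in moments:
        # Radial polynomial R_nm(rho).
        R = np.zeros_like(rho)
        for s in range((n - abs(m)) // 2 + 1):
            c = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + abs(m)) // 2 - s)
                    * factorial((n - abs(m)) // 2 - s)))
            R += c * rho ** (n - 2 * s)
        # Conjugate Zernike basis function V*_nm = R_nm * exp(-i m theta).
        V = R * np.exp(-1j * m * theta)
        Z = (n + 1) / np.pi * np.sum(patch[inside] * V[inside])
        feats.append(abs(Z))
    return np.array(feats)
```

In a pixel-based classification pipeline of the kind the abstract describes, such a feature vector would be computed on a sliding window around each pixel and concatenated with color features before being fed to a classifier.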
Original language: English
Title of host publication: 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2011
Publication status: Published - 31 Oct 2011
Event: 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2011 - Colorado Springs, CO, United States
Duration: 20 Jun 2011 - 25 Jun 2011

Conference

Conference: 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2011
Country/Territory: United States
City: Colorado Springs, CO
Period: 20/06/11 - 25/06/11

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
