This paper proposes a unified and generalizable deep learning framework for accurate iris detection, segmentation, and recognition. The framework first employs a state-of-the-art, iris-specific Mask R-CNN, which performs highly reliable iris detection and primary segmentation, i.e., labeling iris versus non-iris pixels, and then adopts an optimized fully convolutional network (FCN) to generate spatially corresponding iris feature descriptors. A specially designed Extended Triplet Loss (ETL) function incorporates bit-shifting and non-iris masking, both of which prove necessary for learning meaningful and discriminative spatial iris features. Thorough experiments on four publicly available databases show that the proposed framework consistently outperforms several classic and state-of-the-art iris recognition approaches. More importantly, unlike popular methods in the literature, the model requires no database-specific parameter tuning, which underlines its superior generalization capability.
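The abstract's core idea, a triplet loss whose distance term tolerates rotational misalignment (bit-shifting) and ignores non-iris pixels (masking), can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the use of squared-error distance, and the shift range are illustrative assumptions; the actual ETL operates on learned FCN feature maps during training.

```python
import numpy as np

def min_shift_distance(f1, f2, m1, m2, max_shift=2):
    """Masked distance between two spatial feature maps (H x W),
    minimized over horizontal shifts of the second map.
    A hypothetical simplification of the shifted, masked distance
    implied by the Extended Triplet Loss (ETL)."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        f2s = np.roll(f2, s, axis=1)   # horizontal shift ~ iris rotation
        m2s = np.roll(m2, s, axis=1)
        valid = m1 & m2s               # compare only iris pixels in both maps
        if not valid.any():
            continue
        d = np.mean((f1[valid] - f2s[valid]) ** 2)
        best = min(best, d)
    return best

def extended_triplet_loss(anchor, positive, negative, masks, margin=0.2):
    """Triplet loss built on the shift- and mask-aware distance above."""
    ma, mp, mn = masks
    d_ap = min_shift_distance(anchor, positive, ma, mp)
    d_an = min_shift_distance(anchor, negative, ma, mn)
    return max(0.0, d_ap - d_an + margin)
```

In this toy form, a positive sample that is merely a rotated copy of the anchor yields a near-zero anchor-positive distance, which is exactly the invariance the bit-shifting term is meant to provide.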
- Deep learning
- Iris recognition
- Spatially corresponding features