Iris Image Enhancement for the Recognition of Non-ideal Iris Images

  • Sajjad, Mazhar (Department of Computer Software, Korea University of Science and Technology (UST)) ;
  • Ahn, Chang-Won (Department of Computer Software, Korea University of Science and Technology (UST)) ;
  • Jung, Jin-Woo (Department of Computer Science and Engineering, Dongguk University)
  • Received : 2015.07.17
  • Accepted : 2016.03.03
  • Published : 2016.04.30

Abstract

Iris recognition for biometric personal identification has gained much interest owing to the increasing concern with security today. Image quality plays a major role in the performance of iris recognition systems. When capturing an iris image under uncontrolled conditions from non-cooperative people, the chance of getting a non-ideal image is very high owing to poor focus, off-angle gaze, noise, motion blur, occlusion by eyelashes and eyelids, and glasses. In order to improve the accuracy of iris recognition when dealing with non-ideal iris images, we propose a novel algorithm that improves the quality of degraded iris images. First, the iris image is localized to obtain accurate iris boundary detection, and then the iris image is normalized to a fixed size. Second, the valid (iris) region is extracted from the segmented iris image. Third, bilinear interpolation is applied to the segmented valid iris gray image to obtain a well-distributed texture image. Contrast-limited adaptive histogram equalization (CLAHE) then enhances the low contrast of the resulting interpolated image, and the result of CLAHE is further improved by stretching the minimum and maximum values to 0-255 using a histogram-stretching technique. The gray texture information is extracted by 1D Gabor filters, while the Hamming distance is chosen as the matching metric. The NICE-II training dataset, taken from UBIRIS.v2, was used for the experiments. The proposed method outperformed other methods in terms of equal error rate (EER).

Keywords

1. Introduction

With increasing security concerns, many traditional methods of user identification, such as a physical key, an ID card, or a secret password, are becoming obsolete, as each method has its own problems. Biometric identification uses iris, face, vein, and fingerprint recognition. Iris recognition is considered the most reliable biometric identification method in high-security situations. Using the unique iris patterns between the pupil and the sclera for personal identification, iris recognition achieves high recognition accuracy owing to the high degrees of freedom of iris patterns [1][2][3][4][5][6]. The human iris, a thin circular structure in the eye, is situated between the cornea and the lens and carries many microscopic features, as shown in Fig. 1. Iris recognition systems commonly use Daugman’s algorithm because it ensures high recognition accuracy. However, researchers still need to address the issues of system robustness, performance under variability, recognition speed, and non-ideal images. There is currently growing interest in biometric user identification from both academia and industry [7]. A recent improvement in iris recognition technologies involves acquisition of iris images at a distance and under a less constrained environment using visible illumination imaging [8][9]. Such systems are required for forensic and high-security surveillance applications.

Fig. 1. Various parts of the human iris.

Iris recognition systems available for user identification require near-infrared (NIR) illumination, which is not user-friendly; they also require the user’s cooperation to capture a good-quality iris image. These formalities place an additional burden on the user to participate actively in his/her recognition. Therefore, an active research issue is to reduce this burden as much as possible by improving iris recognition performance in semi-constrained and unconstrained environments [8]. Fig. 2 shows examples of noisy iris images, each affected by various factors, such as low illumination, blurring, rotation, occlusion by eyelids and eyelashes, off-angle gaze, and noise. Recognition with these non-ideal images is drastically degraded. Therefore, a proper pre-processing method plays a major role in the accuracy of iris recognition systems.

Fig. 2. Noisy iris images (NICE.II training dataset). (a) Off-angle. (b) Low illumination. (c) Occlusion by eyelashes. (d) Occlusion by eyelids. (e) Blurring. (f) Rotation. (g) Occlusion by ghost region. (h) Effect of contraction and dilation.

First, iris segmentation is considered a prerequisite for an accurate iris recognition system, especially with non-ideal iris images. The presence of various kinds of noise interferes with the ability to effectively segment the iris part and deteriorates recognition accuracy. In previous research, geometric approaches, such as the locations of the pupil and eyelids, have been used for iris localization [10], and the contraction and stretching properties of the pupil and iris have also been used [11]. To determine whether the iris is occluded by eyelashes, Fourier transforms were used in [12]. Other methods exploit image intensity differences between the iris region and the eyelashes [13][14]. These segmentation methods are suited only to ideal iris images. To accurately segment iris regions from non-ideal iris images, we adopt the circular edge detection method of [15] for iris localization.

Second, iris image enhancement methods have been developed on various iris databases, such as CASIA-IrisV1 [16] and IrisBase for high-quality images, or CASIA-IrisV3 [17] and UBIRIS.v2 [18] for non-ideal images. Current enhancement methods use super-resolution to reconstruct blurry or low-resolution images [1][2][3][4]. Other methods for enhancing image sharpness and illumination or for eliminating noise include traditional deblurring [19], denoising [20], focus correction [21], histogram equalization, contrast stretching, unsharp masking, homomorphic filtering [22], entropy normalization [23], and background subtraction [24]. Some recent algorithmic methods [25][26] have used support vector machine (SVM) training to reduce noise and blur and to enhance illumination. While some of the existing iris image enhancement techniques are very fast, they concentrate on a single degrading factor (such as noise, blur, or illumination). Others perform multi-factor image enhancement but are relatively slow [18] and/or require substantial training data [1][2].

The segmented and localized iris image still has non-uniform illumination caused by the light position, low contrast, eyelid occlusion, etc., which affects the recognition rate. In this paper, a novel algorithm that enhances the sharpness and illumination of the iris image is proposed. After feature extraction, the features representing the iris patterns are matched against those previously stored in the database. Finally, the decision is made based on the matching results.

 

2. Related Work

Iris images acquired at a distance, especially under a less constrained environment with visible illumination, usually contain more noise than images acquired at close distance under NIR illumination. These images are influenced by various sources of noise, such as motion blur, low illumination, off-angle gaze, and occlusion by eyelashes and eyelids, as shown in Fig. 2. Therefore, there is a need for an accurate iris recognition method that can handle all kinds of noisy iris images. Previous studies on iris encoding and matching algorithms, such as [27][28][29][30], used iris images acquired in constrained environments under NIR illumination. These methods may not be effective at recovering iris features embedded in degraded iris images affected by imaging variations. The method described in [27] tries to estimate the consistent bits in the iris codes while excluding inconsistent bits. These approaches have shown excellent performance at close distance but are not feasible for the large imaging variations observed in visible illumination images obtained at a distance, which are the target of our proposed method. The method in [28] is an extension of [27] applying a weighting approach to quantify each bit in the iris code such that consistent bits are assigned higher values while inconsistent bits are assigned lower values.

This method seems to be more noise tolerant, but has the same limitations as [27]. In previous research, some excellent work has been done to develop more advanced iris segmentation approaches for noisy iris images acquired under visible illumination [18][31][32]. However, very limited research is available that can accommodate imaging variation in segmented noisy iris images, as in the NICE.II competition [33]. Moreover, preprocessing algorithms, which detect the noise in images as well as the boundaries of the pupil and iris, have been used to enhance iris recognition performance [2][38][34][35]. Similarly, the methods used for enhancing image sharpness and illumination include traditional deblurring [26], histogram equalization, contrast stretching, unsharp masking, homomorphic filtering [22], and background subtraction [24]. To reduce noise and blur and to enhance illumination, some recent methods [25][26] have used the support vector machine (SVM). These localization and enhancement methods cannot handle degraded, non-ideal iris images. Therefore, a method for iris localization and enhancement is needed to improve the recognition accuracy for such images.

In Table 1, we summarize recent state-of-the-art work on iris encoding and matching techniques from iris biometric recognition research. None of the previous works provides an effective strategy, in terms of both recognition accuracy and computational cost, for iris recognition at a distance with non-ideal iris images.

Table 1. Brief comparison of the proposed method with existing iris recognition approaches

 

3. Proposed Method

An overview of the proposed method is shown in Fig. 3. We first obtained segmented noisy iris images from the NICE-II database [18]. The dataset contains non-ideal images with various kinds of noise. In order to obtain only the iris part for further processing, we adopted the method of Jeong et al. [15]. First, the localization method properly segments the iris region in non-ideal iris images. Next, a novel enhancement algorithm is introduced to reduce the effects of non-uniform illumination and low contrast in the localized and normalized iris images. Finally, the features representing the iris patterns are extracted from the normalized image.

Fig. 3. Procedure of the proposed method.

3.1. Image Acquisition

The eye image to be analyzed must be acquired in a digital form suitable for analysis. We use the NICE-II (Noisy Iris Challenge Evaluation – Part II) database, taken from UBIRIS.v2, developed within the SOCIA Lab (Soft Computing and Image Analysis Group) of the University of Beira Interior, Portugal [18].

3.2. Image Preprocessing

The acquired image contains the iris part along with some irrelevant parts, such as the eyelids and pupil. In some conditions, the iris image has low contrast and the brightness is not uniformly distributed. In addition, different distances from the eye to the camera may result in different image sizes for the same person’s eye. To improve the quality of the iris image, our preprocessing method is composed of three steps, as described in the following sections. In the first step, we use the iris localization method to accurately detect the iris radius and center; next, the lengths of the inner and outer iris boundaries are adjusted to obtain a normalized iris image. In the last step, we propose a new algorithm to enhance the iris image by reducing the effects of non-uniform illumination and low contrast.

3.2.1. Iris Localization

The segmented iris images are provided by the NICE.II dataset, as shown in Fig. 4(b), but the pupil and iris pixel positions are not provided. To detect the pupil and iris boundaries, our localization consists of three parts. To find accurate centers, a rough iris center is first calculated as the geometric center of the black pixels. The iris radius is then estimated as half of the distance between the leftmost and rightmost pixel positions of the black region; for these values, we select the leftmost and rightmost positions at which the gray value changes from white to black and from black to white, respectively. A minimal sketch of this rough estimate is given below.
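The rough estimate above can be expressed in a few lines of Python. This is a minimal sketch under the assumption that the NICE-II segmentation mask is available as a 2-D uint8 array in which black (0) pixels mark the candidate iris region; the function name is ours, not from the paper.

```python
import numpy as np

def rough_iris_center_and_radius(mask):
    """Rough iris center and radius from a binary segmentation mask
    (0 = candidate iris region, 255 = non-iris), as described above."""
    ys, xs = np.where(mask == 0)         # coordinates of all black pixels
    center = (xs.mean(), ys.mean())      # geometric center of the black region
    # Half the distance between the leftmost (white-to-black) and
    # rightmost (black-to-white) transition columns.
    radius = (xs.max() - xs.min()) / 2.0
    return center, radius
```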

Fig. 4. Detection of pupil and iris boundaries: (a) Original eye image. (b) Segmented image from NICE-II. (c) Result of pupil and iris boundary detection.

To detect the pupil radius and center, we use the white pixels inside the black pixels, as shown in Fig. 4(b). Using a component labeling algorithm [22], the region detected at the rough iris center position is regarded as the pupil area. From the pupil area, the accurate pupil center is calculated and, as with the iris radius, the exact pupil radius is obtained from the leftmost and rightmost positions of the current pupil area. To determine the accurate iris center and radius, the detected rough center position and radius are used as initial values for the circular edge detector [15]. The following integrodifferential operation is used to accurately determine the center and radius of the iris region:

$$\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\,ds \right| \tag{1}$$

where r is the radius of the iris region, (x0, y0) is the center position of the iris region, I(x, y) is the gray level on the circular boundary of the iris, Gσ(r) is a Gaussian smoothing function of scale σ, and * denotes convolution. Using Eq. (1), we perform two integrodifferential operations over the ranges −45° to 30° on the right side and 150° to 225° on the left side. Only these two regions are selected because the other regions can be hidden by the eyelids, which can produce errors [15].
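To make the operator concrete, the following sketch searches, around a candidate center, for the radius with the sharpest smoothed radial intensity change, restricted to the two lateral arcs used above. The Gaussian scale, the one-pixel radial step, and the function names are illustrative assumptions, not the authors’ exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def arc_mean(img, cx, cy, r, theta_deg):
    """Mean gray level sampled along circular arcs of radius r (theta in degrees)."""
    t = np.deg2rad(theta_deg)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def circular_edge(img, cx, cy, r_min, r_max, sigma=2.0):
    """Discrete analogue of Eq. (1): find the radius with the sharpest
    Gaussian-smoothed radial derivative, using only the lateral arcs
    (-45..30 and 150..225 degrees) to avoid eyelid occlusion."""
    theta = np.concatenate([np.arange(-45, 31), np.arange(150, 226)])
    radii = np.arange(r_min, r_max)
    profile = np.array([arc_mean(img, cx, cy, r, theta) for r in radii])
    deriv = np.abs(np.diff(gaussian_filter1d(profile, sigma)))
    best = int(np.argmax(deriv))
    return radii[best], deriv[best]   # radius with the strongest circular edge
```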

Fig. 4(c) shows the accurate boundaries of the iris and pupil regions; the non-iris regions appear as white pixels, while noise-free regions appear as black pixels. The error rate is measured as the proportion of differing corresponding pixels between the output image O(c′, r′) obtained from the localization method and the segmented image C(c′, r′) [15][42]:

$$E_i = \frac{1}{c \times r} \sum_{c'} \sum_{r'} O(c', r') \otimes C(c', r'), \qquad E = \frac{1}{n} \sum_{i=1}^{n} E_i \tag{2}$$

In Eq. (2), ⊗ denotes the pixel-wise exclusive-OR, c × r is the image size, and n is the total number of images in the dataset. The Ei values vary between 0 and 1 and indicate the accuracy of iris segmentation: the segmentation is best when Ei is 0 and worst when it is 1. A second evaluation criterion, Ej in Eq. (3), computes the average of the false positive rate (FPR) and the false negative rate (FNR) [15]:

$$E_j = \frac{FPR + FNR}{2} \tag{3}$$

The FPR is the error rate of accepting non-iris pixels, such as eyelashes, as iris region, and the FNR is the error rate of rejecting iris pixels as non-iris region.
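Both criteria reduce to a few array operations. A minimal sketch, assuming the output mask O and the ground-truth mask C are binary NumPy arrays of equal size with 1 marking iris pixels; the function and variable names are ours.

```python
import numpy as np

def segmentation_errors(O, C):
    """E_i of Eq. (2) for a single image and the FPR/FNR average E_j of
    Eq. (3); O and C are binary masks (1 = iris pixel)."""
    Ei = float(np.mean(O != C))   # proportion of XOR-differing pixels
    # FPR: non-iris pixels accepted as iris; FNR: iris pixels rejected.
    fpr = float(((O == 1) & (C == 0)).sum()) / max((C == 0).sum(), 1)
    fnr = float(((O == 0) & (C == 1)).sum()) / max((C == 1).sum(), 1)
    Ej = 0.5 * (fpr + fnr)
    return Ei, Ej
```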

3.2.2. Normalization

Irises from different people may be captured at different sizes owing to varying imaging distances, and even irises from the same person may differ. The radial size of the pupil may change with variations in illumination, and the inner boundary varies with the dilation and contraction of the pupil. This deformation of the image texture affects the performance of the feature extraction and pattern matching process. In order to reduce these anomalies and obtain a normalized iris image, the lengths of the inner and outer iris boundaries are adjusted, which leads to more accurate recognition results.

The detected iris region shown in Fig. 5(b) in Cartesian coordinates is transformed into polar coordinates using Daugman’s normalization method, defined by Eqs. (4) and (5); the polar-coordinate result is shown in Fig. 5(c). The polar representation of the iris image is divided into eight tracks and 256 sectors [43]. In each track and sector, the pixel values are averaged in the vertical direction using a 1-D Gaussian kernel, as shown in Fig. 5(d). As a result, a normalized iris image of 256 × 8 pixels is produced, which reduces the errors that arise while segmenting the iris image. The proposed method is capable of compensating for the unwanted variations caused by the distance between the eye and the camera and by the eye position with respect to the camera [44].

$$I\big(x(\rho,\theta),\, y(\rho,\theta)\big) \rightarrow I(\rho,\theta) \tag{4}$$

$$x(\rho,\theta) = (1-\rho)\,x_p(\theta) + \rho\,x_i(\theta), \qquad y(\rho,\theta) = (1-\rho)\,y_p(\theta) + \rho\,y_i(\theta) \tag{5}$$

where

$$x_p(\theta) = x_{p0}(\theta) + r_p \cos\theta \tag{6}$$

$$y_p(\theta) = y_{p0}(\theta) + r_p \sin\theta \tag{7}$$

$$x_i(\theta) = x_{i0}(\theta) + r_i \cos\theta \tag{8}$$

$$y_i(\theta) = y_{i0}(\theta) + r_i \sin\theta \tag{9}$$

Fig. 5. Iris image normalization steps: (a) Original image. (b) Segmented iris region. (c) Normalized image of 256 × 8. (d) Track and sector pixel values averaged in the vertical direction [43].

In Eqs. (6), (7), (8), and (9), (x, y) and (ρ, θ) are the Cartesian and normalized polar coordinates, respectively. xp(θ), yp(θ) and xi(θ), yi(θ) are the coordinates of the pupil and iris boundaries along the θ direction, respectively. xp0(θ), yp0(θ) and xi0(θ), yi0(θ) are the coordinates of the pupil and iris centers along the θ direction, respectively. rp and ri are the pupil and iris radii, respectively.
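The mapping of Eqs. (4)-(9) can be sketched as follows for a 256 × 8 output grid. The sketch samples one pixel per (track, sector) pair with nearest-neighbor rounding; the paper additionally averages each track vertically with a 1-D Gaussian kernel, which is omitted here, and all function and parameter names are ours.

```python
import numpy as np

def normalize_iris(img, pupil_c, pupil_r, iris_c, iris_r,
                   n_sectors=256, n_tracks=8):
    """Daugman rubber-sheet mapping (Eqs. (4)-(9)) of the annular iris
    region into a fixed n_tracks x n_sectors (8 x 256) grid."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
    rho = (np.arange(n_tracks) + 0.5) / n_tracks       # radial sample points
    out = np.zeros((n_tracks, n_sectors), dtype=img.dtype)
    for j, t in enumerate(theta):
        # Pupil and iris boundary points along direction theta, Eqs. (6)-(9).
        xp = pupil_c[0] + pupil_r * np.cos(t)
        yp = pupil_c[1] + pupil_r * np.sin(t)
        xi = iris_c[0] + iris_r * np.cos(t)
        yi = iris_c[1] + iris_r * np.sin(t)
        for i, p in enumerate(rho):
            # Linear interpolation between the two boundaries, Eq. (5).
            x = int(round((1 - p) * xp + p * xi))
            y = int(round((1 - p) * yp + p * yi))
            out[i, j] = img[np.clip(y, 0, img.shape[0] - 1),
                            np.clip(x, 0, img.shape[1] - 1)]
    return out   # shape (8, 256)
```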

3.2.3. Iris Enhancement

The segmented and localized iris image still has non-uniform illumination caused by the light position, low contrast, eyelid occlusion, and noise, which affect recognition performance. Iris image enhancement plays a major role in the accuracy of an iris recognition system. We propose a novel algorithm that enhances the sharpness and illumination of the iris image. The proposed algorithm showed an improvement in the verification rate when compared with other popular image enhancement methods. To reduce the effects of non-uniform illumination and low contrast, we use the following pre-processing steps to enhance the iris image, as shown in Fig. 6.

Fig. 6. Iris image pre-processing algorithm for image enhancement.

A. Enhancement Algorithm Steps

As summarized in the abstract, the enhancement algorithm proceeds in the following steps: (1) extract the valid (iris) region from the segmented and normalized iris gray image; (2) apply bilinear interpolation to the valid iris gray image to obtain a well-distributed texture image; (3) apply CLAHE to the interpolated image to enhance its low contrast; and (4) stretch the minimum and maximum gray values of the CLAHE result to the full 0-255 range using histogram stretching.

B. Contrast-limited Adaptive Histogram Equalization Algorithm

To enhance the low contrast of the iris image, we use an advanced contrast technique called CLAHE within our enhancement algorithm, shown in Fig. 6. Unlike previous works, we apply the CLAHE method to a texture image that is already well distributed by the bilinear interpolation described in the enhancement algorithm of Section A, and we further use histogram stretching to improve the result of CLAHE. In this algorithm, the image is subdivided into image tiles, also called contextual regions, and the contrast is then enhanced within each region. The result of the pre-processing steps is shown in Fig. 7.
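The enhancement chain of Section A can be sketched with OpenCV as follows. The scale factor, CLAHE clip limit, and tile size below are illustrative assumptions, not the settings selected in the paper.

```python
import cv2
import numpy as np

def enhance_iris(gray, scale=2.0, clip_limit=2.0, tile=(8, 8)):
    """Sketch of the enhancement chain: bilinear interpolation of the
    valid iris gray image, CLAHE on the result, then min-max histogram
    stretching to the full 0-255 range. `gray` is a uint8 image."""
    # 1) Bilinear interpolation to obtain a well-distributed texture image.
    up = cv2.resize(gray, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_LINEAR)
    # 2) Contrast-limited adaptive histogram equalization, applied
    #    per contextual region (tile).
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    eq = clahe.apply(up)
    # 3) Histogram stretching: map [min, max] linearly onto [0, 255].
    lo, hi = int(eq.min()), int(eq.max())
    stretched = (eq.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1)
    return stretched.astype(np.uint8)
```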

Fig. 7. Proposed preprocessing steps: (a) Iris region in original image. (b) Extracted iris image. (c) Enhanced normalized image. (d) Enhanced original gray image after the proposed method.

 

4. Feature Extraction and Pattern Matching

4.1. Feature Extraction

In order to provide accurate recognition of individuals, the most unique and invariant features of the iris must be extracted, and only the significant features must be encoded so that comparisons between templates can be made. We use a 1D Gabor wavelet filter on the normalized iris image [15][42][46]. If the iris image has rich, high-contrast texture information, the Gabor filter gives good performance. To maximize recognition accuracy, we select the optimal kernel size and frequency experimentally. As the normalized iris image has eight tracks, the 1D Gabor filter is convolved with the intensity of each track in the horizontal direction. As shown in Fig. 5(c), the width of the normalized iris image is 256 pixels, so the convolution is performed at each pixel position across the entire width of each track. As a result, we obtain 256 magnitude values per track from the 1D Gabor filter [47].

In Eqs. (10) and (11), σ and µ0 are the kernel size and frequency of the Gabor filter, A denotes the amplitude of the Gabor filter G(x), and x0 denotes the shift of the Gabor filter coefficients. To normalize the filtering coefficients, the DC component is set to zero, which means the sum of all coefficients of the Gabor filter is zero. The value obtained from the Gabor filtering is then checked for sign: the bit is set to 1 if the filtered value is less than or equal to zero and to 0 otherwise. As a result, we obtain 256 bits per track and, as each normalized image consists of eight tracks, a total of 2048 bits (256 × 8) [48]. All noisy iris regions, iris regions occluded by eyelids or eyelashes, and iris regions where the boundaries are difficult to find are considered unreliable and are not used for iris code matching. After selecting the optimal kernel size and frequency, the Gabor filter is expected to yield the minimum false acceptance rate (FAR), false rejection rate (FRR), and EER on the NICE-II database.
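A minimal sketch of the encoding described above: each of the eight tracks of the 256 × 8 normalized image is convolved horizontally with a zero-DC 1-D Gabor kernel and the response is binarized by sign, yielding 256 × 8 = 2048 bits. The kernel support, σ, and µ0 below are placeholders, not the experimentally selected optima.

```python
import numpy as np

def encode_iris(norm_img, sigma=3.0, mu0=0.1):
    """1-D Gabor encoding of a normalized iris image of shape (8, 256)
    into a 2048-bit iris code."""
    x = np.arange(-8, 9)                                   # kernel support
    kernel = np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * mu0 * x)
    kernel -= kernel.mean()                                # force zero DC
    bits = np.zeros_like(norm_img, dtype=np.uint8)
    for t in range(norm_img.shape[0]):                     # each of 8 tracks
        resp = np.convolve(norm_img[t].astype(float), kernel, mode='same')
        bits[t] = (resp <= 0).astype(np.uint8)             # bit = 1 if <= 0
    return bits.flatten()                                  # 2048-bit code
```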

4.2. Pattern Matching

For matching, the Hamming distance was chosen as the recognition metric, since bitwise comparisons are necessary. The employed Hamming distance algorithm also incorporates noise masking, so that only significant bits are used in calculating the Hamming distance between two iris templates: only those bits in the iris patterns that correspond to ‘0’ bits in the noise masks of both iris patterns are used in the calculation. Although, in theory, two iris templates generated from the same iris should have a Hamming distance of 0.0, in practice this does not occur: normalization is not perfect and some noise goes undetected, so some variation will be present when comparing two intra-class iris templates.

In order to account for rotational inconsistencies, when the Hamming distance of two templates is calculated [3], one template is shifted left and right bit-wise, and a Hamming distance value is calculated for each successive shift. This bit-wise shifting in the horizontal direction corresponds to a rotation of the original iris region by an angle given by the angular resolution used; for example, if an angular resolution of 180 is used, each shift corresponds to a rotation of 2 degrees in the iris region. The Hamming distance is calculated using only the bits generated from the true iris region, and this modified Hamming distance formula is given as:

$$HD = \frac{1}{N - \sum_{k=1}^{N} X_{n,k} \lor Y_{n,k}} \sum_{j=1}^{N} \left( X_j \oplus Y_j \right) \land \overline{X_{n,j}} \land \overline{Y_{n,j}} \tag{12}$$

In Eq. (12), Xj and Yj are the two bit-wise templates to compare, Xn,j and Yn,j are the corresponding noise masks for Xj and Yj, and N is the number of bits in each template.
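Eq. (12), together with the shift-based rotation compensation, can be sketched as follows. The maximum shift of ±8 sectors is an illustrative assumption, and the function names are ours.

```python
import numpy as np

def hamming_distance(X, Y, Xn, Yn):
    """Masked Hamming distance of Eq. (12): only bits whose noise-mask
    entries are 0 in BOTH templates contribute."""
    valid = (Xn == 0) & (Yn == 0)
    n_valid = valid.sum()
    return np.inf if n_valid == 0 else ((X != Y) & valid).sum() / n_valid

def min_shifted_hd(X, Y, Xn, Yn, max_shift=8, n_tracks=8, n_sectors=256):
    """Compensate rotation by shifting one template left/right along the
    angular (sector) axis and keeping the minimum distance."""
    x2 = X.reshape(n_tracks, n_sectors)
    xn2 = Xn.reshape(n_tracks, n_sectors)
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        hd = hamming_distance(np.roll(x2, s, axis=1).ravel(), Y,
                              np.roll(xn2, s, axis=1).ravel(), Yn)
        best = min(best, hd)
    return best
```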

 

5. Experimental Results

5.1. Data Set

In order to test our proposed method, we used the NICE-II (Noisy Iris Challenge Evaluation – Part II) dataset taken from UBIRIS.v2, developed within the SOCIA Lab of the University of Beira Interior, Portugal [18]. NICE-II focuses exclusively on the signature encoding and matching stages for degraded visible-wavelength iris images previously segmented in the NICE-I segmentation contest. The aim of this second version of the UBIRIS database (UBIRIS.v2) is to realistically simulate less constrained image capturing conditions, either at-a-distance, on-the-move, or with minimal user cooperation.

Compared to its predecessor, this database contains more images with more realistic noise factors, notably poorly focused iris images. The dataset consists of 1,000 images from 191 different classes. During the experiment, the number of genuine comparisons was 3,593, while there were 495,907 impostor comparisons. The dataset contains poor-quality images owing to motion blur, off-angle gaze, low illumination, rotation, occlusion by ghost regions, and the effects of contraction and dilation, as shown in Fig. 2.

5.2. Results Analysis

In this section, we test the performance of our proposed algorithm. The training dataset is manually divided into good- and bad-quality images in each class. First, we performed experiments on iris segmentation: we applied our iris image localization algorithm to obtain the exact boundaries of the iris and pupil for all dataset images. Table 2 gives the iris segmentation accuracy on the NICE-II iris dataset, which contains various kinds of noisy images.

Table 2. Iris segmentation accuracy using the NICE.II dataset

Our image localization method failed on those iris images whose boundaries cannot be determined easily or whose eyelashes and eyelids cannot be segmented properly. A sample of such images is shown in Fig. 8 (a) to (l). Various detection problems occur, such as occlusion by the lower or upper eyelids and eyelashes, shaded regions in the upper part of the iris, ghost regions, off-angle gaze, rotation, errors caused by the discontinuous frames of the user’s glasses, and color segmentation errors caused by local iris regions.

Fig. 8. Samples from the NICE-II iris image dataset on which our segmentation failed.

Nevertheless, our iris localization method properly segmented the iris region in many non-ideal iris images, as shown in Fig. 9 (a) to (h): although all of these images are non-ideal, our method succeeded in properly separating the iris region.

Fig. 9. Samples from the NICE-II iris image dataset that our method properly segmented.

5.2.1. Evaluation Approach

In our second experiment, the metrics used for the quantitative evaluation of the proposed segmentation and enhancement algorithms are the following (a minimal computation sketch is given after the list):

(1) False acceptance rate (FAR): The error rate of accepting an imposter as an authorized user [49].

(2) False rejection rate (FRR): The error rate of rejecting a genuine user as an unauthorized user [49].

(3) Equal error rate (EER): When the FAR and FRR are equal, the error is referred to as the equal error (EE), and the probability at which FAR = FRR is called the equal error rate (EER).

(4) Receiver operating characteristics (ROC) curve: At various threshold values, the values of FAR and FRR can be plotted using a ROC curve. A ROC curve is used to show the performance of the proposed method.

(5) Genuine acceptance rate (GAR): The GAR is a measurement of the overall accuracy of a biometric system [50].
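These metrics can all be derived by sweeping a decision threshold over the match scores. A minimal sketch, assuming `genuine` and `impostor` are arrays of Hamming-distance scores (lower scores indicate a better match); the threshold grid is an illustrative choice.

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """FAR, FRR, GAR over a threshold sweep and the EER at the
    crossing point FAR == FRR."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    far = np.array([(impostor <= t).mean() for t in thresholds])  # imposters accepted
    frr = np.array([(genuine > t).mean() for t in thresholds])    # genuines rejected
    i = int(np.argmin(np.abs(far - frr)))                         # FAR == FRR point
    eer = 0.5 * (far[i] + frr[i])
    gar = 1.0 - frr                                               # genuine acceptance rate
    return far, frr, gar, eer
```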

5.2.2. Comparison with Various Enhancement Algorithms

Table 3 shows the recognition accuracy, in the form of the EER, of various algorithms. The proposed method is compared with other popular enhancement techniques, including histogram equalization, used in [51] and [52], histogram stretching [51], simple bilinear interpolation, bicubic interpolation, and combined bilinear interpolation and unsharp masking, which are commonly used for the enhancement and restoration of iris images. The results show that the proposed method outperformed all of these popular enhancement algorithms in every case. Compared to simple histogram equalization, our proposed method increased the accuracy by as much as 6.12%. We also evaluated a technique combining two algorithms, bilinear interpolation and unsharp masking, but the accuracy of our enhancement algorithm is 5.4% higher than that of the combined technique. The ROC curves of the various enhancement algorithms and their comparison with the proposed method are shown in Fig. 10 and Fig. 11, respectively.

Table 3. EER for iris recognition using different image enhancement algorithms

Fig. 10. ROC curves of various image enhancement algorithms.

Fig. 11. ROC curves of various enhancement algorithms compared to the proposed algorithm. Note: the inset in Fig. 11 is a zoomed view of the rectangular region.

5.2.3. Comparison with Other State-of-the-art Methods

The comparison of the proposed method with various state-of-the-art methods is presented in Table 4. The proposed method is first compared with the Park and Park method [47], in which 2048 bits of iris code and 2048 bits of mask code were extracted from an iris gray image of 256 × 8 pixels. There, the iris code was extracted by a 1D Gabor filter and the Hamming distance was used for matching, but no dedicated method for segmenting the iris boundaries and no enhancement technique was used; the resulting EER was 25.7%, while the EER of the proposed method, after adding the iris localization and iris enhancement algorithms, is 8.82%. Although the proposed method uses the UBIRIS dataset, which is larger in terms of images than the CASIA.v1 database used by Park and Park [47], the EER of the proposed method still improved by 6.88%.

Table 4. Comparison of the proposed method with state-of-the-art methods. Note: data labeled & come from He et al. [54], * from Roy et al. [40], # from Ying et al. [39], and % from Sazonova et al. [55], converted into the required form for comparison with the proposed method.

Similarly, the proposed method was compared with the methods of Ying et al. [39], Roy et al. [40], Zhang et al. [53], and He et al. [54], which use the CASIA.v3 dataset acquired at close distance. Our proposed method outperformed these methods, achieving higher recognition accuracy by 3.18%, 10.18%, 18%, and 6%, respectively. Next, the proposed method was compared with Sazonova et al.’s method [55], which uses the MBGC.v1 dataset; the proposed method showed 6.8% higher accuracy. Furthermore, the proposed method was compared with the methods of Dong et al. [28] and Tan and Kumar [41], which use the same UBIRIS dataset. Again, the proposed method showed higher recognition accuracy, by 7.26% and 8.63%, respectively. The ROC curves of the proposed method compared with the other methods are shown in Fig. 12.

Fig. 12. ROC curves of the proposed method compared with state-of-the-art methods. Note: the inset in Fig. 12 is a zoomed view of the rectangular region.

Finally, we measured the processing time of our proposed method on a desktop PC with a 3.40 GHz Intel Core i7 processor and 8 GB of RAM. The total operation time covers iris localization (detecting the accurate pupillary boundary and the eyelids, and removing contact-lens edges), normalization (which takes the most processing time while converting the iris texture from Cartesian to polar coordinates), the enhancement algorithm (bilinear interpolation and the CLAHE algorithm), and the iris code extraction and matching steps. The processing times of all these steps are shown in Table 5. The processing time for iris localization, detecting the proper pupil and valid iris region, was 47.5 ms; obtaining the normalized iris image of 256 × 8 pixels took 224 ms; the enhancement algorithm (bilinear interpolation and CLAHE) took 5.2 ms; and iris code extraction plus Hamming distance (HD) calculation took 31.9 ms. With a total of about 308.6 ms per image, the proposed method can be applied in a real-time system.

Table 5. Computational processing time

 

6. Conclusions

The ultimate purpose of this paper is to improve recognition accuracy when dealing with non-ideal iris images, captured either at-a-distance, on-the-move, or with minimal user cooperation. To improve the recognition accuracy, we properly localized the iris region and then proposed a novel algorithm for iris image enhancement and recognition. First, we used an iris image localization algorithm to obtain the exact boundaries of the iris; then we applied a new enhancement algorithm to the localized iris images. The performance was evaluated using the NICE-II database, in which the images have varying characteristics. The results of our localization and enhancement algorithm were compared with those of various other enhancement algorithms, and our proposed method outperformed them. Furthermore, we compared our results with recent state-of-the-art iris recognition methods; our methodology presented better results in terms of both accuracy and processing time. These results can be used to build personalized service systems in special environments, such as a personalized interactive theater where the user gazes in a specific direction most of the time. In the future, we intend to continue this work by localizing the iris region more accurately in order to further improve recognition accuracy.

References

  1. J. G. Daugman, “High Confidence Visual Recognition of Persons by a Test of Statistical Independence,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1160, 1993. https://doi.org/10.1109/34.244676
  2. J. G. Daugman, “Demodulation by Complex-valued Wavelets for Stochastic Pattern Recognition,” International Journal of Wavelets, vol. 1, no. 1, pp. 1-17, 2003.
  3. J. G. Daugman, “How Iris Recognition Works,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004. https://doi.org/10.1109/TCSVT.2003.818350
  4. Independent Testing of Iris Recognition Technology: Final Report, International Biometrics Group, 2005.
  5. R. P. Wildes, “Iris Recognition: An Emerging Biometric Technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
  6. R. P. Wildes, “Iris Recognition,” Biometric Systems, pp. 63-95, 2005.
  7. S. Prabhakar et al., “Introduction to the special issue on biometrics: Progress and directions,” IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 513-516, 2007. https://doi.org/10.1109/TPAMI.2007.1025
  8. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. J. LoIacono, S. Mangru, and W. Y. Zhao, “Iris on the move: Acquisition of images for iris recognition in less constrained environments,” Proceedings of the IEEE, vol. 94, no. 11, pp. 1936-1947, 2006.
  9. M. Vatsa, R. Singh, A. Ross, and A. Noore, “Quality-based fusion for multichannel iris recognition,” in Proc. of the 20th International Conference on Pattern Recognition, pp. 1314-1317, 2010.
  10. X. Feng, C. Fang, X. Ding, and Y. Wu, “Iris localization with dual coarse-to-fine strategy,” in Proc. of the 18th International Conference on Pattern Recognition, pp. 553-556, 2008.
  11. Z. He, T. Tan, and Z. Sun, “Iris localization via pulling and pushing,” in Proc. of the 18th International Conference on Pattern Recognition, vol. 4, pp. 366-369, 2006.
  12. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533, 2003. https://doi.org/10.1109/TPAMI.2003.1251145
  13. W. Kong and D. Zhang, “Detecting the eyelash and reflection for accurate iris segmentation,” International Journal of Pattern Recognition and Artificial Intelligence, pp. 1025-1034, 2003. https://doi.org/10.1142/S0218001403002733
  14. B. Kang and K. Park, “A robust eyelash detection based on iris focus assessment,” Pattern Recognition Letters, vol. 28, pp. 1630-1639, 2007. https://doi.org/10.1016/j.patrec.2007.04.004
  15. D. S. Jeong, J. W. Hwang, B. J. Kang, K. R. Park, C. S. Won, D. K. Park, and J. Kim, “A New Iris Segmentation Method for Non-ideal Iris Images,” Image and Vision Computing, 2010.
  16. P. J. Phillips, K. W. Bowyer, and P. J. Flynn, “Comments on the CASIA version 1.0 iris dataset,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, 2007. https://doi.org/10.1109/TPAMI.2007.1137
  17. CASIA iris image database. [Online].
  18. H. Proenca and L. A. Alexandre, “UBIRIS: a noisy iris image database,” in Proc. of the 13th International Conference on Image Analysis and Processing, vol. 1, pp. 970-977, 2005.
  19. S. K. Kang, J. H. Min, and J. K. Paik, “Segmentation-based spatially adaptive motion blur removal and its application to surveillance systems,” in Proc. of the Int. Conf. on Image Processing, vol. 1, pp. 245-248, 2001.
  20. R. Malladi and J. A. Sethian, “Image processing via level set curvature flow,” Proc. Natl. Acad. Sci., vol. 92, no. 15, pp. 7046-7050, 1995. https://doi.org/10.1073/pnas.92.15.7046
  21. D. G. Sheppard, K. Panchapakesan, A. Bilgin, B. R. Hunt, and M. W. Marcellin, “Removal of image defocus and motion blur effects with a nonlinear interpolative vector quantizer,” in Proc. of the IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 1-5, 1998.
  22. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, 2002.
  23. J. N. Kapur, P. K. Sahoo, and A. K. C. Wong, “A new method for gray-level picture thresholding using the entropy of the histogram,” Computer Vision, Graphics, and Image Processing, vol. 29, pp. 273-285, 1985. https://doi.org/10.1016/0734-189X(85)90125-2
  24. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Trans. on Image Processing, vol. 13, no. 6, pp. 739-750, 2004. https://doi.org/10.1109/TIP.2004.827237
  25. R. Singh, M. Vatsa, and A. Noore, “Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model,” Signal Processing, vol. 87, pp. 2746-2764, 2007. https://doi.org/10.1016/j.sigpro.2007.05.009
  26. M. Vatsa, R. Singh, and A. Noore, “SVM Based Adaptive Biometric Image Enhancement Using Quality Assessment,” Speech, Audio, Image and Biomedical Signal Processing using Neural Networks, Studies in Computational Intelligence, vol. 83, pp. 351-371, 2008.
  27. K. P. Hollingsworth, K. W. Bowyer, and P. J. Flynn, “The best bits in an iris code,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 31, no. 6, pp. 964-973, June 2009. https://doi.org/10.1109/TPAMI.2008.185
  28. W. Dong, Z. Sun, and T. Tan, “Iris matching based on personalized weight map,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 33, no. 9, pp. 1744-1757, September 2011. https://doi.org/10.1109/TPAMI.2010.227
  29. D. M. Monro, S. Rakshit, and D. Zhang, “DCT-based iris recognition,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586-595, April 2007. https://doi.org/10.1109/TPAMI.2007.1002
  30. K. Miyazawa, K. Ito, T. Aoki, K. Kobayashi, and H. Nakajima, “An effective approach for iris recognition using phase-based image matching,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1741-1756, 2007. https://doi.org/10.1109/TPAMI.2007.70833
  31. H. Proenca, “Iris recognition: On the segmentation of degraded images acquired in the visible wavelength,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1502-1516, Aug. 2010. https://doi.org/10.1109/TPAMI.2009.140
  32. R. D. Labati and F. Scotti, “Noisy iris segmentation with boundary regularization and reflections removal,” Image and Vision Computing, vol. 28, no. 2, pp. 270-277, February 2010. https://doi.org/10.1016/j.imavis.2009.05.004
  33. NICE.II: Noisy Iris Challenge Evaluation, Part II. [Online]. Available: http://nice2.di.ubi.pt/, 2012.
  34. L. Ma et al., “Personal identification based on iris texture analysis,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533, 2003. https://doi.org/10.1109/TPAMI.2003.1251145
  35. D. S. Jeong et al., “A new iris segmentation method for non-ideal iris images,” Image and Vision Computing, vol. 28, no. 2, pp. 254-260, 2010. https://doi.org/10.1016/j.imavis.2009.04.001
  36. S. Arora, N. D. Londhe, and A. K. Acharya, “Human identification based on iris recognition for distant images,” International Journal of Computer Applications, vol. 45, no. 16, pp. 32-39, 2012.
  37. S. Ziauddin and M. N. Dailey, “Iris recognition performance enhancement using weighted majority voting,” in Proc. of the 15th IEEE Int. Conf. on Image Processing, California, pp. 277-280, 2008.
  38. C. W. Tan and A. Kumar, “Towards online iris and periocular recognition under relaxed imaging constraints,” IEEE Trans. on Image Processing, vol. 22, no. 10, pp. 3751-3765, Oct. 2013. https://doi.org/10.1109/TIP.2013.2260165
  39. C. Ying et al., “Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion,” The Scientific World Journal, 2014.
  40. K. Roy, P. Bhattacharya, and C. Y. Suen, “Iris recognition using shape-guided approach and game theory,” Pattern Analysis and Applications, vol. 14, no. 4, pp. 329-348, 2011. https://doi.org/10.1007/s10044-011-0229-7
  41. C. W. Tan and A. Kumar, “Accurate Iris Recognition at a Distance Using Stabilized Iris Encoding and Zernike Moments Phase Features,” IEEE Trans. on Image Processing, vol. 23, no. 9, pp. 3962-3974, 2014. https://doi.org/10.1109/TIP.2014.2337714
  42. D. H. Cho, K. R. Park, D. W. Rhee, Y. G. Kim, and J. H. Yang, “Pupil and Iris Localization for Iris Recognition in Mobile Phones,” in Proc. of Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, 2006.
  43. H. C. Park, J. J. Lee, S. K. Oh, Y. C. Song, D. H. Choi, and K. H. Park, “Iris feature extraction and matching based on multi-scale and directional image presentation,” Lecture Notes in Computer Science, pp. 576-583, 2003.
  44. B. J. Kang et al., “Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images,” Optical Engineering, vol. 49, no. 6, p. 067001, 2010. https://doi.org/10.1117/1.3447924
  45. K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” Graphics Gems IV, pp. 474-485, 1994.
  46. Y. K. Jang, B. J. Kang, and K. R. Park, “A Study on Eyelid Localization Considering Image Focus for Iris Recognition,” Pattern Recognition Letters, vol. 29, no. 11, pp. 1698-1704, 2008. https://doi.org/10.1016/j.patrec.2008.05.001
  47. H. A. Park and K. R. Park, “Iris recognition based on score level fusion by using SVM,” Pattern Recognition Letters, vol. 28, no. 15, pp. 2019-2028, 2007. https://doi.org/10.1016/j.patrec.2007.05.017
  48. M. Vatsa, R. Singh, and A. Noore, “SVM Based Adaptive Biometric Image Enhancement Using Quality Assessment,” Speech, Audio, Image and Biomedical Signal Processing using Neural Networks, Studies in Computational Intelligence, vol. 83, pp. 351-371, 2008.
  49. N. K. Ratha and V. Govindaraju, eds., Advances in Biometrics: Sensors, Algorithms and Systems, Springer Science & Business Media, 2007.
  50. C. Andrade and H. V. S. Sebastian, “Investigating and comparing multimodal biometric techniques,” Springer US, 2008.
  51. G. Kaur, A. Girdhar, and M. Kaur, “Enhanced Iris Recognition System - an Integrated Approach to Person Identification,” International Journal of Computer Applications, pp. 1-5, 2010.
  52. Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on iris patterns,” in Proc. of the International Conference on Pattern Recognition, vol. 2, pp. 801-804, 2000.
  53. M. Zhang, Z. Sun, and T. Tan, “Perturbation-enhanced feature correlation filter for robust iris recognition,” IET Biometrics, vol. 1, no. 1, pp. 37-45, 2012. https://doi.org/10.1049/iet-bmt.2012.0002
  54. Z. He, T. Tan, Z. Sun, and X. Qiu, “Toward accurate and fast iris segmentation for iris biometrics,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1670-1684, 2009. https://doi.org/10.1109/TPAMI.2008.183
  55. N. Sazonova and S. Schuckers, “Fast and efficient iris image enhancement using logarithmic image processing,” in Proc. of SPIE Defense, Security, and Sensing, p. 76670K, 2010.
