1. Introduction
Finger vein recognition, which is convenient, non-invasive, and highly secure [1], has attracted considerable attention in the past decade. Finger veins are subcutaneous structures that randomly develop into a network and spread along a finger. The physiological properties of the veins make them highly secure against counterfeiting and less affected by conditions of the outer skin (such as skin disease, humidity, and dirt) [2]. Furthermore, compared with traditional biometrics (face, fingerprint, iris, gait, retina, palmprint, etc.), finger vein recognition technology offers the benefits of a small imaging device, low cost, contactless image collection, universality, and inherent liveness [3,4]. Hence, finger vein recognition is considered one of the most promising solutions for personal identification in the future [5].
Novel feature representation is an important branch of research on finger vein identification. To a large extent, the effectiveness of a feature representation method determines the performance of a finger vein identification system. Vein-pattern-based feature extraction methods are very popular and have developed considerably in recent years [6-9]. For example, Miura et al. proposed two methods for finger vein pattern extraction: one uses repeated line tracking [6], and the other is based on local maximum curvature [7]. To reduce the dependence on parameters in the previous methods, Song et al. presented a finger vein pattern extraction method using mean curvature, which treats the vein image as a geometric shape and locates the valley-like structures with negative mean curvature [8]. Xie et al. introduced a finger vein pattern extraction method based on a Guided Gabor filter [9]. These methods perform well under the assumption that the blood vessel networks are segmented properly. However, segmentation errors may occur due to the low quality of finger vein images caused by uneven illumination [10] and skin scattering [11]. Moreover, the accuracy of vein segmentation is easily affected by image translation, rotation, and scale changes. Hence, the recognition performance of these methods degrades when the vein networks are not segmented properly.
To alleviate the difficulty of segmentation, some investigations have focused on extracting local features from finger vein images. In one approach, the local binary pattern (LBP) is extracted for finger vein recognition [12,13]. Inspired by the superior performance of the local line binary pattern (LLBP) over LBP in face recognition [14], Bakhtiar et al. employed LLBP for finger vein recognition [15]. The main difference between LBP and LLBP is that LLBP uses straight-line neighborhoods rather than the square or circular neighborhoods of LBP. LLBP performs better than LBP, since the veins run inside the finger in a piecewise-linear style [15].
However, LLBP is limited in that it extracts only horizontal and vertical line patterns. Since the veins develop randomly inside a finger with various orientations, we believe that line patterns extracted only from the horizontal and vertical orientations may not contain the most discriminative information [16]. In addition, finger vein images carry rich orientation information, as fingerprint images do [17]. Therefore, to extract more effective orientation features from finger vein images and to better utilize them, in this paper we propose an orientation-selectable feature representation method called the generalized local line binary pattern (GLLBP). Fig. 1 shows the block diagram of the proposed method. A robust finger vein region-of-interest (ROI) localization method based on flexible segmentation is first employed [18]. Since the length of the line pattern is much greater than the diameter of the LBP neighborhood, GLLBP labels each pixel in an image by thresholding the points in a line against the average gray value of all points in that line, rather than against the center pixel value. In addition, GLLBP has good orientation selectivity, which allows line patterns to be extracted from arbitrary orientations; the line patterns with the most discriminative ability can then be selected for matching. As shown in Fig. 1, features fθ1 to fθn denote the GLLBP features extracted from different orientations. To further enhance the matching accuracy and generalization ability, the soft power metric is employed to measure the similarity of two GLLBP histograms. Furthermore, to fully utilize the effective information in a finger vein image, the matching scores with the most discriminative ability are fused using the Hamacher rule. Experimental results on our established database, MMCBNU_6000, demonstrate that the proposed method outperforms state-of-the-art methods that use oriented and local features.
Fig. 1. Block diagram of the proposed method
The rest of this paper is organized as follows: Section 2 presents the proposed GLLBP for finger vein feature extraction in detail. Matching score-level fusion with the selection of effective features is described in Section 3. Section 4 introduces the finger vein dataset and provides extensive experimental results to verify the superiority of the proposed method. The conclusion and ideas for further exploration are summarized in Section 5.
2. Feature Extraction Using GLLBP
In this section, we first review LLBP. Then, the proposed generalized local line binary pattern (GLLBP) is presented, followed by an analysis of the discriminative ability of GLLBP line patterns in different orientations.
2.1. Local Line Binary Pattern
LBP is an effective texture representation method that has shown excellent performance in many comparative studies, in terms of both speed and discrimination [19,20]. LBP codes have been extracted for finger vein recognition [12,13]. Considering that finger veins run inside a finger in a line style, Bakhtiar et al. employed LLBP for finger vein recognition in [15].
Fig. 2. LLBP operator
Unlike the square shapes used in LBP, the shapes of the LLBP operator are straight lines, as shown in Fig. 2. LLBP consists of a horizontal component and a vertical component. For each component, LLBP labels each pixel of the image by thresholding the points in the line with the gray value of the center pixel. The magnitude of LLBP is obtained by combining the line binary codes of both components. An illustration of the LLBP operator is depicted in Fig. 2, and its definitions are given in Equations (1)-(5):

$$LLBP_{h,N,c}=\sum_{i=1}^{c-1}s(g_{h,i}-g_{h,c})\cdot 2^{c-i-1}+\sum_{i=c+1}^{N}s(g_{h,i}-g_{h,c})\cdot 2^{i-c-1} \quad (1)$$

$$LLBP_{v,N,c}=\sum_{i=1}^{c-1}s(g_{v,i}-g_{v,c})\cdot 2^{c-i-1}+\sum_{i=c+1}^{N}s(g_{v,i}-g_{v,c})\cdot 2^{i-c-1} \quad (2)$$

$$LLBP_{m,N,c}=\sqrt{LLBP_{h,N,c}^{2}+LLBP_{v,N,c}^{2}} \quad (3)$$

$$s(x)=1,\quad x\geq 0 \quad (4)$$

$$s(x)=0,\quad x<0 \quad (5)$$
where LLBPh,N,c and LLBPv,N,c are the LLBP values from the horizontal and vertical orientations, respectively, and LLBPm,N,c is the corresponding magnitude. gh,i and gv,i denote the gray values of pixels along the horizontal and vertical lines, respectively. N is the length of the line in pixels, and c=(N+1)/2 is the position of the center pixel. Hence, gh,c and gv,c are the center pixel values in the horizontal and vertical line patterns, respectively. The function s(·) is the thresholding function shown in Equations (4) and (5).
Using Equations (1) and (4), the horizontal LLBP component (LLBPh,N,c) extracts a binary code with N−1 bits for each pixel in the ROI. The same number of bits is extracted by the vertical LLBP component (LLBPv,N,c) using Equations (2) and (5). Consequently, by concatenating the binary codes from LLBPh,N,c and LLBPv,N,c, the total LLBP binary code for each pixel is 2(N−1) bits. The decimal value of each pixel is then produced by converting the binary codes.
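To make the operator concrete, the following is a minimal Python sketch of the LLBP computation in Equations (1)-(5); the function name llbp and the brute-force nested loops are our illustrative assumptions rather than the authors' code, and border pixels are simply left at zero.

```python
import numpy as np

def llbp(image, N=15):
    """A sketch of the LLBP operator (Equations (1)-(5)). For each pixel,
    the N-point horizontal and vertical lines centered on it are
    thresholded against the center pixel value, weighted, and summed;
    the magnitude combines both components."""
    img = image.astype(np.float64)
    h, w = img.shape
    c = (N + 1) // 2                 # 1-based position of the center pixel
    r = c - 1                        # half-length of the line in pixels
    # Bit weights 2^(c-i-1) for i < c and 2^(i-c-1) for i > c (1-based i).
    weights = np.array([2.0 ** ((c - i - 1) if i < c else (i - c - 1))
                        for i in range(1, N + 1) if i != c])
    llbp_h = np.zeros((h, w))
    llbp_v = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            hline = img[y, x - r:x + r + 1]     # horizontal N-point line
            vline = img[y - r:y + r + 1, x]     # vertical N-point line
            gh = np.delete(hline, r)            # exclude the center pixel
            gv = np.delete(vline, r)
            llbp_h[y, x] = np.sum((gh >= img[y, x]) * weights)
            llbp_v[y, x] = np.sum((gv >= img[y, x]) * weights)
    llbp_m = np.sqrt(llbp_h ** 2 + llbp_v ** 2)  # magnitude, Equation (3)
    return llbp_h, llbp_v, llbp_m
```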
2.2. Proposed GLLBP
As shown in Fig. 2, the length of the line pattern is much larger than the diameter of the LBP neighborhood, which means there may be large differences between the center pixel value and the other pixel values in the line. We believe that thresholding against the center pixel of the line may not be the optimal way to describe the micro-line pattern at that pixel, especially for lines located near the edges and corners of a finger vein image. Hence, in GLLBP, to better represent the micro-line pattern of each pixel in an image, the average gray value of the points in the line replaces the gray value of the center pixel as the threshold.
In addition, the LLBP operator considers only the horizontal and vertical orientations of an image; within the square window in Fig. 2, only the pixels in the red rectangles are used for computing LLBP values. However, since patterns such as finger vein networks consist of piecewise-linear segments with different orientations and lengths, it is better to use features extracted by operators at multiple orientations to improve the matching results.
Inspired by the power of the Gabor filter [2,9] in capturing specific texture characteristics at any orientation of an image, the proposed GLLBP extends LLBP to line pattern extraction at arbitrary orientations. Hence, GLLBP has superior orientation selectivity over LLBP. Circular templates, as used in LBP, are adopted in GLLBP to compute the GLLBP values. The line pattern at an arbitrary orientation can be extracted using the following equations:

$$GLLBP_{\theta,N,c}=\sum_{i=1}^{c-1}s(g_{\theta,i}-g_{\theta,average})\cdot 2^{c-i-1}+\sum_{i=c+1}^{N}s(g_{\theta,i}-g_{\theta,average})\cdot 2^{i-c-1},\quad \theta=\frac{(k-1)\pi}{K},\; k=1,\ldots,K \quad (6)$$

$$g_{\theta,average}=\frac{1}{N}\sum_{i=1}^{N}g_{\theta,i} \quad (7)$$
where θ is the orientation, K is the number of considered orientations, and gθ,i are the gray values of the points (shown in brown in Fig. 3) in the line at orientation θ, excluding the center point. gθ,average denotes the average gray value of all the points in the line at orientation θ. For the pixels in the lines at the horizontal and vertical orientations shown in Figs. 3(a) and 3(d), the pixel values are used directly to compute the GLLBP values. The values of the points (shown in brown in Figs. 3(b), 3(c), 3(e), and 3(f)) in the line patterns at other orientations are first computed from their surrounding pixel values using bilinear interpolation. Then, Equations (6) and (7) are applied to compute the GLLBP values at orientation θ.
Fig. 3. Circular templates used for computing GLLBP line patterns from 6 orientations
Fig. 3 depicts an example of the circular templates employed to compute the GLLBP line patterns at 6 orientations. For each template, the black point in the center, denoted gθ,average, is the average gray value of all points in the line. The points gθ,i in brown are compared with gθ,average to compute the GLLBP value.
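The sketch below illustrates how a GLLBP component at one orientation could be computed under the definitions above, including the bilinear interpolation of off-grid line points; the helper names (bilinear, gllbp) and the looping strategy are our assumptions, not the authors' implementation.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly interpolated gray value at real-valued coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])

def gllbp(image, theta_deg, N=15):
    """A sketch of the GLLBP operator for one orientation (Equations (6)-(7)).
    Each pixel is labeled by thresholding the N-1 non-center points of the
    N-point line at orientation theta against the average gray value of all
    N points in that line."""
    img = image.astype(np.float64)
    h, w = img.shape
    c = (N + 1) // 2
    r = c - 1
    t = np.deg2rad(theta_deg)
    # Offsets of the N line points relative to the center pixel.
    offs = [(i * np.sin(t), i * np.cos(t)) for i in range(-r, r + 1)]
    weights = np.array([2.0 ** (abs(i) - 1) for i in range(-r, r + 1) if i != 0])
    out = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            line = np.array([bilinear(img, y + dy, x + dx) for dy, dx in offs])
            g_avg = line.mean()              # average of all N points, Eq. (7)
            g = np.delete(line, r)           # drop the center point
            out[y, x] = np.sum((g >= g_avg) * weights)
    return out
```

For θ = 0° and 90° the sampled points fall on the pixel grid, so the interpolation reduces to the original pixel values, as noted above.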
Table 1 shows the GLLBP values of line patterns at six different orientations for the same sub-block used to illustrate the LLBP computation in Fig. 2. Although the values are displayed as integers, their real values are used in the computation. The gray values of the center points, labeled in bold, are gθ,average.
Table 1. Line patterns from different orientations and their GLLBP values (K = 6, N = 15)
To extract the line patterns with the most discriminative ability and to sufficiently utilize the information in a finger vein image, the discriminative abilities of line patterns at different orientations are evaluated. Fig. 4 depicts localized ROIs and their corresponding GLLBP images at 6 orientations. Since N used to compute the GLLBP line pattern is 15, the gray values of the GLLBP images lie in the range 0-255. The GLLBP images have a size of 46×114 pixels. The white curves indicate the vein patterns in the ROIs. It can be seen from Fig. 4 that the GLLBP images at different orientations show different outputs, which implies that the discriminative ability of a component varies with orientation. Furthermore, compared with the GLLBP images at 30°, 60°, 90°, 120°, and 150°, the GLLBP images at 0° contain fewer veins.
Fig. 4. ROI images and their corresponding GLLBP images at different orientations when N=15: (a) ROI images, (b)-(g) their GLLBP images at orientations θ = 0°, 30°, 60°, 90°, 120°, 150°, respectively.
2.3. Feature Extraction Using GLLBP
There are two methods commonly used for feature representation with LBP and its variants. One is to concatenate the binary codes of each point to represent an image. The other is to use a histogram to represent the micro patterns in the LBP or LLBP image; in this case, the LBP or LLBP image is usually divided into non-overlapping sub-images to capture the local features of different parts of the image, and the histograms of the sub-images are concatenated. The first method performs well under the assumption that ROIs are accurately segmented from the acquired image. However, segmentation errors may occur due to the low quality of finger vein images caused by uneven illumination [10] and skin scattering [11]. In addition, the first method suffers from very high dimensionality, especially when the number of surrounding points is large. Therefore, the second method, which extracts histograms from a GLLBP image, is employed in this paper.
The feature size of each GLLBP histogram from one orientation θ depends on the length N of the line pattern, which determines the number of histogram bins, and on the m×n partitioning of the GLLBP image into non-overlapping sub-images: the final feature is the concatenation of the m×n sub-image histograms.
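As an illustration, the following sketch builds the histogram feature from one GLLBP image under the partitioning scheme described above; the per-block normalization is our assumption. With 256 bins and a 2×2 partition, the feature size is 256×4 = 1,024, which matches the per-component dimensionality discussed in Section 4.3.

```python
import numpy as np

def gllbp_histogram(gllbp_image, m=2, n=2, bins=256):
    """A sketch of histogram-based GLLBP feature extraction: the GLLBP
    image is divided into m x n non-overlapping sub-images, a gray-level
    histogram is computed for each, and the histograms are concatenated
    into one feature vector of length bins * m * n."""
    h, w = gllbp_image.shape
    hists = []
    for i in range(m):
        for j in range(n):
            block = gllbp_image[i * h // m:(i + 1) * h // m,
                                j * w // n:(j + 1) * w // n]
            hb, _ = np.histogram(block, bins=bins, range=(0, bins))
            hists.append(hb / max(hb.sum(), 1))   # normalization (assumed)
    return np.concatenate(hists)
```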
3. Matching
3.1. Matching Metric
There are various metrics for evaluating the similarity of two histograms, such as histogram intersection, log-likelihood ratio, and chi-square. In this paper, an effective metric called the soft power metric [20] is employed to compute the similarity of two histograms. With the soft power metric, the original histogram feature space is transformed into a new feature space, in which the Euclidean distance metric is applied. The Euclidean metric in this space has the same characteristic as the chi-square metric in the histogram feature space, in that a greater magnitude of the histogram corresponds to less dissimilarity of the patterns [20]. The soft power metric is defined as follows:

$$D(X,Y)=\sqrt{\sum_{i=1}^{n}\left(x_{i}^{k}-y_{i}^{k}\right)^{2}} \quad (10)$$
where X = (x1, ..., xn) and Y = (y1, ..., yn) are two histograms with n bins, and k > 0 is the power parameter.
As shown in Equation (10), the soft power metric is a generalized Euclidean metric: it reduces to the Euclidean metric when k=1. The optimal value of k can be investigated by maximizing the cross-validation matching accuracy over k, as described in Section 4.2.3.
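A minimal sketch of this metric, assuming the bin-wise power transform of [20] followed by the Euclidean distance, is given below; the function name is hypothetical.

```python
import numpy as np

def soft_power_distance(x, y, k=0.6):
    """A sketch of the soft power metric (Equation (10)): each (non-negative)
    histogram bin is raised to the power k before the Euclidean distance is
    applied; k = 1 reduces to the plain Euclidean metric. k = 0.6 is the
    value selected in Section 4.2.3."""
    return np.sqrt(np.sum((x ** k - y ** k) ** 2))
```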
Table 2. Comparison of matching accuracy using 10-fold cross validation (standard deviation)
Table 3. Comparison of matching accuracy using 10-fold cross validation with varying N (standard deviation)
Table 4. Comparison of processing time and feature size with varying N
Table 5. Comparison of EER after fusing the matching scores with different rules
Table 6. EERs of different methods
3.2. Selection of Effective Features for Matching
In general, using more features yields better final results. However, extracting one GLLBP component, which is used as a feature in this paper, and involving it in the matching process takes 130.7 ms (refer to Table 7). Therefore, considering the matching accuracy, memory space, and system processing time, only a few competitive GLLBP components are used for finger vein recognition in the proposed method.
Table 7. Comparison of average processing time and feature dimensionality
As shown in Fig. 5, since the proposed method fuses matching scores to obtain the final result, its performance depends on how many and which GLLBP components are actually used. To select effective features for matching, we test the matching accuracy of the GLLBP line patterns at each orientation. In Table 2, 'center point value' indicates that the gray value of the center point is used as gθ,c to calculate the GLLBP component, while 'average value' denotes that the average gray value of the points is used as gθ,average. Each GLLBP image is divided into 3×3 non-overlapping sub-images to generate the final feature, and the nearest neighbor classifier is applied for classification. First, Table 2 clearly demonstrates that thresholding with the average gray value yields better matching accuracy than thresholding with the center point value as in LLBP [15]. Furthermore, the generalization ability is also enhanced by the proposed method, as shown by the smaller standard deviations in the 10-fold cross validation. The GLLBP components at 30°, 60°, 90°, 120°, and 150° also perform better than the one at 0°, which accords with the vein richness of the GLLBP images in Fig. 4. Additionally, Table 2 shows that the GLLBP components at 60°, 120°, and 150° perform better than those at 0°, 30°, and 90°, which verifies that the GLLBP components with the best discriminative ability are not at 0° and 90° (the orientations used in LLBP [15]).
Fig. 5. Block diagram of the GLLBP-based method proposed for finger vein recognition: (a) ROI image, (b) three circular templates for computing GLLBP line patterns at 60°, 120°, and 150°, (c) corresponding GLLBP images, (d) orientation-based matching scores obtained using the soft power metric, and (e) fused matching score obtained by the Hamacher rule.
Based on these experimental analyses, as shown in Fig. 5, only the GLLBP line patterns generated at 60°, 120°, and 150° for each ROI image are selected to participate in the proposed finger vein recognition process.
3.3. Matching Score-Level Fusion
As mentioned, the effective information is not fully utilized in LLBP. To solve this problem, the matching scores from the GLLBP components with the most discriminative ability are fused to generate the final matching score. The GLLBP components at 60°, 120°, and 150° have shown the best matching accuracy. Hence, as shown in Fig. 5, the matching scores from these three orientations are fused to obtain the final matching score.
The sum rule, max rule, and Hamacher rule are frequently used fusion rules for computing the final matching score. Let Sθ denote the matching score at orientation θ computed using the soft power metric. The fused matching score sf can then be calculated with different rules, such as:

$$s_{f}=\sum_{\theta}S_{\theta} \quad \text{(sum rule)}$$

$$s_{f}=\max_{\theta}S_{\theta} \quad \text{(max rule)}$$

$$s_{f}=\frac{S_{\theta_{1}}S_{\theta_{2}}}{S_{\theta_{1}}+S_{\theta_{2}}-S_{\theta_{1}}S_{\theta_{2}}} \quad \text{(Hamacher rule)}$$
In this paper, the Hamacher rule is employed to fuse the matching scores at 60°, 120°, and 150°. Since the Hamacher rule is associative, their fused score sf(60,120,150) can be calculated by applying the rule pairwise:

$$s_{f(60,120,150)}=H\left(H\left(S_{60^{\circ}},S_{120^{\circ}}\right),S_{150^{\circ}}\right),\qquad H(a,b)=\frac{ab}{a+b-ab}$$
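The sketch below implements the three fusion rules on a list of matching scores; it assumes the scores are normalized to [0, 1] and applies the Hamacher t-norm pairwise, which is well defined because the t-norm is associative.

```python
def fuse_scores(scores, rule="hamacher"):
    """A sketch of matching score fusion. Scores are assumed to be
    normalized to [0, 1]; the Hamacher rule H(a, b) = ab / (a + b - ab)
    is applied pairwise across the list."""
    if rule == "sum":
        return sum(scores)
    if rule == "max":
        return max(scores)
    fused = scores[0]
    for s in scores[1:]:
        denom = fused + s - fused * s
        fused = (fused * s) / denom if denom > 0 else 0.0
    return fused

# Example: fuse the soft-power matching scores at 60, 120, and 150 degrees.
# s_f = fuse_scores([s60, s120, s150], rule="hamacher")
```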
4. Experimental Results
In our previous work [21,22], a finger vein image database named MMCBNU_6000 was established using the lab-made imaging device shown in Fig. 6. In this section, we present experiments performed on this dataset. All experiments are performed in MATLAB (R2010a) on a computer with an Intel Core i3-2120 processor and 4 GB of RAM.
Fig. 6. Lab-made imaging device: (a) the lab-made portable finger vein imaging device, and (b) an example of image collection for the left forefinger
4.1. Lab-Made Finger Vein Imaging Device and Dataset
As shown in Fig. 6(a), the lab-made finger vein imaging device is composed of a camera with an infrared-pass filter and an array of infrared LEDs with a wavelength of 850 nm. To improve the convenience of image collection, a holder is added to the back of the device, with a hole slightly larger than the thickness of an adult's fingertip. In addition, a USB interface supplies the power, which allows for portability. The size of our device is 6.8×5.4×10.1 cm (length×width×height), about the same height as a small can of coffee (175 ml).
The MMCBNU_6000 database consists of finger vein images captured from 100 volunteers from 20 countries in Asia, Europe, Africa, and America. The ages of the volunteers range from 16 to 72 years. Each subject was asked to provide images of the index, middle, and ring fingers of both hands in a standard office environment. The collection for each of the 6 fingers was repeated 10 times, yielding 10 finger vein images per finger. Hence, our finger vein database is composed of 6,000 images. Each image is stored in BMP format at 480×640 pixels. The localized ROI image has a size of 64×128 pixels.
4.2. Investigation of Optimal Parameters
The matching accuracy of the finger vein recognition system using the proposed method depends on four factors: the length N of the line pattern, the partitioning style of the GLLBP image, the orientation of the GLLBP component, and the parameter k of the soft power metric. Since the discriminative ability of GLLBP components at different orientations was evaluated in Section 3.2, three experiments are designed in this section to explore the remaining optimal parameters. 10-fold cross validation with the nearest neighbor classifier is employed to evaluate the recognition performance under varying parameters.
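For reference, a sketch of this evaluation protocol is given below. It assumes the 10 images of each finger are stored consecutively in the feature array, so that each fold holds out exactly one image per finger; the default Euclidean distance corresponds to the setting of Section 4.2.1, and the function names are ours.

```python
import numpy as np

def nn_cross_validation(features, labels, n_folds=10,
                        dist=lambda a, b: np.linalg.norm(a - b)):
    """A sketch of 10-fold cross validation with a nearest neighbor
    classifier: in each fold, one image per finger is held out and
    assigned the label of its minimum-distance training image."""
    n = len(labels)
    accuracies = []
    for fold in range(n_folds):
        test_idx = np.arange(fold, n, n_folds)            # one image per finger
        train_idx = np.setdiff1d(np.arange(n), test_idx)  # the remaining nine
        correct = 0
        for t in test_idx:
            d = [dist(features[t], features[j]) for j in train_idx]
            correct += int(labels[train_idx[int(np.argmin(d))]] == labels[t])
        accuracies.append(correct / len(test_idx))
    return float(np.mean(accuracies)), float(np.std(accuracies))
```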
4.2.1. Searching the Optimal N
The first experiment is devised to explore the optimal length N of the line pattern. To do this, we calculate the 10-fold cross-validation matching accuracy of the vertical and horizontal GLLBP components. In this experiment, GLLBP images are divided into 2×2 non-overlapping sub-images, and the Euclidean distance metric is employed for similarity measurement. As shown in Table 3, the matching accuracy of the proposed GLLBP improves with increasing N. In addition, the standard deviation decreases, which indicates enhanced generalization ability.
In terms of processing time, the matching time keeps increasing with N, as shown in Table 4, which is attributed to the growing feature size. Furthermore, the feature extraction time for the components at 30°, 60°, 120°, and 150° is longer than for the components at 0° and 90°, due to the bilinear interpolation needed to compute the pixel values. As listed in Table 4, since the size of a GLLBP image decreases as N grows, the feature extraction time decreases. Taking the matching accuracy, processing time, and feature size into consideration, N is set to 15 in the later experiments.
4.2.2. Searching the Optimal Partitioning Style
The experiment in this part investigates the optimal partitioning style. Since the feature size depends heavily on the number of sub-images, the number of sub-images is limited to less than 10 in this experiment. Hence, to investigate the contribution of the partitioning style to the recognition performance, five partitioning modes are compared: 1×1 (no partitioning), 2×2, 3×3, 2×4, and 4×2.
Fig. 7 depicts the matching accuracy of 10-fold cross validation with different partitioning styles. It is clearly shown in Fig. 7 that the matching accuracy improves continually as the number of sub-images increases, although the extent of the improvement diminishes. Hence, after careful consideration of the matching accuracy and feature dimensionality, the 3×3 partitioning style is selected.
Fig. 7. Matching accuracy of 10-fold cross validation with varying partitioning styles
4.2.3. Searching the Optimal k
Since the matching accuracy of the proposed method also varies with the parameter k of the soft power metric, an experiment is performed to investigate the optimal k. Figs. 8(a) and 8(b) show the matching accuracy and the corresponding standard deviations of the GLLBP components at 30°, 60°, 90°, 120°, and 150° as k increases.
Fig. 8. Matching accuracy and standard deviations of 10-fold cross validation for GLLBP histograms with θ = 30°, 60°, 90°, 120°, 150° and varying k: (a) matching accuracy, and (b) standard deviations.
It is clearly demonstrated in Figs. 8(a) and 8(b) that the matching accuracies of the GLLBP components at these five orientations improve as k increases from 0.05 to 0.55, while the corresponding standard deviations keep decreasing over the same range. The 10-fold cross-validation matching accuracies for the GLLBP components (k=0.55) at 30°, 60°, 90°, 120°, and 150° are 99.70% (0.0038), 99.75% (0.0034), 99.56% (0.0058), 99.75% (0.0029), and 99.75% (0.0035), respectively. This verifies that the soft power metric can improve the matching accuracy and generalization ability. As k increases beyond 0.55, the matching accuracies at these five orientations decrease, with larger standard deviations. Also, the discriminative ordering of the GLLBP components under the soft power metric accords with the experimental results without the soft power metric in Section 3.2.
By using the soft power metric with an optimal parameter k, the matching accuracy is enhanced. Meanwhile, the generalization ability of the system is also enhanced, as verified by the decreasing standard deviation shown in Fig. 8(b). Motivated by this, we test whether the soft power metric can substantially improve the matching accuracy under other partitioning styles.
Figs. 9, 10, and 11 depict the matching accuracy of the GLLBP components at 60°, 120°, and 150° with two partitioning styles; the corresponding standard deviations with varying k are also displayed. The figures show that, with the soft power metric, the matching accuracy gaps between the 2×2 and 3×3 partitioning styles are reduced; the differences, in both accuracy and standard deviation, are smallest when k is 0.6. At k=0.6, the performance gaps between dividing the GLLBP image into 2×2 and 3×3 sub-images are 0.12%, 0.37%, and 0.35% for the GLLBP histograms generated at 60°, 120°, and 150°, respectively. Therefore, in consideration of the matching accuracy, memory space, and processing time, the partitioning style is finally selected as 2×2, and k is set to 0.6.
Fig. 9. Matching accuracy of 10-fold cross validation for GLLBP histograms generated using different partitioning styles and varying k, θ=60°
Fig. 10. Matching accuracy of 10-fold cross validation for GLLBP histograms generated using different partitioning styles and varying k, θ=120°
Fig. 11. Matching accuracy of 10-fold cross validation for GLLBP histograms generated using different partitioning styles and varying k, θ=150°
4.3. Matching Accuracy
To sufficiently utilize the discriminative information in a finger vein image, this section compares the matching accuracy obtained by fusing the matching scores at different orientations with different rules. To evaluate the performance enhancement after matching score fusion, the equal error rate (EER), the value at which the false accept rate (FAR) equals the false reject rate (FRR), is adopted. To calculate the EER, five finger vein images of each individual are selected as the training set, while the other five images are used as the test set. Hence, the training and test databases are each composed of 3,000 images. Each finger is considered an individual class, giving 600 classes. The number of genuine matches is 3,000, and the number of imposter matches is 3,000×599 = 1,797,000.
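A sketch of how the EER could be computed from the resulting genuine and imposter distance scores is given below; the simple threshold sweep is our simplification of the FAR/FRR crossing search, assuming lower distances indicate better matches.

```python
import numpy as np

def compute_eer(genuine, imposter):
    """A sketch of EER computation: sweep a decision threshold over all
    observed scores and report the point where the false accept rate
    (FAR) and false reject rate (FRR) are closest."""
    thresholds = np.unique(np.concatenate([genuine, imposter]))
    eer, best_gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(imposter <= t)   # imposter pairs accepted at t
        frr = np.mean(genuine > t)     # genuine pairs rejected at t
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            eer = (far + frr) / 2
    return eer

# With the protocol above, 3,000 genuine and 1,797,000 imposter distances
# would be collected before calling compute_eer(genuine, imposter).
```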
Table 5 shows the comparison of the EERs after fusing the matching scores with different rules. The EERs for the GLLBP components at 60°, 120°, and 150° are 1.20%, 1.36%, and 1.40%, respectively, when k is 0.6. It is clearly shown that no matter which fusion rule is employed, the EER is reduced after fusing the matching scores. This verifies that the matching scores at different orientations are compatible and complementary. Furthermore, compared with the other fusion rules, the EERs computed using the Hamacher rule are the smallest. In general, the more features are used, the better the final results. However, although the matching performance using the fused matching score sf(30,60,120,150) is better than that using sf(60,120,150), it is not cost-efficient, since the small gain in matching accuracy comes at the cost of an additional 122.3 ms of processing time and 1,024 feature dimensions.
4.4. Comparison in Matching Accuracy with Other Methods
To evaluate the effectiveness of the proposed method, similar methods using oriented and local features, namely LBP [12], LLBP [15], the local directional code (LDC) [23], the Gabor filter [9], and the steerable filter [24], are implemented for comparison. The EER is employed to evaluate the matching performance. For a fair comparison, all the algorithms are performed on ROIs from MMCBNU_6000 without any post-processing such as image denoising or image enhancement.
As shown in Fig. 12, LDC-00 and LDC-45 represent the local directional codes extracted at 0 and 45 degrees. LLBP-15 denotes the combination of the LLBP codes at 0° and 90° with a line pattern length of 15. To reduce the processing time and storage space, the ROI is reduced to half its size to generate the LLBP codes and, following the operation in [12], to one third of its size to generate the LBP codes. For these three methods, the original distance metrics employed in [12,15,23] are applied for a fair comparison. The partitioning styles used to generate the features of the Gabor filter [9] and the steerable filter [24] are the same as those in the references, and the Euclidean distance is employed for matching these two methods.
Fig. 12. ROC curves using different methods
Compared with the proposed method, LDC-00 and LDC-45 extract only the local directional codes of an image, while the LLBP codes extracted at 0 and 90 degrees cannot exploit the most discriminative features. In addition, since the finger veins develop randomly inside the finger in a line style, line patterns represent the oriented features in a finger vein image better than square neighborhoods, so the performance of LBP is poorer than that of the proposed method. Although the Gabor filter and the steerable filter can exploit local features at 8 orientations, their discriminative ability is also poorer than that of the proposed method. Therefore, as shown by the receiver operating characteristic (ROC) curves in Fig. 12 and the EER values listed in Table 6, the proposed method outperforms the other methods, achieving the lowest EER of 0.61%.
4.5. Comparison in Average Processing Time with Other Methods
In this part, we compare the average processing time of all the algorithms compared in Fig. 12. All the algorithms are run on the 6,000 images of MMCBNU_6000, and the feature extraction time includes the time for saving the features. As shown in Table 7, the code-based methods have larger feature dimensionality than the other methods, and with a larger feature size to store, their processing time is also longer. Since the distance metrics applied in the Gabor filter [9], the steerable filter [24], and the proposed method are all the Euclidean distance or a variation thereof, the matching time depends only on the feature dimensionality; hence, the steerable filter has the smallest matching time due to its smallest feature size. The total processing time of the proposed method is 392.1 ms, including the feature extraction and matching time for the GLLBP components at three orientations. However, the feature extraction and matching for each orientation can be performed in parallel; with parallel calculation, the processing time of the proposed method is 130.7 ms per finger vein image. Note that the program code is not optimized, so the processing time could be reduced further.
5. Conclusion
This paper presented a finger vein recognition method based on the generalized local line binary pattern (GLLBP). GLLBP can exploit the line pattern at any orientation, so the discriminative ability of a GLLBP component at any orientation can be analyzed. It was shown that the GLLBP components with the most discriminative ability are at 60, 120, and 150 degrees. Since the length of the line pattern in GLLBP is much larger than the radius of the LBP neighborhood, the gray value of the center pixel of the line was replaced by the average gray value of all the pixels in the line. The soft power metric was verified to provide superior performance in terms of matching accuracy and generalization ability. In addition, the matching score fusion of the scores at 60, 120, and 150 degrees effectively enhances the matching performance. Experimental results on our collected MMCBNU_6000 database showed the smallest EER of 0.61%, illustrating that the proposed method outperforms existing methods such as LBP, LLBP, LDC, the Gabor filter, and the steerable filter.
References
- W.C. Yang, J.K. Hu, and S. Wang, "A finger-vein based cancellable bio-cryptosystem," Network and System Security, pp. 784-790, Springer Berlin Heidelberg, 2013.
- S.J. Xie, S. Yoon, J.C. Yang, Y. Lu, D.S. Park, and B. Zhou, "Feature component-based extreme learning machines for finger vein recognition," Cognitive Computation, 2014.
- J.C. Hashimoto, "Finger vein authentication technology and its future," in Proc. of Symposium on VLSI Circuits, Digest of Technical Papers, pp. 5-8, June, 2006.
- T. Yanagawa, S. Aoki, and T. Ohyama, "Human finger vein images are diverse and its patterns are useful for personal identification," in Proc. of Kyushu University MHF Preprint Series, pp. 1-8, April, 2007.
- J.F. Yang, and Y.H. Shi, "Finger-vein ROI localization and vein ridge enhancement," Pattern Recognition Letters, vol.33, no.12, pp. 1569-1579, September, 2012. https://doi.org/10.1016/j.patrec.2012.04.018
- N. Miura, A. Nagasaka, and T. Miyatake, "Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification," Machine Vision Applications, vol.15, pp. 194-203, July, 2004. https://doi.org/10.1007/s00138-004-0149-2
- N. Miura, A. Nagasaka, and T. Miyatake, "Extraction of finger-vein patterns using maximum curvature points in image profiles," IEICE Transactions on Information and Systems, vol.90, no.8, pp. 1185-1194, August, 2007.
- W. Song, T. Kim, H.C. Kim, J.H. Choi, H-J. Kong, and S-R. Lee, "A finger-vein verification system using mean curvature," Pattern Recognition Letters, vol. 32, no. 11, pp. 1541-1547, August, 2011. https://doi.org/10.1016/j.patrec.2011.04.021
- S.J. Xie, J.C. Yang, S. Yoon, Y. Lu, and D.S. Park, "Guided Gabor filter for finger vein pattern extraction," in Proc. of 8th International Conference on Signal Image Technology and Internet Based Systems, pp. 118-123, November 25-29, 2013.
- H-G. Kim, E.J. Lee, G-J. Yoon, S-D. Yang, E.C. Lee, and S.M. Yoon, "Illumination normalization for SIFT based finger vein authentication," Advances in Visual Computing, pp. 21-30, 2012.
- J.F. Yang, B. Zhang, and Y.H. Shi, "Scattering removal for finger-vein image restoration," Sensors, vol. 12, no. 3, pp. 3627-3640, March, 2012. https://doi.org/10.3390/s120303627
- E.C. Lee, H.C. Lee, and K.R. Park, "Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction," International Journal of Imaging Systems and Technology, vol. 19, no.3, pp.179-186, September, 2009. https://doi.org/10.1002/ima.20193
- E.C. Lee, H. Jung, and D. Kim, "New finger biometric method using near infrared imaging," Sensors, vol. 11, no.3, pp. 2319-2333, February, 2011. https://doi.org/10.3390/s110302319
- A. Petpon, and S. Srisuk, "Face recognition with local line binary pattern," in Proc. of 5th International Conference on Image and Graphics, pp. 533-539, September 20-23, 2009.
- A.R. Bakhtiar, W.S. Chai, and A.S. Shahrel, "Finger vein recognition using local line binary pattern," Sensors, vol. 11, no. 12, pp. 11357-11371, November, 2011.
- Y. Lu, S. Yoon, S.J. Xie, and D.S. Park, "Finger vein identification using polydirectional local line binary pattern," in Proc. of International Conference on ICT Convergence, Jeju Island, pp. 61-65, October 14-16, 2013.
- Y. Wang and J.K. Hu. "Global ridge orientation modeling for partial fingerprint identification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, pp. 72-87, January, 2011. https://doi.org/10.1109/TPAMI.2010.73
- Y. Lu, S.J. Xie, S. Yoon, J.C. Yang, and D.S. Park, "Robust finger vein ROI localization based on flexible segmentation," Sensors, vol. 13, no. 11, pp. 14339-14366, October, 2013. https://doi.org/10.3390/s131114339
- T. Ahonen, A. Hadid, and M. Pietikainen, "Face description with local binary patterns: Application to face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, December, 2006. https://doi.org/10.1109/TPAMI.2006.244
- L. Bui, D. Tran, X. Huang, and G. Chetty, "Novel metrics for face recognition using local binary patterns," Knowledge-Based and Intelligent Information and Engineering Systems, pp. 436-445, September, 2011.
- Y. Lu, S.J. Xie, Z.H. Wang, S. Yoon, and D.S. Park, "An available database for the research of finger vein recognition," in Proc. of 6th International Congress on Image and Signal Processing, pp. 410-415, December 16-18, 2013.
- Finger vein database MMCBNU_6000:
- X.J. Meng, G.P. Yang, Y.L. Yin, and Y. Xiao, "Finger vein recognition based on local directional code," Sensors, vol. 12, no. 11, pp. 14937-14952, November, 2012. https://doi.org/10.3390/s121114937
- J.F. Yang, and X. Li, "Efficient finger vein localization and recognition," in Proc. of 20th International Conference on Pattern Recognition, pp. 1148-1151, August 23-26, 2010.