
A Study on Biometric Model for Information Security

  • 김준영 (Major of IT-Bio Convergence System, Sunchon National University) ;
  • 정세훈 (Department of Computer Engineering, Sunchon National University) ;
  • 심춘보 (School of Artificial Intelligence Engineering, Sunchon National University)
  • Received : 2023.12.18
  • Accepted : 2024.02.17
  • Published : 2024.02.29

Abstract

Biometric recognition is a technology that verifies a person's identity by extracting physiological and behavioral characteristics with a dedicated device. In the field of biometrics, cyber threats such as forgery, duplication, and hacking of biometric traits are increasing. In response, security systems are being strengthened and made more complex, which makes them harder for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have proposed feature fusion methods, but comparisons among these methods are insufficient. Therefore, in this paper, we compared and evaluated fusion methods for a multimodal biometric model using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy with the 'Feature-Level' fusion method and high stability. However, because the EfficientNet-B7 model is large, research on model lightweighting is needed for biometric feature fusion.
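
As an illustration of the 'Feature-Level' fusion described above, the sketch below concatenates backbone features extracted from fingerprint, face, and iris images before a single classification head. It is a minimal example assuming PyTorch/torchvision and an EfficientNet-B7 backbone; the helper names (extract_features, FusionClassifier), the 224x224 input size, and the num_subjects placeholder are illustrative assumptions, not the paper's actual architecture or preprocessing.

```python
# Minimal sketch of "Feature-Level" fusion (assumes PyTorch + torchvision).
# The helper names and the classifier head are illustrative; they do not
# reproduce the paper's exact architecture or training setup.
import torch
import torch.nn as nn
from torchvision import models

# EfficientNet-B7 backbone used purely as a feature extractor.
# weights=None keeps the sketch offline; in practice weights="DEFAULT"
# would load ImageNet-pretrained weights.
backbone = models.efficientnet_b7(weights=None)
backbone.classifier = nn.Identity()  # expose the 2560-dim pooled feature vector
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, H, W) biometric images to (N, 2560) feature vectors."""
    return backbone(images)

class FusionClassifier(nn.Module):
    """Feature-Level fusion: concatenate per-modality features, then classify the subject."""
    def __init__(self, feat_dim: int = 2560, num_subjects: int = 100):  # num_subjects is a placeholder
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim * 3, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_subjects),
        )

    def forward(self, fingerprint, face, iris):
        # One joint representation per person, built from all three modalities.
        fused = torch.cat(
            [extract_features(fingerprint),
             extract_features(face),
             extract_features(iris)],
            dim=1,
        )
        return self.head(fused)

# Smoke test with random tensors standing in for preprocessed images.
model = FusionClassifier()
logits = model(torch.randn(2, 3, 224, 224),
               torch.randn(2, 3, 224, 224),
               torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 100])
```

By contrast, 'Score-Level' fusion keeps one classifier per modality and combines their match scores (for example, by averaging softmax outputs), while 'Rank-Level' fusion combines the ranked candidate lists from each modality, for example with a Borda count.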

Acknowledgement

This work was supported by the National University Development Project of Sunchon National University.
