• Title/Abstract/Keywords: Biometric Recognition

Search results: 241

3D Face Recognition Based on Radial Basis Function Network

  • 양욱일;손광훈
    • 대한전자공학회논문지SP / Vol. 44, No. 2 / pp.82-92 / 2007
  • In this paper, we propose a new global shape feature for 3D face recognition, based on a radial basis function network, together with a method for extracting it. A radial basis function network is a weighted sum of radial basis functions, and it represents the nonlinearity of facial shape information well as a linear combination of those functions. We propose as the new global shape feature the weights generated when a horizontal profile of the face is applied to a trained radial basis function network. The proposed global feature has the characteristics of a local feature, while also reducing the feature complexity typical of conventional global features. In experiments using a database of 100 subjects and 300 test images covering three different poses per subject, feature comparison using the proposed global shape feature and a hidden Markov model achieved a recognition rate of 94.7%.
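
The core step, taking the weights of a radial basis function expansion of each horizontal face profile as a global shape feature, can be sketched with a plain least-squares fit. This is a minimal illustration, not the paper's trained-network formulation; the center count, kernel width, and function name are assumptions.

```python
import numpy as np

def rbf_profile_feature(profile, n_centers=12, sigma=0.08):
    """Fit a weighted sum of Gaussian RBFs to a 1-D horizontal facial
    profile; the fitted weights act as the global shape feature."""
    x = np.linspace(0.0, 1.0, len(profile))      # normalized sample positions
    centers = np.linspace(0.0, 1.0, n_centers)   # fixed, evenly spaced centers
    # Design matrix: one Gaussian basis function per center.
    phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))
    # Least-squares weights express the nonlinear profile as a linear
    # combination of basis functions; the weight vector is the feature.
    weights, *_ = np.linalg.lstsq(phi, np.asarray(profile, dtype=float), rcond=None)
    return weights
```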

Using Keystroke Dynamics for Implicit Authentication on Smartphone

  • Do, Son;Hoang, Thang;Luong, Chuyen;Choi, Seungchan;Lee, Dokyeong;Bang, Kihyun;Choi, Deokjai
    • 한국멀티미디어학회논문지 / Vol. 17, No. 8 / pp.968-976 / 2014
  • Authentication methods on smartphones should be implicit, demanding minimal interaction from users. Existing methods (e.g., PINs, passwords, visual patterns) do not effectively address memorability and privacy issues. Behavioral biometrics such as keystroke dynamics and gait can be acquired easily and implicitly using the sensors integrated into a smartphone. We propose a biometric model involving keystroke dynamics for implicit authentication on smartphones. We first design a feature extraction method for keystroke dynamics, and then build a fusion model of keystroke dynamics and gait to improve on the authentication performance of a single behavioral biometric. We perform fusion at both the feature extraction level and the matching score level. Experiments using a linear Support Vector Machine (SVM) classifier reveal that the best results are achieved with score fusion: a recognition rate of approximately 97.86% in identification mode and an error rate of approximately 1.11% in authentication mode.
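
A score-level fusion of the kind the experiment favors can be sketched as follows; the feature matrices, the 50/50 weighting, and the binary genuine/impostor labels are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def train_score_fusion(X_key, X_gait, y, w=0.5):
    """Train one linear SVM per behavioral modality and fuse their
    decision scores with a weighted sum (score-level fusion)."""
    svm_key = SVC(kernel="linear").fit(X_key, y)    # keystroke-dynamics features
    svm_gait = SVC(kernel="linear").fit(X_gait, y)  # gait features

    def fused_score(x_key, x_gait):
        s_key = svm_key.decision_function(np.atleast_2d(x_key))[0]
        s_gait = svm_gait.decision_function(np.atleast_2d(x_gait))[0]
        return w * s_key + (1.0 - w) * s_gait  # accept above a tuned threshold

    return fused_score
```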

Design of Low-Complexity Human Anxiety Classification Model Based on Machine Learning

  • 홍은재;박형곤
    • 전기학회논문지 / Vol. 66, No. 9 / pp.1402-1408 / 2017
  • Recently, services that analyze personal biometric data from real-time monitoring systems have been increasing, and many of them focus on emotion recognition. In this paper, we propose a model that classifies the anxiety emotion using biometric data actually collected from people, and we deploy a support vector machine to build it. To improve classification accuracy, we propose two data pre-processing procedures: normalization and data deletion. The proposed algorithms are implemented within a Real-time Traffic Flow Measurement structure consisting of a data collection module, a data pre-processing module, and a classification model creation module. Our experimental results show that the proposed model infers people's anxiety with an accuracy of 65.18%, and that with the proposed pre-processing techniques the accuracy improves to 78.77%. We therefore conclude that the proposed classification model combined with the pre-processing steps improves classification accuracy at low computational complexity.
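
The two pre-processing steps can be illustrated as below. The paper names normalization and data deletion but not their exact rules, so the z-scoring and the 3-sigma deletion cutoff here are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def preprocess(X, z_cut=3.0):
    """Normalization plus data deletion: z-score each feature, then drop
    samples that contain extreme values."""
    Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # normalization
    keep = (np.abs(Z) < z_cut).all(axis=1)              # data deletion
    return Z[keep], keep

# Low-complexity classifier trained on the cleaned data:
# Z, keep = preprocess(X_train)
# model = SVC(kernel="linear").fit(Z, y_train[keep])
```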

Development of Data Fusion Human Identification System Based on Finger-Vein Pattern-Matching Method and Photoplethysmography Identification

  • Ko, Kuk Won;Lee, Jiyeon;Moon, Hongsuk;Lee, Sangjoon
    • International Journal of Internet, Broadcasting and Communication / Vol. 7, No. 2 / pp.149-154 / 2015
  • Biometric techniques that authenticate using body parts such as the fingerprint, face, iris, voice, and finger vein, as well as photoplethysmography, have become increasingly important in the personal security field, including door access control, financial security, electronic passports, and mobile devices. Finger-vein images are now used for human identification; however, recognition is hampered by capture under varying conditions, such as different temperatures and illumination, and by noise in the acquisition camera. The photoplethysmography signal is likewise an important signal for human identification. In this paper, to increase the recognition rate, we develop a camera-based identification method that combines the finger-vein image with the photoplethysmography signal. We use a compact CMOS camera with a penetrating infrared LED light source to acquire the finger-vein images and the photoplethysmography signal, and we suggest a simple pattern-matching method to reduce computation time in embedded environments. The experimental results show that our simple system achieves good speed and accuracy for personal identification compared with using finger-vein images alone.
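
The abstract does not specify the simple pattern-matching method, so as one plausible embedded-friendly choice, here is a zero-mean normalized cross-correlation score between two registered, same-size finger-vein images.

```python
import numpy as np

def vein_match_score(template, probe):
    """Zero-mean normalized cross-correlation between two finger-vein
    images; returns 1.0 for identical patterns, near 0 for unrelated ones."""
    t = template - template.mean()
    p = probe - probe.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum()) + 1e-12
    return float((t * p).sum() / denom)
```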

Multibiometrics fusion using Aczél-Alsina triangular norm

  • Wang, Ning;Lu, Li;Gao, Ge;Wang, Fanglin;Li, Shi
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 8, No. 7 / pp.2420-2433 / 2014
  • Fusing the scores of multiple biometrics is a very promising approach to improving a system's overall accuracy and verification performance. Several approaches to score-level fusion of biometric systems have been studied in recent years; however, most of them do not consider the genuine and impostor score distributions and usually result in a higher equal error rate. In this paper, a novel score-level fusion approach for different biometric systems (dual iris, and thermal and visible face traits) based on the Aczél-Alsina triangular norm is proposed. It achieves higher identification performance while producing a closer genuine distance and a larger impostor distance. The experimental tests are conducted on a virtual multibiometric database that merges the challenging CASIA-Iris-Thousand database, with its noisy samples, and the NVIE face database of visible and thermal face images. The results show that significant performance improvement is achieved by the multibiometric fusion, and comparative experiments confirm that the proposed approach outperforms state-of-the-art verification performance.
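
The Aczél-Alsina triangular norm itself is standard: for 0 < λ < ∞ it fuses two normalized scores x, y ∈ (0, 1] as T(x, y) = exp(-[(-ln x)^λ + (-ln y)^λ]^(1/λ)). A direct transcription follows; the normalization of match scores into (0, 1] is assumed.

```python
import numpy as np

def aczel_alsina_tnorm(x, y, lam=1.0):
    """Aczel-Alsina t-norm; at lam = 1 it reduces to the product t-norm x*y."""
    x = np.clip(np.asarray(x, dtype=float), 1e-12, 1.0)
    y = np.clip(np.asarray(y, dtype=float), 1e-12, 1.0)
    return np.exp(-(((-np.log(x)) ** lam + (-np.log(y)) ** lam) ** (1.0 / lam)))
```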

A Study on FIDO UAF Federated Authentication Using JWT Token in Various Devices

  • 김형겸;김기천
    • 디지털산업정보학회논문지 / Vol. 16, No. 4 / pp.43-53 / 2020
  • There are three standards for FIDO1 authentication technology: Universal Second Factor (U2F), Universal Authentication Framework (UAF), and Client to Authenticator Protocols (CTAP). FIDO2 refers to the WebAuthn standard established by the W3C for creating and using credentials in web applications, complementing the existing CTAP. In Korea, the FIDO-certified market is dominated by UAF, which covers standards for the smartphone (Android, iOS) apps owned by the majority of the population. As the market came to require FIDO certification on PCs, the FIDO Alliance and the W3C established standards enabling certification on the platform-independent Web and published 『Web Authentication: An API for Accessing Public Key Credentials Level 1』 on March 4, 2019. However, most PCs have no biometric sensors, so contrary to expectations these standards see little use. In this paper, we present a model that allows login in a PC environment through biometric recognition on a smartphone combined with FIDO UAF authentication: the user requests login from the PC, performs FIDO authentication on the smartphone, and authentication is completed on the PC without any additional user gesture.
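
One way the JWT hand-off could look on the server, sketched with the PyJWT library; the claim set, token lifetime, and HS256 signing key are illustrative assumptions rather than the paper's specification.

```python
import time
import jwt  # PyJWT

SIGNING_KEY = "server-side-secret"  # hypothetical; load from a real key store

def issue_session_token(user_id, uaf_ok):
    """Once the smartphone's FIDO UAF assertion verifies, issue a
    short-lived JWT that the waiting PC session can redeem."""
    if not uaf_ok:
        raise PermissionError("FIDO UAF authentication failed")
    now = int(time.time())
    claims = {"sub": user_id, "amr": ["fido-uaf"], "iat": now, "exp": now + 300}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def redeem_session_token(token):
    """PC side: validates signature and expiry, then the login completes."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```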

Combining Feature Fusion and Decision Fusion in Multimodal Biometric Authentication

  • 이경희
    • 정보보호학회논문지 / Vol. 20, No. 5 / pp.133-138 / 2010
  • For multimodal biometric authentication using face and voice information, this paper proposes a multi-stage fusion method that performs fusion at the feature level and at the decision level together. A Support Vector Machine (SVM) is first built on the fused face-voice feature obtained from a first-stage fusion of the face and voice features; the decision of this fused-feature SVM verifier is then fused, in a second stage, with the decisions of the face SVM verifier and the voice SVM verifier to make the final authentication decision. Comparative experiments on feature-level fusion, decision-level fusion, and multi-stage fusion using the XM2VTS multimodal database show that the proposed multi-stage fusion gives the best authentication performance.
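
One plausible reading of the two-stage scheme in code; binary accept/reject labels and linear kernels are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_multistage(X_face, X_voice, y):
    """Stage 1: face SVM, voice SVM, and an SVM on the concatenated
    face+voice feature. Stage 2: an SVM over the three decision scores."""
    X_fused = np.hstack([X_face, X_voice])  # feature-level fusion
    stage1_inputs = (X_face, X_voice, X_fused)
    stage1 = [SVC(kernel="linear").fit(X, y) for X in stage1_inputs]
    scores = np.column_stack([m.decision_function(X)
                              for m, X in zip(stage1, stage1_inputs)])
    stage2 = SVC(kernel="linear").fit(scores, y)  # decision-level fusion
    return stage1, stage2
```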

Global Market Entry Strategy for Smart Personal Recognition-Based Non-Contact Thermometer Convergence Access Control System

  • 정재승;김형오
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2021년도 추계학술대회 / pp.673-675 / 2021
  • Biometric recognition is a next-generation information security technology offering reliability and convenience; the global market is growing, and demand for non-contact methods is rising with it. As a core technology of the post-COVID era, non-contact biometric access control systems also support contact-free and automated operation, and they are performing strongly not only in the domestic market but also overseas, including the United States, Europe, and the Middle East. Requirements to heed include inter-company cooperation in developing sensors tailored to IoT-based smart devices and in securing the hardware system, as well as approval from the U.S. Food and Drug Administration.


Robust Face Recognition Against Illumination Change Using Visible and Infrared Images

  • 김사문;이대종;송창규;전명근
    • 한국지능시스템학회논문지 / Vol. 24, No. 4 / pp.343-348 / 2014
  • Face recognition has the advantage that the recognition process provokes no resistance from the subject and runs automatically without any deliberate action. However, illumination changes in the capture environment degrade its recognition rate relative to other methods such as fingerprint or iris recognition. In this paper, we therefore propose a face recognition method robust to illumination change, using selective fusion of the wavelet bands of visible and infrared images based on fuzzy linear discriminant analysis. In the first step, the visible and infrared images are wavelet-transformed and divided into four bands. In the second step, the Euclidean distance between the training image and the test image is computed for each band. Third, recognition experiments are run in each band using these Euclidean distances, and weights are set according to the recognition rates of the four bands. Finally, the assigned weights are fused with the Euclidean distances of the corresponding bands to perform face recognition, yielding results robust to external variation.
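
The first two steps translate directly to code with PyWavelets; the Haar wavelet and the weighted-sum fusion of the final score are assumptions where the abstract leaves details open.

```python
import numpy as np
import pywt  # PyWavelets

def band_distances(train_img, test_img):
    """One-level 2-D wavelet transform into four sub-bands, then one
    Euclidean distance per band (steps 1 and 2 of the scheme)."""
    def bands(img):
        cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
        return (cA, cH, cV, cD)
    return np.array([np.linalg.norm(a - b)
                     for a, b in zip(bands(train_img), bands(test_img))])

# Steps 3 and 4: weight each band by its stand-alone recognition rate,
# then fuse: score = weights @ band_distances(train_img, test_img)
```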

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / Vol. 16, No. 1 / pp.6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique proves robust when some of the above modalities are missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically locate images of the different modalities in the facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) yielded Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
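
The feature-learning component can be sketched as a minimal denoising auto-encoder in PyTorch; the layer sizes and noise level are illustrative, and the paper's supervision and sparsity terms are omitted.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Corrupt the input modality image, train to reconstruct the clean
    version, and keep the hidden code as the learned feature."""
    def __init__(self, d_in=64 * 64, d_hid=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.dec = nn.Linear(d_hid, d_in)

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)  # corruption step
        code = self.enc(noisy)            # feature fed to the sparse classifier
        return self.dec(code), code

# Training objective: nn.MSELoss()(recon, x) against the clean input x.
```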