• Title/Summary/Keyword: unimodal biometric

Technology Review on Multimodal Biometric Authentication (다중 생체인식 기반의 인증기술과 과제)

  • Cho, Byungchul; Park, Jong-Man
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.1, pp.132-141, 2015
  • Existing unimodal biometric authentication has been used mainly for user identification and recognition, and it can be weak at securing user authentication or verification in real-time service settings. Accordingly, it is essential to research and develop ways to improve security performance with multi-biometric, real-time authentication and verification technology. This paper suggests key tasks and a strategy for developing multimodal biometric authentication technology through an investigation of prior studies and patents. The description covers an introduction, a technology outline, technology trends, a patent analysis, and a conclusion.

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems, v.16 no.1, pp.6-29, 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers because it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when some modalities are missing.
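
To make the classification-and-fusion stage of this abstract concrete, below is a minimal Python sketch (not the authors' code): each modality gets a sparse-representation classifier whose class scores come from reconstruction residuals, and score-level fusion averages over whichever modalities are actually present for a probe. The use of scikit-learn's Lasso for sparse coding, the residual-based scoring, the simple averaging rule, and all variable names are illustrative assumptions; the detection stage and the supervised denoising auto-encoder features are omitted.

```python
# Sketch: modality-specific sparse-representation classification (SRC) with
# score-level fusion that skips modalities missing at test time.
import numpy as np
from sklearn.linear_model import Lasso

def src_scores(dictionary, labels, probe, alpha=0.01):
    """Class scores from an SRC: sparse-code the probe over the training
    dictionary (columns = training samples), then score each class by the
    negative reconstruction residual of that class's atoms."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(dictionary, probe)          # solves probe ~= dictionary @ code
    code = coder.coef_
    scores = {}
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(probe - dictionary[:, mask] @ code[mask])
        scores[c] = -residual             # smaller residual -> higher score
    return scores

def fuse_available(modality_scores):
    """Score-level fusion: average class scores over the modalities that are
    present for this probe; missing modalities (None) contribute nothing."""
    present = [s for s in modality_scores if s is not None]
    fused = {}
    for scores in present:
        for c, v in scores.items():
            fused[c] = fused.get(c, 0.0) + v / len(present)
    return max(fused, key=fused.get)      # identity with the best fused score

# Toy usage: two modalities for the same probe, the second one missing.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 20))                 # 20 training atoms, 64-dim features
y = np.repeat(np.arange(5), 4)                # 5 identities, 4 samples each
probe_frontal = D[:, 7] + 0.05 * rng.normal(size=64)   # noisy sample of identity 1
scores = [src_scores(D, y, probe_frontal), None]        # e.g., ear view missing
print("predicted identity:", fuse_available(scores))
```

Averaging only over the modalities that are present is what lets such a pipeline degrade gracefully when, for example, an ear view never appears in a probe clip, which mirrors the missing-modality robustness described in the abstract.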