• Title/Summary/Keyword: facial features

Search Results: 635

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan;Abdel-Mottaleb, Mohamed;Asfour, Shihab S.
    • Journal of Information Processing Systems / v.16 no.1 / pp.6-29 / 2020
  • Biometrics identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform the multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on a constrained facial video dataset (WVU) and an unconstrained facial video dataset (HONDA/UCSD) resulted in Rank-1 recognition rates of 99.17% and 97.14%, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
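
The final classification stage described above fuses per-modality recognition scores at the score level and tolerates missing modalities. The sketch below is a minimal, generic illustration of that idea (not the authors' implementation): it fuses hypothetical match scores by a weighted sum and simply skips modalities absent at test time. The modality names, scores, and weights are assumptions.

```python
import numpy as np

# Hypothetical per-subject match scores from each modality-specific classifier
# (higher = better match). "None" marks a modality missing at test time.
modality_scores = {
    "frontal_face":       np.array([0.91, 0.40, 0.35]),
    "left_profile_face":  np.array([0.85, 0.42, 0.30]),
    "right_profile_face": None,   # missing in this test clip
    "left_ear":           np.array([0.78, 0.55, 0.25]),
    "right_ear":          None,   # missing in this test clip
}

# Assumed fusion weights (could reflect per-modality reliability).
weights = {
    "frontal_face": 1.0, "left_profile_face": 0.8, "right_profile_face": 0.8,
    "left_ear": 0.6, "right_ear": 0.6,
}

def fuse_scores(scores, weights):
    """Weighted-sum score-level fusion that ignores missing modalities."""
    total, weight_sum = None, 0.0
    for name, s in scores.items():
        if s is None:                                     # modality not detected
            continue
        s = (s - s.min()) / (s.max() - s.min() + 1e-8)    # min-max normalize
        total = weights[name] * s if total is None else total + weights[name] * s
        weight_sum += weights[name]
    return total / weight_sum

fused = fuse_scores(modality_scores, weights)
print("fused scores:", fused, "-> identity index:", int(np.argmax(fused)))
```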

A Study on the Individual Authentication Using Facial Information For Online Lecture (가상강의에 적용을 위한 얼굴영상정보를 이용한 개인 인증 방법에 관한 연구)

  • 김동현;권중장
    • Proceedings of the IEEK Conference / 2000.11c / pp.117-120 / 2000
  • In this paper, we suggest an authentication system for online lectures that uses facial information and a face recognition algorithm based on facial feature relations. First, the facial area is detected on a complex background using color information. Second, features are extracted from the edge profile. Third, the extracted features are compared with those of the original facial image stored in the database. Experiments show that the proposed system is a useful authentication method for online lectures.
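
The first step above, locating the facial area on a complex background from color information, can be illustrated with a simple skin-color segmentation. The sketch below uses OpenCV with a commonly used YCrCb threshold; it is only a generic stand-in, since the paper's exact color model and edge-profile features are not specified.

```python
import cv2
import numpy as np

def detect_face_region(image_path):
    """Rough skin-color segmentation: returns the bounding box of the largest
    skin-colored connected component (a generic stand-in for color-based
    face area detection)."""
    img = cv2.imread(image_path)
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    # Assumed skin range in the Cr/Cb channels
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return x, y, w, h  # candidate facial area
```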

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems / v.12 no.3 / pp.60-68 / 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCs). Thereafter, the positions of the user's eyes are localized using a neural network (NN)-based texture classifier, and the facial features are then localized using heuristics. After detection of the facial features, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
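
The final step above classifies expressions with a decision tree over localized facial features. A minimal sketch with scikit-learn follows; the feature vector (e.g., mouth width/height ratio, brow raise) and the tiny training set are purely illustrative assumptions, not the paper's heuristic rules.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical heuristic measurements per face:
# [mouth width/height ratio, mouth-corner lift, eye openness, brow raise]
X_train = [
    [2.8,  0.30, 0.50,  0.10],   # happiness
    [1.9, -0.20, 0.30, -0.15],   # disgust
    [1.5,  0.00, 0.90,  0.40],   # surprise
    [2.0,  0.00, 0.50,  0.00],   # neutral
]
y_train = ["happiness", "disgust", "surprise", "neutral"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[2.7, 0.25, 0.55, 0.05]]))  # -> likely "happiness"
```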

Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery / v.1 no.2 / pp.57-61 / 2014
  • We present a robust 3D facial reconstruction method using a single image generated by a face-specific super-resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model based on it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. Based on the SR image, we extract facial features automatically and reconstruct a 3D facial model with basis shapes selected adaptively according to facial statistical data, in less than a few seconds. Thereby, we can provide facial images from various viewpoints, which cannot be obtained from a single camera viewpoint.
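
The reconstruction step, building a 3D model from statistically derived basis shapes fitted to extracted features, follows the general pattern of fitting a linear shape basis by least squares. The sketch below shows that generic pattern only; the mean shape, basis, and "observed" landmarks are random stand-ins, not the paper's data or its adaptive basis selection.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_basis = 68, 10            # assumed landmark and basis counts

mean_shape = rng.normal(size=(n_points * 3,))        # stand-in mean 3D shape
basis = rng.normal(size=(n_points * 3, n_basis))     # stand-in shape basis
observed = mean_shape + basis @ rng.normal(size=n_basis)  # "extracted" features

# Least-squares fit of basis coefficients: shape ~ mean + basis @ coeffs
coeffs, *_ = np.linalg.lstsq(basis, observed - mean_shape, rcond=None)
reconstructed = (mean_shape + basis @ coeffs).reshape(n_points, 3)
print("fitted coefficients:", np.round(coeffs, 3))
```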

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion characteristics using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.
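
Step (3) above compares curve features with the Hausdorff distance. As a minimal illustration (the curve points and templates below are assumptions, not the paper's data), the symmetric Hausdorff distance between two sampled curves can be computed with SciPy and used for nearest-template emotion assignment.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Hypothetical mouth-curve samples (x, y) for a test face and two templates
test_mouth = np.array([[0, 0], [1, 0.4], [2, 0.5], [3, 0.4], [4, 0]])
templates = {
    "happy":   np.array([[0, 0], [1, 0.5], [2, 0.7], [3, 0.5], [4, 0]]),
    "neutral": np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0.1], [4, 0]]),
}

emotion = min(templates, key=lambda k: hausdorff(test_mouth, templates[k]))
print("closest template:", emotion)
```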

Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.18 no.1 / pp.97-101 / 2019
  • Understanding and classifying human emotion are important tasks in interaction between humans and machine communication systems. This paper proposes a novel emotion recognition method that extracts facial keypoints using an Active Appearance Model and classifies the emotion with a proposed classification model of the facial features. The appearance model captures expression variations, which are evaluated by the proposed classification model as the facial expression changes. The proposed method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we assess the success ratio on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs well in emotion recognition compared to existing schemes.
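
One generic way to turn tracked keypoints into an emotion label, in the spirit of the abstract above though not the authors' exact classification model, is to measure keypoint displacements relative to a neutral frame and feed them to a simple classifier. The sketch below uses a nearest-neighbor classifier with made-up displacement features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def displacement_features(neutral_pts, current_pts):
    """Per-keypoint displacement (dx, dy) relative to the neutral frame,
    flattened into one feature vector."""
    return (np.asarray(current_pts) - np.asarray(neutral_pts)).ravel()

# Hypothetical 4-keypoint faces (mouth corners, brow centers)
neutral = [[10, 30], [20, 30], [12, 10], [18, 10]]
samples = {
    "happy":  [[9, 28], [21, 28], [12, 10], [18, 10]],   # mouth corners raised
    "sad":    [[10, 32], [20, 32], [12, 10], [18, 10]],  # mouth corners lowered
    "angry":  [[10, 30], [20, 30], [12, 12], [18, 12]],  # brows lowered
    "normal": neutral,
}

X = [displacement_features(neutral, pts) for pts in samples.values()]
y = list(samples.keys())

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
query = displacement_features(neutral, [[9, 28], [21, 29], [12, 10], [18, 10]])
print(clf.predict([query]))  # -> likely "happy"
```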

A Flexible Feature Matching for Automatic Facial Feature Points Detection (얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • Hwang, Suen-Ki;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.2 / pp.12-17 / 2010
  • An automatic facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to perform feature correspondence between models and the input image. This matching model works like a random diffusion process in the image space by employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach through its implementation in the system.
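
The node labels in the face graph are Gabor features ("jets"). As a rough illustration of how such labels can be built and compared (not the paper's matching procedure), the sketch below extracts responses of a small Gabor filter bank at a pixel and scores similarity with a normalized dot product; the filter parameters and file name are assumptions.

```python
import cv2
import numpy as np

def gabor_jet(gray, x, y, n_orient=8, ksize=21):
    """Magnitude responses of a small Gabor filter bank at pixel (x, y)."""
    jet = []
    for i in range(n_orient):
        theta = np.pi * i / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        jet.append(abs(resp[y, x]))
    return np.array(jet)

def jet_similarity(j1, j2):
    """Normalized dot product between two jets (1.0 = identical direction)."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-8))

# Usage (assumed file name): compare a model node with a candidate image point
# gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# print(jet_similarity(gabor_jet(gray, 50, 60), gabor_jet(gray, 52, 61)))
```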

Vector-based Face Generation using Montage and Shading Method (몽타주 기법과 음영합성 기법을 이용한 벡터기반 얼굴 생성)

  • 박연출;오해석
    • Journal of KIISE:Software and Applications / v.31 no.6 / pp.817-828 / 2004
  • In this paper, we propose a vector-based face generation system that uses montage and shading methods and preserves the designer's (artist's) style. The proposed system automatically generates a character's face similar to a human face using facial features extracted from a photograph. In addition, unlike previous face generation systems that use contours, the proposed system is based on color and composes the face from the facial features and shading extracted from a photograph. Thus, it can produce a more realistic face that is closer to the human face. Since the system is vector-based, the generated character's face has no size limits or constraints, so its shape can be transformed freely and various facial expressions can be applied to the 2D face. Moreover, it is distinguished from other approaches in that the artist's style is preserved in the result.

A Flexible Feature Matching for Automatic face and Facial feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식;손형경;정연길;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.608-612 / 2002
  • An automatic face and facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to perform feature correspondence between models and the input image. This matching model works like a random diffusion process in the image space by employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach through its implementation in a face identification system.

Robust Extraction of Heartbeat Signals from Mobile Facial Videos (모바일 얼굴 비디오로부터 심박 신호의 강건한 추출)

  • Lomaliza, Jean-Pierre;Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.20 no.1 / pp.51-56 / 2019
  • This paper proposes an improved heartbeat signal extraction method for ballistocardiography (BCG)-based heart-rate measurement in a mobile environment. First, from a mobile facial video, a handshake-free head motion signal is extracted by tracking facial features and background features at the same time. Then, a novel signal periodicity computation method is proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method robustly extracts heartbeat signals from mobile facial videos and enables more accurate heart-rate measurement (measurement errors were reduced by 3-4 bpm) compared to the existing method.
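
The periodicity step, isolating the heartbeat component of the head motion signal, can be illustrated in a generic way (not with the paper's novel periodicity method) by band-limiting the signal and picking the dominant FFT peak within a plausible heart-rate band. The frame rate and synthetic signal below are assumptions.

```python
import numpy as np

def heart_rate_from_motion(signal, fps, lo_bpm=45, hi_bpm=200):
    """Estimate heart rate (bpm) as the dominant FFT frequency of a head
    motion signal inside the plausible heart-rate band."""
    signal = np.asarray(signal) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)           # Hz
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic example: 10 s of 30 fps "head motion" with a 1.2 Hz (72 bpm)
# heartbeat component buried in slower head movement and noise.
fps = 30
t = np.arange(0, 10, 1 / fps)
motion = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.2 * t)
motion += 0.02 * np.random.default_rng(0).normal(size=t.size)
print(round(heart_rate_from_motion(motion, fps), 1), "bpm")  # ~72
```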