• Title/Summary/Keyword: appearance based face recognition

Search Results: 45

Collaborative Local Active Appearance Models for Illuminated Face Images (조명얼굴 영상을 위한 협력적 지역 능동표현 모델)

  • Yang, Jun-Young;Ko, Jae-Pil;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.816-824
    • /
    • 2009
  • In the face space, face images under illumination and pose variations have a nonlinear distribution. Active Appearance Models (AAM), being based on a linear model, have limitations in handling this nonlinear distribution. In this paper, we assume that a few clusters of face images are given; we build local AAMs for these clusters and then select a proper AAM model during the fitting phase. To solve the problem of updating fitting parameters among the models when the model changes, we propose to learn, in advance, the relationships among the clusters in the parameter space from the training images. In addition, we suggest a gradual model change to reduce improper model selections caused by serious fitting failures. In our experiments, we apply the proposed model to the Yale Face Database B and compare it with the previous method. The proposed method demonstrated successful fitting results on strongly illuminated face images with deep shadows.
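The model-selection step described above (choosing a local AAM for the current face) can be sketched as a nearest-cluster lookup in parameter space; the Euclidean distance measure and data layout below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def select_local_model(params, cluster_means):
    """Pick the local AAM whose cluster center is nearest to the current
    fitting parameters (a simplified stand-in for the paper's
    model-selection step)."""
    dists = [np.linalg.norm(params - m) for m in cluster_means]
    return int(np.argmin(dists))  # index of the local model to fit with
```

A gradual model change, as the abstract suggests, would switch to the returned model only after it stays nearest for several consecutive frames.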

A Study on Enhancing the Performance of Detecting Lip Feature Points for Facial Expression Recognition Based on AAM (AAM 기반 얼굴 표정 인식을 위한 입술 특징점 검출 성능 향상 연구)

  • Han, Eun-Jung;Kang, Byung-Jun;Park, Kang-Ryoung
    • The KIPS Transactions:PartB
    • /
    • v.16B no.4
    • /
    • pp.299-308
    • /
    • 2009
  • AAM (Active Appearance Model) is an algorithm that extracts face feature points using statistical models of shape and texture information based on PCA (Principal Component Analysis). It is widely used for face recognition, face modeling, and expression recognition. However, the detection performance of the AAM algorithm is sensitive to initial values, and detection error increases when an input image differs greatly from the training data. In particular, the algorithm shows high accuracy for closed lips, but detection error increases for open or deformed lips, depending on the user's facial expression. To solve these problems, we propose an improved AAM algorithm that uses lip feature points extracted by a new lip detection algorithm. In this paper, we select a search region based on the face feature points detected by the AAM algorithm. Lip corner points are then extracted using Canny edge detection and the histogram projection method within the selected search region. The lip region is accurately detected by combining the color and edge information of the lips in a search region adjusted according to the positions of the detected lip corners. On this basis, the accuracy and processing speed of lip detection are improved. Experimental results showed that the RMS (Root Mean Square) error of the proposed method was reduced by as much as 4.21 pixels compared to using the AAM algorithm alone.
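The histogram projection step for locating lip corners can be sketched as follows. The binary edge map is assumed to come from a Canny detector applied to the search region; the threshold and the corner-row heuristic are illustrative assumptions:

```python
import numpy as np

def lip_corners_by_projection(edge_map, thresh=1):
    """Locate left/right lip corners in a binary edge map of the lip
    search region via vertical histogram projection (thresh and the
    corner-row choice are illustrative assumptions)."""
    col_counts = edge_map.sum(axis=0)          # edges per column
    cols = np.where(col_counts >= thresh)[0]   # columns containing edges
    if cols.size == 0:
        return None                            # no edges found
    left_x, right_x = int(cols[0]), int(cols[-1])
    # take the strongest edge row in each outermost column as the corner y
    left_y = int(edge_map[:, left_x].argmax())
    right_y = int(edge_map[:, right_x].argmax())
    return (left_x, left_y), (right_x, right_y)
```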

Face Representation Based on Non-Alpha Weberface and Histogram Equalization for Face Recognition Under Varying Illumination Conditions (조명 변화 환경에서 얼굴 인식을 위한 Non-Alpha Weberface 및 히스토그램 평활화 기반 얼굴 표현)

  • Kim, Ha-Young;Lee, Hee-Jae;Lee, Sang-Goog
    • Journal of KIISE
    • /
    • v.44 no.3
    • /
    • pp.295-305
    • /
    • 2017
  • Facial appearance is greatly influenced by illumination conditions; illumination variation is therefore one of the factors that degrade the performance of face recognition systems. In this paper, we propose a robust method for face representation under varying illumination conditions that combines non-alpha Weberface (non-alpha WF) and histogram equalization. The method has two steps: (1) for a given face image, the non-alpha WF, which omits the parameter used in the conventional WF to adjust the intensity difference between neighboring pixels, is computed; (2) histogram equalization is applied to the non-alpha WF to produce a globally uniform histogram distribution and to enhance contrast. $(2D)^2PCA$ is then applied to extract low-dimensional discriminative features from the preprocessed face image. Experimental results on the extended Yale B face database and the CMU PIE face database show that the proposed method yielded better recognition rates than several illumination processing methods as well as the conventional WF, achieving average recognition rates of 93.31% and 97.25%, respectively.
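The two preprocessing steps can be sketched as below. The 8-neighborhood Weber operator and the `eps` offset (to avoid division by zero on dark pixels) are illustrative choices, and the WF output would need rescaling to 8-bit before equalization; the paper's exact formulation may differ:

```python
import numpy as np

def non_alpha_weberface(img, eps=1.0):
    """Non-alpha Weberface: arctan of the summed relative intensity
    differences to the 8 neighbors, with no alpha scaling parameter
    (eps is an illustrative guard against division by zero)."""
    I = img.astype(float) + eps
    p = np.pad(I, 1, mode='edge')
    acc = np.zeros_like(I)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            nbr = p[1 + dy:1 + dy + I.shape[0], 1 + dx:1 + dx + I.shape[1]]
            acc += (I - nbr) / I               # Weber ratio per neighbor
    return np.arctan(acc)

def hist_equalize(img, levels=256):
    """Global histogram equalization of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]
```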

Face Tracking and Recognition in Video with PCA-based Pose-Classification and (2D)2PCA recognition algorithm (비디오속의 얼굴추적 및 PCA기반 얼굴포즈분류와 (2D)2PCA를 이용한 얼굴인식)

  • Kim, Jin-Yul;Kim, Yong-Seok
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.5
    • /
    • pp.423-430
    • /
    • 2013
  • Typical face recognition systems prefer the frontal view of the face to reduce the complexity of recognition. Thus individuals may be required to stare into the camera, or the camera must be located so that frontal images are easily acquired. These constraints severely restrict the adoption of face recognition in many applications. To alleviate this problem, this paper addresses tracking and recognizing faces in video captured with no environmental control. The face tracker extracts a sequence of angle- and size-normalized face images using the IVT (Incremental Visual Tracking) algorithm, which is known to be robust to changes in appearance. Since no constraints are imposed between the face direction and the video camera, the face images contain various poses. The pose is therefore identified using a PCA (Principal Component Analysis)-based pose classifier, and only pose-matched face images are used to identify the person against a pre-built face DB with five poses. For face recognition, the PCA, (2D)PCA, and $(2D)^2PCA$ algorithms were tested to compare recognition rates and execution times.
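$(2D)^2PCA$, which this and the previous entry use for recognition, projects each image matrix from both the row and column directions rather than vectorizing it. A minimal sketch (the feature dimensions `kr`, `kc` are illustrative):

```python
import numpy as np

def two_d_squared_pca(images, kr, kc):
    """(2D)^2 PCA sketch: learn a column-direction projection X and a
    row-direction projection Z from training image matrices, then
    extract a small kr x kc feature matrix per image."""
    A = np.stack([im.astype(float) for im in images])
    mean = A.mean(axis=0)
    D = A - mean
    Gcol = sum(d.T @ d for d in D)   # column-direction (2DPCA) scatter
    Grow = sum(d @ d.T for d in D)   # row-direction (alternative 2DPCA) scatter
    _, Vc = np.linalg.eigh(Gcol)     # eigh returns ascending eigenvalues
    _, Vr = np.linalg.eigh(Grow)
    X = Vc[:, ::-1][:, :kc]          # top kc column eigenvectors
    Z = Vr[:, ::-1][:, :kr]          # top kr row eigenvectors
    def features(im):
        return Z.T @ (im.astype(float) - mean) @ X   # kr x kc feature matrix
    return features
```

Matching is then done on the small feature matrices (e.g. by Frobenius distance), which is what makes $(2D)^2PCA$ cheaper than vector PCA at comparable accuracy.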

Accurate Face Pose Estimation and Synthesis Using Linear Transform Among Face Models (얼굴 모델간 선형변환을 이용한 정밀한 얼굴 포즈추정 및 포즈합성)

  • Suvdaa, B.;Ko, J.
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.508-515
    • /
    • 2012
  • This paper presents a method that estimates the face pose of a given face image and synthesizes face images at arbitrary poses using the Active Appearance Model (AAM). The AAM, which has been successfully applied to various applications, is an example-based learning model that learns the variations of the training examples. However, a single model has difficulty handling large pose variations of face images. This paper proposes to build a model covering only a small range of angles for each pose. Then, with a proper model for a given face image, we can achieve accurate pose estimation and synthesis. When the model used for pose estimation was not trained on the angle to be synthesized, we solve the problem by learning the linear relationships between the models in advance. In experiments on the public Yale B face database, we present accurate pose estimation and pose synthesis results. For our own face database, which has large pose variations, we demonstrate successful frontal pose synthesis.
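The inter-model linear relationship mentioned above can be learned from paired parameter vectors (the same faces fitted by both models) with least squares; this is an illustrative stand-in, not the paper's exact training procedure:

```python
import numpy as np

def learn_linear_map(P_src, P_dst):
    """Least-squares linear map W between two models' parameter spaces,
    so that P_src @ W approximates P_dst. Rows of P_src/P_dst are paired
    parameter vectors from the same training faces."""
    W, *_ = np.linalg.lstsq(P_src, P_dst, rcond=None)
    return W
```

At synthesis time, parameters estimated with one pose model are mapped through `W` into the target pose model's space before rendering.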

Generation of Changeable Face Template by Combining Independent Component Analysis Coefficients (독립성분 분석 계수의 합성에 의한 가변 얼굴 생체정보 생성 방법)

  • Jeong, Min-Yi;Lee, Chel-Han;Choi, Jeung-Yoon;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.16-23
    • /
    • 2007
  • Changeable biometrics has been developed as a solution to the problem of enhancing security and privacy. The idea is to transform a biometric signal or feature into a new one for the purposes of enrollment and matching. In this paper, we propose a changeable biometric system that can be applied to appearance-based face recognition systems. In the feature extraction step, the ICA (Independent Component Analysis) coefficient vectors extracted from an input face image are replaced randomly using their mean and variance. The transformed vectors are then scrambled randomly, and a new transformed face coefficient vector (transformed template) is generated by combining the two transformed vectors. When this transformed template is compromised, it is replaced using new random numbers and a new scrambling rule. Because the transformed template is generated by the addition of two vectors, the original ICA coefficients cannot easily be recovered from the transformed coefficients.
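The scramble-and-add construction can be sketched as below; the split of the coefficient vector into two halves and the key-derived permutations are illustrative assumptions about the scheme, not the paper's exact design:

```python
import numpy as np

def changeable_template(coeffs, key):
    """Changeable-template sketch in the spirit of the paper: split the
    ICA coefficient vector in two, scramble each half with a key-derived
    permutation, and add the halves so the original coefficients cannot
    be read back directly. Revoking = issuing a new key."""
    rng = np.random.default_rng(key)        # key acts as the scrambling rule
    half = len(coeffs) // 2
    a, b = coeffs[:half], coeffs[half:half * 2]
    a = a[rng.permutation(half)]            # scramble first half
    b = b[rng.permutation(half)]            # scramble second half
    return a + b                            # combined, hard-to-invert sum
```

The same key reproduces the same template for matching; a compromised template is replaced by re-enrolling with a fresh key.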

Face Recognition Evaluation of an Illumination Property of Subspace Based Feature Extractor (부분공간 기반 특징 추출기의 조명 변인에 대한 얼굴인식 성능 분석)

  • Kim, Kwang-Soo;Boo, Deok-Hee;Ahn, Jung-Ho;Kwak, Soo-Yeong;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.7
    • /
    • pp.681-687
    • /
    • 2007
  • Face recognition has become very popular for personal information security and user identification in recent years. However, face recognition systems are difficult to implement because changes in illumination, pose, and facial expression alter face appearance. In this paper, considering that illumination changes cause large variations in face appearance, we generate virtual image data and add it to the training of D-LDA, which was selected as the most suitable feature extractor, yielding a recognition system that is less sensitive to illumination. The virtual training images are generated by modeling the effects of several illumination directions and of changes in illumination intensity. Experimental results on the ORL, Yale University, and Pohang University face databases show that D-LDA is less sensitive to illumination with this approach.
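Generating virtual training images that simulate illumination change can, in its simplest form, be sketched as global intensity scaling; the paper also models illumination direction, which is omitted here, and the scale factors are illustrative:

```python
import numpy as np

def virtual_illumination_images(img, factors=(0.5, 0.75, 1.25)):
    """Generate virtual training images simulating changes in
    illumination intensity by global scaling (directional lighting,
    which the paper also models, is omitted in this sketch)."""
    I = img.astype(float)
    return [np.clip(I * f, 0, 255).astype(np.uint8) for f in factors]
```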

A Face-Detection Postprocessing Scheme Using a Geometric Analysis for Multimedia Applications

  • Jang, Kyounghoon;Cho, Hosang;Kim, Chang-Wan;Kang, Bongsoon
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.13 no.1
    • /
    • pp.34-42
    • /
    • 2013
  • Human faces have been broadly studied in the digital image and video processing fields. An appearance-based method, the adaptive boosting learning algorithm using integral image representations, has been successfully employed for face detection, taking advantage of the low computational complexity of its feature extraction. In this paper, we propose a face-detection postprocessing method that equalizes instantaneous facial regions in an efficient hardware architecture for use in real-time multimedia applications. The proposed system requires few hardware resources and exhibits robust performance with respect to the movement, zooming, and classification of faces. A series of experimental results obtained using video sequences collected under dynamic conditions is discussed.
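The integral image (summed-area table) underlying the boosting-based detector lets any rectangle sum be computed with at most four lookups; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Integral image: ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return int(s)
```

This constant-time rectangle sum is what makes Haar-like feature evaluation cheap enough for real-time detection.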

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.10
    • /
    • pp.957-962
    • /
    • 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing non-rigid objects such as faces and facial expressions. The Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. Our approach to facial feature extraction combines the AAM with FACS (Facial Action Coding System) to automatically model and extract the facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and interpret the temporal phases of facial expressions in image sequences. The emotion recognition results can be used for biofeedback-based rehabilitation of the emotionally disabled.

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions:PartB
    • /
    • v.9B no.6
    • /
    • pp.853-862
    • /
    • 2002
  • Face detection can be defined as follows: given an arbitrary digitized image or image sequence, determine whether or not any human face is present in the image and, if so, return its location, direction, size, and so on. This technique underlies many applications such as face recognition, facial expression analysis, and head gesture recognition, and is one of their important quality factors. However, detecting faces in a given image is considerably difficult because facial expression, pose, face size, lighting conditions, and other factors change the overall appearance of faces, making it difficult to detect them rapidly and exactly. This paper therefore proposes a fast and exact face detection method that overcomes these restrictions by using a neural network. The proposed system can detect faces rapidly, regardless of facial expression, background, and pose. Face detection is performed by the neural network, and detection response time is shortened by reducing the search region and decreasing the computation time of the neural network. The search region is reduced using skin-color segmentation and frame differencing, and the neural network's computation time is decreased by reducing the size of its input vector; Principal Component Analysis (PCA) is used to reduce the dimensionality of the data. The pose is then estimated from the extracted facial image and the eye region is located, which provides more information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin-color segmentation, the success rate differed depending on the camera settings. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses affected eye-region detection. The experimental results show satisfactory detection rates and processing times for a real-time system.
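The squared Mahalanobis distance used to score the experiments is a short computation; a minimal sketch:

```python
import numpy as np

def squared_mahalanobis(x, mean, cov):
    """Squared Mahalanobis distance d^2 = (x - mu)^T Sigma^{-1} (x - mu);
    solve() is used instead of an explicit matrix inverse."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(d @ np.linalg.solve(cov, d))
```

Unlike the Euclidean distance, this weights each feature direction by the inverse covariance, so correlated or high-variance components count for less.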