• Title/Summary/Keyword: Facial Components

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha; Kang, Byung-Jun; Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.15 no.1 / pp.40-50 / 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on the effects of other facial components, such as the eyebrows and chin, on recognition performance. We measured the recognition accuracy affected by these facial components and compared the differences between computer-based and human-based facial recognition methods. This research is novel in the following four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and compared the accuracy of computer-based face recognition to that of human-based face recognition according to facial components. Second, for computer-based recognition, facial components were automatically detected using the Adaboost algorithm and an active appearance model (AAM), and user authentication was achieved with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features (when including eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas consistent inclusion of certain features, such as the chin area, had more influence on the accuracy of computer-based face recognition, because a computer uses the pixel values of facial images when classifying faces. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition; however, the problem of occlusion by hair must be solved before the eyebrow feature can be used for face recognition.
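
For readers who want to experiment with the PCA matching stage described in this abstract, the sketch below shows a minimal eigenface-style pipeline in Python: PCA is fit on flattened gallery faces and a probe is identified by nearest neighbor in the subspace. The Adaboost/AAM component-detection stage is omitted, and the function names (`fit_pca`, `project`, `identify`) are illustrative, not taken from the paper.

```python
import numpy as np

def fit_pca(train_faces, n_components=20):
    """Learn an eigenface basis from flattened grayscale faces (one row each)."""
    X = np.asarray(train_faces, dtype=np.float64)
    mean = X.mean(axis=0)
    # Rows of vt are the principal directions of the centered gallery.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, basis):
    """Project one flattened face into the PCA subspace."""
    return basis @ (np.asarray(face, dtype=np.float64) - mean)

def identify(probe, gallery, mean, basis):
    """Return the index of the gallery face closest to the probe in PCA space."""
    p = project(probe, mean, basis)
    dists = [np.linalg.norm(p - project(g, mean, basis)) for g in gallery]
    return int(np.argmin(dists))
```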

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo; Shin, Hyun-Min; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.93-100 / 2011
  • The need for research on extracting face and facial-component information from animation characters has been increasing, since these features effectively express a character's emotion and personality. In this paper, we introduce a method to extract the face and facial components of animation characters by defining a mesh model suited to such characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors for the face and facial components by adapting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are actual faces or facial components by means of a color-similarity measure. The experimental results show that our method can reliably detect the face and facial components of animation characters.
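
As a rough illustration of the dominant-color idea, the sketch below quantizes a region's RGB pixels, takes the most frequent color cell as the dominant color, and then marks candidate pixels within a color-distance tolerance. The mesh-model fitting and the verification step are omitted; the names, bin count, and tolerance are assumptions, not the paper's values.

```python
import numpy as np

def dominant_color(region_pixels, n_bins=16):
    """Return the center of the most frequent quantized RGB cell.

    region_pixels: (N, 3) array of RGB values sampled from a face or
    facial-component region of a model character.
    """
    step = 256 // n_bins
    q = np.asarray(region_pixels) // step
    codes = (q[:, 0] * n_bins + q[:, 1]) * n_bins + q[:, 2]
    mode = np.bincount(codes.astype(int)).argmax()
    cell = np.array([mode // (n_bins * n_bins), (mode // n_bins) % n_bins, mode % n_bins])
    return (cell + 0.5) * step

def candidate_mask(image, dom_color, tol=40.0):
    """Mark pixels whose Euclidean RGB distance to the dominant color is below tol."""
    diff = image.astype(np.float64) - np.asarray(dom_color, dtype=np.float64)
    return np.linalg.norm(diff, axis=-1) < tol
```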

A Study on Local Micro Pattern for Facial Expression Recognition (얼굴 표정 인식을 위한 지역 미세 패턴 기술에 관한 연구)

  • Jung, Woong Kyung; Cho, Young Tak; Ahn, Yong Hak; Chae, Ok Sam
    • Convergence Security Journal / v.14 no.5 / pp.17-24 / 2014
  • This study proposes LDP (Local Directional Pattern) as a new local micro-pattern for facial expression recognition, addressing the noise sensitivity of LBP (Local Binary Pattern). The proposed method extracts 8-directional components using an m×m mask and keeps the k largest responses; each chosen direction is marked with a 1 bit and the rest with 0, and the resulting 8-bit sequence over the directional components forms the pattern code. The results show better robustness to rotation and noise. Based on the proposed method, a new local facial feature can also be developed that represents both PFFs (Permanent Facial Features) and TFFs (Transient Facial Features).
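
A minimal sketch of the LDP coding step follows, using the standard 3×3 Kirsch compass masks as one common choice for the m×m directional masks; the paper's exact masks and value of k may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# Kirsch compass masks: one 3x3 edge mask per direction.
KIRSCH = [
    np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]]),   # East
    np.array([[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]]),   # North-East
    np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]]),   # North
    np.array([[5, 5, -3], [5, 0, -3], [-3, -3, -3]]),   # North-West
    np.array([[5, -3, -3], [5, 0, -3], [5, -3, -3]]),   # West
    np.array([[-3, -3, -3], [5, 0, -3], [5, 5, -3]]),   # South-West
    np.array([[-3, -3, -3], [-3, 0, -3], [5, 5, 5]]),   # South
    np.array([[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]]),   # South-East
]

def ldp_code(image, k=3):
    """LDP code image: set one bit for each of the k strongest directional responses."""
    img = np.asarray(image, dtype=np.float64)
    responses = np.stack([np.abs(convolve(img, m)) for m in KIRSCH])
    order = np.argsort(responses, axis=0)   # direction indices, weakest first
    code = np.zeros(img.shape, dtype=np.uint8)
    for rank in range(8 - k, 8):            # the k strongest directions per pixel
        code |= (1 << order[rank]).astype(np.uint8)
    return code
```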

Local Appearance-based Face Recognition Using SVM and PCA (SVM과 PCA를 이용한 국부 외형 기반 얼굴 인식 방법)

  • Park, Seung-Hwan; Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.54-60 / 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small areas and extracts features from each area using statistical analysis. The identity of a face image is then decided by integrating the per-area classification results with a voting scheme. The conventional local appearance-based method divides face images into small pieces and uses all the pieces in the recognition process. In this paper, we propose a local appearance-based method that makes use of only the relatively important facial components. The proposed method detects the facial components, such as the eyes, nose, and mouth, that differ considerably from person to person, locating them precisely using support vector machines (SVM). Based on the detected facial components, a number of small images containing the facial parts are constructed. Features are then extracted from each facial-component image using principal component analysis (PCA). We compared the performance of the proposed method with that of conventional methods. The results show that the proposed method outperforms the conventional local appearance-based method while preserving its advantages.
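
The per-component matching and voting described above can be sketched as follows; the SVM localization of the components is omitted, and the component patches are assumed to be pre-cropped and flattened. All names in the sketch are illustrative.

```python
import numpy as np

def train_component_models(gallery, n_components=10):
    """Fit one PCA subspace per facial component and store enrolled features.

    gallery: dict mapping a component name (e.g. 'eye', 'nose', 'mouth') to an
    array of flattened patches, one row per enrolled person, with the same
    row order across components.
    """
    models = {}
    for name, patches in gallery.items():
        X = np.asarray(patches, dtype=np.float64)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = vt[:n_components]
        models[name] = (mean, basis, (X - mean) @ basis.T)  # enrolled features
    return models

def identify_by_vote(probe, models):
    """Classify each component by nearest neighbor, then majority-vote the identity."""
    votes = []
    for name, (mean, basis, enrolled) in models.items():
        f = basis @ (np.asarray(probe[name], dtype=np.float64) - mean)
        votes.append(int(np.argmin(np.linalg.norm(enrolled - f, axis=1))))
    return max(set(votes), key=votes.count)
```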

Facial-feature Detection in Color Images using Chrominance Components and Mean-Gray Morphology Operation (색도정보와 Mean-Gray 모폴로지 연산을 이용한 컬러영상에서의 얼굴특징점 검출)

  • 강영도; 양창우; 김장형
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.714-720 / 2004
  • In detecting human faces in color images, additional geometric computation is often necessary to validate face-candidate regions of various shapes. In this paper, we propose a method that detects facial features using chrominance components, which are not affected by partial occlusion or face orientation. The proposed algorithm uses the property that the Cb and Cr components show consistent differences around the facial features, especially the eye area. We designed a Mean-Gray morphology operator to emphasize the feature areas in the eye-map image generated from these basic chrominance differences. Experimental results show that this method can effectively detect the facial features across various face-candidate regions.
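
The sketch below illustrates the chrominance-difference idea with OpenCV. The Cb − Cr eye map is a simplification of the paper's construction, and `mean_gray_emphasis` is only an illustrative stand-in for the authors' custom Mean-Gray morphology operator, approximated here as grayscale dilation minus a local mean.

```python
import numpy as np
import cv2

def chrominance_eye_map(bgr_image):
    """Build a simple eye-emphasis map from the Cb/Cr difference.

    Eye regions tend to have high Cb and low Cr, so Cb - Cr responds
    strongly around the eye area.
    """
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    emap = cb - cr
    return cv2.normalize(emap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def mean_gray_emphasis(emap, ksize=7):
    """Illustrative stand-in for the Mean-Gray morphology operator:
    grayscale dilation minus the local mean, which boosts small bright blobs."""
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(emap, kernel)
    local_mean = cv2.blur(emap, (ksize, ksize))
    return cv2.subtract(dilated, local_mean)
```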

Geometrical Feature-Based Detection of Pure Facial Regions (기하학적 특징에 기반한 순수 얼굴영역 검출기법)

  • 이대호; 박영태
    • Journal of KIISE: Software and Applications / v.30 no.7_8 / pp.773-779 / 2003
  • Locating the exact positions of facial components is a key preprocessing step for realizing highly accurate and reliable face recognition schemes. In this paper, we propose a simple but powerful method for detecting isolated facial components, such as the eyebrows, eyes, and mouth, that are horizontally oriented and have relatively dark gray levels. The method is based on shape-resolving locally optimum thresholding, which can guarantee isolated detection of each component. We show that pure facial regions can be determined by grouping facial features that satisfy simple geometric constraints derived from the structure of the face. In a test on over 1000 images from the AR face database, pure facial regions were detected correctly for every face image without glasses. A few errors occurred for face images with thick-framed glasses, because the eyebrow pairs were occluded. Because it can handle rotational and translational variations, the proposed scheme may be best suited as a front end for a later classification stage using either mappings or template matching.
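
A rough sketch of the component-isolation idea follows. Global Otsu thresholding stands in for the paper's shape-resolving locally optimum thresholding, and the geometric filter simply keeps dark, wide, flat blobs; the threshold values are illustrative.

```python
import numpy as np
import cv2

def dark_horizontal_components(gray, max_height=20, min_aspect=1.5):
    """Find isolated dark, horizontally oriented blobs
    (eyebrow/eye/mouth candidates) in an 8-bit grayscale image.
    """
    # Invert so dark features become foreground; Otsu picks the threshold.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, _ = stats[i]
        # Keep components that are flat and wide, like eyebrows, eyes, mouths.
        if h <= max_height and w / max(h, 1) >= min_aspect:
            boxes.append((x, y, w, h))
    return boxes
```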

Analysis and Synthesis of Facial Images for Age Change (나이변화를 위한 얼굴영상의 분석과 합성)

  • 박철하; 최창석; 최갑석
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.9 / pp.101-111 / 1994
  • The human face provides a great deal of information about a person's race, age, sex, personality, feelings, psychology, mental state, health condition, and so on. If we pay close attention to the aging process, we find recognizable phenomena such as drooping eyelids, drooping cheeks, forehead wrinkles, hair loss, and graying hair. This paper proposes a method to estimate age by analyzing these feature components of a facial image, and also introduces a method of facial image synthesis according to the change of age. The age-related feature components are obtained by separating the facial image into the 3-dimensional shape of the face and the facial texture using a 3-dimensional model, and then performing principal component analysis on each. We estimate the age of a facial image by comparing the extracted feature components against the image, and synthesize an age-changed image by adding the feature components to, or subtracting them from, the facial image. As a result of this simulation, we obtained high-quality age-changed facial images.
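
As a crude, purely illustrative analogue of the analysis/synthesis loop described above (the paper separates 3-dimensional shape and texture with a 3-D model, which is omitted here), the sketch below estimates an "aging direction" from two groups of flattened face textures and shifts a face along it; all names and the mean-difference axis are assumptions.

```python
import numpy as np

def age_axis(young_faces, old_faces):
    """Estimate an aging direction as the mean difference between groups
    (a crude stand-in for the paper's principal component analysis)."""
    return np.mean(old_faces, axis=0) - np.mean(young_faces, axis=0)

def synthesize_age(face, axis, amount):
    """Shift a flattened face texture along the aging axis:
    a positive amount ages the face, a negative amount rejuvenates it."""
    return np.clip(face + amount * axis, 0, 255)

def age_score(face, mean_face, axis):
    """Project onto the aging axis to get a relative age score."""
    return float((face - mean_face) @ axis / (axis @ axis))
```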

Facial Expression Explorer for Realistic Character Animation

  • Ko, Hee-Dong; Park, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.161-164 / 1998
  • This paper describes the Facial Expression Explorer, a tool that searches for the components of a facial expression and maps the expression onto other expressionless figures, such as a robot, frog, teapot, or rabbit. In general, creating a facial expression manually is a time-consuming and laborious job, especially when the expression must personify a well-known public figure or an actor. To extract a blending ratio from facial images automatically, the Facial Expression Explorer uses a Networked Genetic Algorithm (NGA), a GA variant with fast convergence. Animators often use such blending ratios to create facial expressions through shape-blending methods. With the Facial Expression Explorer, a realistic facial expression can be modeled more efficiently.
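
The blending-ratio search can be illustrated with a plain genetic algorithm over landmark shapes, as sketched below; the networked variant (NGA) and the image-based fitness used in the paper are omitted, and the population size, mutation scale, and other parameters are illustrative.

```python
import numpy as np

def blend(key_shapes, weights):
    """Shape blending: a weighted sum of key expression shapes (landmark arrays)."""
    return np.tensordot(weights, np.asarray(key_shapes), axes=1)

def ga_blending_ratio(key_shapes, target, pop=60, gens=200, seed=0):
    """Search for blending weights that reproduce a target expression."""
    rng = np.random.default_rng(seed)
    n = len(key_shapes)
    P = rng.random((pop, n))
    P /= P.sum(axis=1, keepdims=True)  # weights sum to one
    for _ in range(gens):
        err = np.array([np.linalg.norm(blend(key_shapes, w) - target) for w in P])
        elite = P[np.argsort(err)[: pop // 2]]                        # selection
        children = (elite + elite[rng.permutation(len(elite))]) / 2   # crossover
        children += rng.normal(0, 0.02, children.shape)               # mutation
        P = np.clip(np.vstack([elite, children]), 0, None)
        P /= P.sum(axis=1, keepdims=True)
    err = [np.linalg.norm(blend(key_shapes, w) - target) for w in P]
    return P[int(np.argmin(err))]
```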

Evaluation of the mandibular asymmetry using the facial photographs and the radiographs (방사선사진과 안모사진을 이용한 하악 비대칭의 평가)

  • Lee, Sul-Mi
    • Imaging Science in Dentistry / v.31 no.4 / pp.199-204 / 2001
  • Purpose: To assess the relationship between soft-tissue asymmetry and bone-tissue asymmetry using standardized photographs and posteroanterior (PA) cephalometric radiographs in patients with mandibular asymmetry, and to clarify that a lack of morphologic balance among skeletal components can often be masked by compensatory soft-tissue contributions. Methods: The experimental group consisted of 58 patients whose chief complaint was facial asymmetry; standardized facial photographs and PA cephalometric radiographs were taken of each. The control group consisted of 30 persons with normal occlusion. The reproducibility of the facial photographs was confirmed by a model test. The differences in fractional vertical height and horizontal width measured from the standardized facial photographs and the PA cephalometric radiographs were compared and analyzed. Results: The difference in fractional vertical bone height was 0.63 and in fractional vertical soft-tissue height 0.58 in the control group, versus 3.10 and 2.01, respectively, in the asymmetric group. The difference in fractional horizontal bone width was 0.52 and in fractional horizontal soft-tissue width 0.70 in the control group, versus 2.51 and 1.70, respectively, in the asymmetric group. Both soft and bone tissue showed significant differences between the control and asymmetric groups (p<0.05). The difference in bone tissue was greater than that in soft tissue (p<0.05) in the experimental group, but not in the control group. Conclusions: Soft-tissue components may compensate for underlying skeletal imbalances.

Facial Expression Classification through Covariance Matrix Correlations

  • Odoyo, Wilfred O.; Cho, Beom-Joon
    • Journal of Information and Communication Convergence Engineering / v.9 no.5 / pp.505-509 / 2011
  • This paper attempts to classify known facial expressions and to establish the correlations between two regions (eyes + eyebrows, and mouth) in identifying the six prototypic expressions. Covariance is used to describe region texture, capturing the facial features used for classification. The captured texture exhibits the patterns observed during the execution of particular expressions. Feature matching is done by a simple distance measure between the probe and the modeled representations of the eye and mouth components. We use the JAFFE database in this experiment to validate our claim. A high classification rate is observed for the mouth component and for the correlation between the two (eye and mouth) components. The eye component exhibits a lower classification rate when used independently.
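
The region-covariance descriptor and nearest-model matching can be sketched as below. The per-pixel feature set (intensity, position, first derivatives) and the Frobenius-norm distance are common choices for covariance descriptors, not necessarily the paper's exact ones.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of an image region over simple per-pixel
    features: intensity, x/y position, and first derivatives."""
    patch = np.asarray(patch, dtype=np.float64)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    gy, gx = np.gradient(patch)
    feats = np.stack([patch, xs, ys, gx, gy], axis=-1).reshape(-1, 5)
    return np.cov(feats, rowvar=False)

def cov_distance(c1, c2):
    """Frobenius-norm distance between covariance descriptors (a simple
    stand-in for the paper's unspecified distance measure)."""
    return np.linalg.norm(c1 - c2)

def classify_expression(probe_patch, models):
    """Nearest-model classification; models maps expression label -> descriptor."""
    c = region_covariance(probe_patch)
    return min(models, key=lambda label: cov_distance(c, models[label]))
```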