• Title/Summary/Keyword: facial classification

Realistic Avatar Face Generation Using Shading Mechanism (음영합성 기법을 이용한 실사형 아바타 얼굴 생성)

  • Park Yeon-Chool
    • Journal of Internet Computing and Services / v.5 no.5 / pp.79-91 / 2004
  • This paper proposes an avatar face generation system that combines a shading mechanism with the facial feature extraction methods of face recognition. The proposed system automatically generates an avatar face resembling a human face, using facial features extracted from a photograph, and composites shading with those features; this allows it to produce more realistic, human-like avatar faces. The paper presents a new eye localization method, a facial feature extraction method, a classification method that minimizes retrieval time, an image retrieval method based on a similarity measure, and a realistic avatar face generation method that maps facial features onto a shaded face plane.
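
For illustration, here is a minimal Python sketch of the retrieval idea in this entry: bucket faces by a coarse class label to cut retrieval time, then rank candidates within the bucket by a similarity measure. The feature vectors, class labels, and Euclidean distance below are assumptions made for the sketch, not the paper's actual features or measure.

```python
# Minimal sketch: classification-bucketed similarity retrieval, assuming
# facial features are already encoded as fixed-length numeric vectors.
# Bucketing by a coarse class label stands in for the paper's
# "classification method for minimizing retrieval time".
import numpy as np
from collections import defaultdict

class FeatureIndex:
    def __init__(self):
        self.buckets = defaultdict(list)  # class label -> [(id, vector), ...]

    def add(self, item_id, label, vector):
        self.buckets[label].append((item_id, np.asarray(vector, float)))

    def query(self, label, vector, k=3):
        # Search only the bucket matching the query's coarse class, so
        # retrieval time scales with the bucket, not the whole database.
        vector = np.asarray(vector, float)
        scored = [(np.linalg.norm(vector - v), item_id)
                  for item_id, v in self.buckets[label]]
        return [item_id for _, item_id in sorted(scored)[:k]]

index = FeatureIndex()
index.add("face_001", "oval", [0.30, 0.55, 0.12])
index.add("face_002", "oval", [0.28, 0.60, 0.10])
index.add("face_003", "round", [0.45, 0.40, 0.20])
print(index.query("oval", [0.29, 0.57, 0.11], k=2))  # nearest oval faces
```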

Local Feature Based Facial Expression Recognition Using Adaptive Decision Tree (적응형 결정 트리를 이용한 국소 특징 기반 표정 인식)

  • Oh, Jihun;Ban, Yuseok;Lee, Injae;Ahn, Chunghyun;Lee, Sangyoun
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.2 / pp.92-99 / 2014
  • This paper proposes a facial expression recognition method based on a decision tree structure. Local features are extracted from facial expression images using an ASM (Active Shape Model) and LBP (Local Binary Pattern). Discriminant features derived from the local features classify every pairwise combination of facial expressions, and the number of correct classifications determines which expression pair and local region are assigned to each branch. Integrating these branch classifications generates the decision tree. Facial expression recognition based on this decision tree shows better recognition performance than a method that does not use it.
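
The pairwise selection step described above can be sketched as follows: for every pair of expression classes, score each local region by how many samples a simple threshold classifier gets right, and keep the best region for that pair. The regions, one-dimensional toy features, and midpoint-threshold rule are illustrative stand-ins for the paper's ASM/LBP features.

```python
# Minimal sketch: choose, per expression pair, the local region whose
# (toy, 1-D) feature yields the most correct classifications.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
regions = ["mouth", "eyes", "brow"]
classes = ["happy", "angry", "surprise"]
# toy data: samples[c][r] is a 1-D feature per region per class;
# later regions separate the classes more strongly by construction
samples = {c: {r: rng.normal(loc=i * (j + 1), scale=1.0, size=50)
               for j, r in enumerate(regions)}
           for i, c in enumerate(classes)}

def correct_count(a, b):
    # classify by a midpoint threshold between the two class means
    thr = (a.mean() + b.mean()) / 2.0
    lo, hi = (a, b) if a.mean() < b.mean() else (b, a)
    return int((lo < thr).sum() + (hi >= thr).sum())

for c1, c2 in combinations(classes, 2):
    scores = {r: correct_count(samples[c1][r], samples[c2][r]) for r in regions}
    best = max(scores, key=scores.get)
    print(f"{c1} vs {c2}: best region = {best} ({scores[best]}/100 correct)")
```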

Facial Image Type Classification and Shape Differences focus on 20s Korean Women (20대 한국여성의 얼굴이미지 유형과 형태적 특성)

  • Baek, Kyoung-Jin;Kim, Young-In
    • Journal of the Korean Society of Costume / v.64 no.3 / pp.62-76 / 2014
  • The purpose of this study is to classify the facial images and analyze the shape characteristics of Korean women in their 20s. Previous research and a survey targeting 220 university students in their 20s were used for the study; the experimental subjects were 20-24-year-old Korean women. The SPSS 12.0 statistics program was used to analyze the results, with factor analysis, Cronbach's α reliability analysis, and multidimensional scaling (MDS). The results of the study are as follows. First, the facial image types of Korean women in their 20s were classified into four categories: 'Youthfulness', 'Classiness', 'Friendliness', and 'Activeness'. Second, multidimensional scaling suggested two orthogonal dimensions for the facial image of Korean women: strong-soft and classy-friendly. Third, analysis of the basic statistics concerning the structural characteristics of the facial images showed differences in the structural characteristics that form them; in particular, significant differences appeared in items related to the forehead, eyebrows, eyes, and jaw.
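
As a rough illustration of the multidimensional scaling step, the sketch below embeds a small dissimilarity matrix over the four image types into two dimensions with scikit-learn's MDS. The dissimilarity values are invented for the example; the study's survey-derived matrix is not reproduced here.

```python
# Minimal sketch: MDS on a small dissimilarity matrix, placing facial
# image types on two dimensions as in the study's analysis.
import numpy as np
from sklearn.manifold import MDS

labels = ["Youthfulness", "Classiness", "Friendliness", "Activeness"]
# symmetric dissimilarity matrix (0 on the diagonal), made-up values
D = np.array([[0.0, 0.8, 0.4, 0.5],
              [0.8, 0.0, 0.7, 0.9],
              [0.4, 0.7, 0.0, 0.6],
              [0.5, 0.9, 0.6, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # 2-D coordinates, one row per image type
for name, (x, y) in zip(labels, coords):
    print(f"{name:>12}: ({x:+.2f}, {y:+.2f})")
```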

The relation between idiopathic scoliosis and the frontal and lateral facial form

  • Kim, Tae-Hwan;Kim, Joo-Hwan;Kim, Yae-Jin;Cho, Il-Sik;Lim, Yong-Kyu;Lee, Dong-Yul
    • The Korean Journal of Orthodontics / v.44 no.5 / pp.254-262 / 2014
  • Objective: The purpose of this study was to evaluate the relation between idiopathic scoliosis and facial deformity in the horizontal, vertical, and anteroposterior planes. Methods: A total of 123 female patients aged 14 years or older, who visited the Spine Clinic at the Department of Orthopedics, Korea University Guro Hospital for treatment of idiopathic scoliosis, were enrolled. Whole-spine anteroposterior and lateral radiographs were taken with the patient in a naturally erect position, and frontal and lateral cephalograms were taken in an erect position with the Frankfort horizontal line parallel to the floor. Scoliosis was classified according to the Cobb angle and the Lenke classification of six curve types. Cephalometric tracing in all cases was carried out with V-Ceph 5.5 by the same orthodontist. The Kruskal-Wallis test was performed to determine whether any relation existed between the groups of the idiopathic scoliosis classification and the cephalometric measurements of the frontal and lateral cephalograms. Results: The measurements revealed no significant association between the Cobb angle and the cephalometric measurements, or between the Lenke curve type and the cephalometric measurements. Conclusions: Based on the results of this study, no apparent relation was observed between the severity of scoliosis and facial form variations in idiopathic scoliosis patients.
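
The study's main statistical tool, the Kruskal-Wallis test, can be run in a few lines with SciPy. The group labels and measurement values below are illustrative, not the study's data.

```python
# Minimal sketch: Kruskal-Wallis test comparing one cephalometric
# measurement (e.g. an angle in degrees) across scoliosis groups.
from scipy.stats import kruskal

group1 = [81.2, 79.5, 80.1, 82.3, 78.9]  # e.g. one Lenke curve type
group2 = [80.7, 81.9, 79.8, 80.4, 82.0]
group3 = [79.9, 80.5, 81.1, 80.0, 79.3]

stat, p = kruskal(group1, group2, group3)
print(f"H = {stat:.3f}, p = {p:.3f}")
# p > 0.05 would be consistent with the study's finding of no significant
# association between curve classification and cephalometric measurements.
```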

A Study on the Characteristics of Facial Shape in Adult Women by Sasang Constitution Using Hyungsang Classification (형상분류를 이용한 성인여성의 체질별 안면형태 특징에 관한 연구)

  • Jeon, Soo-Hyung;Kim, Jong-Won
    • Journal of Sasang Constitutional Medicine / v.29 no.2 / pp.95-103 / 2017
  • Objectives: This study aimed to analyze the characteristics of facial shape in adult women by Sasang constitution using hyungsang classification. Methods: Using a digital camera, we photographed 1,011 women who participated in a clinical study on menstrual pain, and acquired their 3D facial images with a face-only scanner. The participants filled out the SSCQ-P (Sasang constitution questionnaire for patients) for the diagnosis of Sasang constitution. Based on the photographs and 3D images, a hyungsang medicine specialist made diagnoses according to five diagnostic criteria; the Sasang constitution was diagnosed by referring to the questionnaires and photographs. Frequency analysis was performed using the Statistical Analysis System version 9.4, and the chi-square test was performed for validity evaluation. Results: Among Taeeumin, wide face shapes (n=261, 74.36%) far outnumbered narrow shapes (n=90, 25.64%), and convex face profiles (n=164, 85.86%) far outnumbered concave profiles (n=27, 14.14%). Regardless of Sasang constitution, angular face shapes (n=501, 50%) were the most common, followed by oval shapes (n=317, 31.64%). Subjects with big ears (n=291, 29.19%) were the most common, while those with big eyes (n=104, 10.43%) were the least. Subjects with eyes and nose tip turned upward (n=615, 78.05%) were the most common, while those with eyes and nose tip turned downward (n=22, 2.79%) were the least. Conclusions: Most Korean adult women have an angular face, such as square or diamond, with slanted eyes and an upturned nose. Taeeumin women have a wide facial shape and a convex profile.
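
The chi-square test used for the validity evaluation can be sketched with SciPy as below. Only the Taeeumin wide/narrow counts come from the abstract; the other rows are invented to complete the contingency table.

```python
# Minimal sketch: chi-square test on a constitution-by-face-shape
# contingency table, mirroring the study's validity analysis.
import numpy as np
from scipy.stats import chi2_contingency

# rows: constitutions, columns: wide vs. narrow face shape
table = np.array([[261,  90],   # Taeeumin (counts from the abstract)
                  [120, 130],   # Soeumin (made-up counts)
                  [105, 115]])  # Soyangin (made-up counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```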

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expression. Our method optimizes the information gain heuristic of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal reasonable set of facial features, suggested by the information gain heuristic of the ID3 tree, to represent the geometric face model. For feature extraction, features are first detected and then carefully "selected": selection distinguishes features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1,728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating those with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, it gives reliable recognition rates, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
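
A minimal sketch of the ID3 information gain heuristic mentioned above: compute the entropy reduction each candidate feature's split yields, and prefer the feature with the largest gain. The toy motion features and expression labels are assumptions made for the example.

```python
# Minimal sketch: information gain, the split criterion behind ID3.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature_idx):
    base = entropy(labels)
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[feature_idx], []).append(label)
    # expected entropy after splitting on the feature
    remainder = sum(len(s) / len(labels) * entropy(s) for s in splits.values())
    return base - remainder

# each row: (mouth_motion, brow_motion); label: expression (toy data)
rows = [("high", "low"), ("high", "high"), ("low", "low"), ("low", "high")]
labels = ["happy", "happy", "neutral", "angry"]
for i, name in enumerate(["mouth_motion", "brow_motion"]):
    print(f"gain({name}) = {information_gain(rows, labels, i):.3f}")
```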

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state, in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1-3 min. In the data analysis, the temperature differences between the baseline and the emotional state were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition yielded a correct classification percentage of 62.7% across the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
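
The linear discriminant analysis step can be sketched with scikit-learn as below. The synthetic temperature-change features and the facial region list are stand-ins; the study's measured data are not reproduced.

```python
# Minimal sketch: LDA classifying emotions from facial temperature-change
# features, in the spirit of the study's analysis (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
emotions = ["anger", "fear", "boredom", "neutral"]
# features: temperature change (emotion minus baseline) at
# [eyes, mouth, glabella, forehead, nose, cheek], one row per participant
X = np.vstack([rng.normal(loc=-0.1 * (i + 1), scale=0.3, size=(40, 6))
               for i in range(len(emotions))])
y = np.repeat(emotions, 40)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print(f"training accuracy: {lda.score(X, y):.1%}")
```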

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.4 / pp.332-339 / 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges, owing to focusing problems and motion blurring. Multiple frames under varying spatial or temporal settings can acquire additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process, after the scores are normalized. The score validation process removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, and majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion blurring point-spread functions are applied to the test images, to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
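
The three-stage fusion process lends itself to a compact sketch: normalize the per-frame scores, validate by keeping only the most confident frames, then combine with one of the three rules named above. The normalization (per-frame standardization) and confidence criterion below are assumptions made for the sketch; the photon-counting LDA scoring itself is not reproduced.

```python
# Minimal sketch: decision-level fusion of multi-frame classifier scores
# via normalization, validation, and combination.
import numpy as np

def fuse(scores, rule="average", keep=0.5):
    """scores: (n_frames, n_classes) array of per-frame class scores."""
    s = np.asarray(scores, float)
    # 1) normalization: zero mean, unit variance within each frame
    s = (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)
    # 2) validation: keep the frames whose best score is most confident,
    #    discarding bad frames that could degrade the final output
    conf = s.max(axis=1)
    n_keep = max(1, int(keep * len(s)))
    s = s[np.argsort(conf)[::-1][:n_keep]]
    # 3) combination: maximum, averaging, or majority voting
    if rule == "max":
        return int(np.argmax(s.max(axis=0)))
    if rule == "average":
        return int(np.argmax(s.mean(axis=0)))
    if rule == "vote":
        votes = np.bincount(s.argmax(axis=1), minlength=s.shape[1])
        return int(np.argmax(votes))
    raise ValueError(rule)

frames = [[0.9, 0.2, 0.1], [0.7, 0.3, 0.2], [0.1, 0.8, 0.1], [0.8, 0.1, 0.3]]
for rule in ("max", "average", "vote"):
    print(rule, "->", fuse(frames, rule))  # predicted class index
```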

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) applications such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because feature positions are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
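
The LK optical flow half of the framework can be sketched with OpenCV's calcOpticalFlowPyrLK. A synthetic shifted square stands in for consecutive face frames, and its corners stand in for ASM-fitted landmarks; the ASM fitting and occlusion handling are not reproduced here.

```python
# Minimal sketch: Lucas-Kanade optical flow tracking between two frames.
import cv2
import numpy as np

# synthetic "face": a bright square that moves 3 px right, 2 px down
prev = np.zeros((120, 120), np.uint8)
cv2.rectangle(prev, (40, 40), (80, 80), 255, -1)
curr = np.zeros_like(prev)
cv2.rectangle(curr, (43, 42), (83, 82), 255, -1)

# "landmarks" to track: shape (N, 1, 2) float32, as calcOpticalFlowPyrLK expects
pts = np.array([[[40.0, 40.0]], [[80.0, 40.0]],
                [[40.0, 80.0]], [[80.0, 80.0]]], np.float32)

new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None, winSize=(21, 21), maxLevel=2)

for p0, p1, ok in zip(pts.reshape(-1, 2), new_pts.reshape(-1, 2), status.ravel()):
    if ok:  # status = 1 means the point was tracked successfully
        print(f"{p0} -> {p1}")  # expect roughly a (+3, +2) displacement
```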