Title/Summary/Keyword: Facial components


The affective components of facial beauty (아름다운 얼굴의 감성적 특징)

  • 김한경;박수진;정찬섭
    • Science of Emotion and Sensibility, v.7 no.1, pp.23-28, 2004
  • In this paper, we investigated the affective components of facial beauty. In study 1, a factor analysis of affective evaluations of faces showed that about 65% of the variance was explained by only two factors, which were named 'sharp' and 'soft'. In study 2, the correlations between facial beauty and the affective evaluations were analyzed, and the correlation between facial beauty and the sharp factor was significant. In study 3, we created new images by morphing and warping faces: 'average', 'high-ranked', and 'exaggerated'. Participants rated the 'high-ranked' face as more beautiful than the 'average' face, and the 'exaggerated' face as more beautiful than the 'high-ranked' face. Ratings of affective words on the faces showed that the 'average' face was associated with a 'soft' impression, the 'high-ranked' face with a 'sharp' impression, and the 'exaggerated' face possibly with both. These results may support the directional hypothesis of facial beauty.
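
The dimensionality reduction in study 1 can be sketched in a few lines. The sketch below uses a hypothetical `ratings` matrix (faces rated on affective adjective scales) and shows PCA variance ratios alongside a two-factor solution; the paper reports a factor analysis, so this illustrates the idea rather than reproducing its procedure.

```python
# Minimal sketch of the two-factor analysis in study 1 (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical ratings: 100 faces rated on 10 affective adjective scales.
ratings = rng.normal(size=(100, 10))

# How much variance the first two components capture (the paper reports ~65%).
pca = PCA(n_components=2).fit(ratings)
print("variance explained by two components:", pca.explained_variance_ratio_.sum())

# Two-factor solution; loadings show which adjectives define each factor
# (named 'sharp' and 'soft' in the paper).
fa = FactorAnalysis(n_components=2).fit(ratings)
print("factor loadings:\n", fa.components_)
```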

Recognition of Facial Expressions of Animation Characters Using Dominant Colors and Feature Points (주색상과 특징점을 이용한 애니메이션 캐릭터의 표정인식)

  • Jang, Seok-Woo;Kim, Gye-Young;Na, Hyun-Suk
    • The KIPS Transactions:PartB, v.18B no.6, pp.375-384, 2011
  • This paper suggests a method to recognize the facial expressions of animation characters by means of dominant colors and feature points. The proposed method defines a simplified mesh model suited to animation characters and detects the face and its facial components using dominant colors. It also extracts edge-based feature points for each facial component, classifies these feature points into the corresponding AUs (action units) through a neural network, and finally recognizes character facial expressions with the suggested AU specification. Experimental results show that the suggested method can recognize the facial expressions of animation characters reliably.
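
The abstract does not spell out how the dominant colors are obtained; a common way to approximate such a step is k-means clustering over pixels, sketched below with illustrative parameters (`k` and the dummy input image are assumptions, not the paper's settings).

```python
# Sketch: dominant colors of a character image via k-means over pixels
# (a plausible reading of the dominant-color step, not the paper's exact method).
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(image_rgb: np.ndarray, k: int = 4) -> np.ndarray:
    """Return the k cluster-center colors, most frequent first."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_))   # sort clusters by pixel count
    return km.cluster_centers_[order].astype(np.uint8)

# Usage with a dummy image; real input would be a character face crop.
print(dominant_colors(np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)))
```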

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal, v.37 no.6, pp.1199-1210, 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention over the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
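
The group-decision idea can be illustrated minimally: several AU detectors vote, a majority decides AU presence, and a rule table maps AU sets to an emotion. The detector outputs and the `EMOTION_RULES` table below are toy values, not the paper's.

```python
# Sketch of the fusion idea: majority vote over AU detectors, then a lookup
# from AU combinations to an emotion.
from collections import Counter

def fuse_au_decisions(decisions: list[dict[str, bool]]) -> set[str]:
    """Each detector votes on AU presence; keep AUs a majority agrees on."""
    votes = Counter(au for d in decisions for au, present in d.items() if present)
    return {au for au, n in votes.items() if n > len(decisions) / 2}

EMOTION_RULES = {  # toy mapping in the spirit of FACS-based tables
    frozenset({"AU6", "AU12"}): "happiness",
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
}

def infer_emotion(aus: set[str]) -> str:
    for rule, emotion in EMOTION_RULES.items():
        if rule <= aus:                  # all AUs of the rule were detected
            return emotion
    return "neutral"

detectors = [{"AU6": True, "AU12": True}, {"AU6": True, "AU12": True}, {"AU6": False, "AU12": True}]
print(infer_emotion(fuse_au_decisions(detectors)))   # -> happiness
```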

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association, v.14 no.11, pp.7-15, 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is then divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components, the eyes and mouth, starts from these two images. An eigenface is produced by PCA training on the learning images, and an eigen-eye and an eigen-mouth are derived from it. The eye image is obtained by template matching the upper image with the eigen-eye, and the mouth image by template matching the lower image with the eigen-mouth. Expression recognition then uses geometric properties of the eyes and mouth. Simulation results show that the proposed method has a higher extraction rate than previous methods; in particular, the extraction rate for the mouth image reaches 99%. A facial expression recognition system using the proposed method achieves a recognition rate above 80% for three facial expressions: fright, anger, and happiness.
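
A minimal sketch of the template-matching step, assuming OpenCV's normalized cross-correlation stands in for the paper's matcher; the `eigen_eye` template here is a dummy crop, whereas in the paper it is derived from PCA of training images.

```python
# Sketch: locate an eigen-eye template in the upper face image by
# normalized cross-correlation. Array contents are dummies.
import cv2
import numpy as np

upper_face = np.random.default_rng(2).integers(0, 256, (120, 200), dtype=np.uint8)
eigen_eye = upper_face[40:60, 50:90].copy()        # stand-in for the PCA template

scores = cv2.matchTemplate(upper_face, eigen_eye, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)       # best score and its location
h, w = eigen_eye.shape
print(f"eye box: {top_left} to ({top_left[0] + w}, {top_left[1] + h}), score {best:.2f}")
```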

Facial Regions Detection Using the Color and Shape Information in Color Still Images (컬러 정지 영상에서 색상과 모양 정보를 이용한 얼굴 영역 검출)

  • 김영길;한재혁;안재형
    • Journal of Korea Multimedia Society, v.4 no.1, pp.67-74, 2001
  • In this paper, we propose a face detection algorithm that uses color and shape information in color still images. The proposed algorithm operates only on the chrominance components (Cb and Cr) of the YCbCr color space in order to reduce the effect of varying lighting conditions. The input image is segmented into skin-tone pixels, and the segmented image then undergoes morphological filtering and geometric correction to eliminate noise and simplify the facial candidate regions. Multiple facial regions in an input image can be isolated by connected-component labeling, and tilted facial regions can be detected by extracting ellipse features based on second moments.
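
A plausible OpenCV sketch of this pipeline follows. The Cr/Cb thresholds are common literature defaults rather than the paper's values, the area cutoff is an assumption, and note that OpenCV orders the channels as Y, Cr, Cb.

```python
# Sketch: skin-tone thresholding in Cr/Cb, morphological cleanup, then
# connected-component labeling of facial candidate regions.
import cv2
import numpy as np

def face_candidates(bgr: np.ndarray) -> list[tuple[int, int, int, int]]:
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))    # Cr/Cb skin band
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)        # remove noise
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)       # fill small holes
    n, _, stats, _ = cv2.connectedComponentsWithStats(skin)
    # Bounding boxes of sufficiently large components (label 0 is background).
    return [tuple(stats[i, :4]) for i in range(1, n) if stats[i, 4] > 500]

print(face_candidates(np.full((240, 320, 3), (120, 140, 170), dtype=np.uint8)))
```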

Design of a face recognition system for person identification using a CCTV camera (폐쇄회로 카메라를 이용한 신분 확인용 실물 얼굴인식시스템의 설계)

  • 이전우;성효경;김성완;최흥문
    • Journal of the Korean Institute of Telematics and Electronics C, v.35C no.5, pp.50-58, 1998
  • We propose an efficient face recognition system for controlling access to a restricted zone, using face region detectors based on facial symmetry together with extended self-organizing maps (ESOM) that have both sensory and descriptive synapses. Based on the visual cues of facial symmetry, we apply horizontal and vertical projections to elliptic regions detected by the GHT (generalized Hough transform) to identify all face regions in a complex background. We also propose an ESOM that can exploit principal components and imitate elastic similarity matching to authenticate the faces of enrolled members. To cope with changes such as facial expression or wearing glasses, the facial descriptions of each member at the time of authentication are simultaneously updated on the descriptive synapses online, using the incremental learning of the proposed ESOM. Experimental results prove the feasibility of our approach.
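
The symmetry cue can be illustrated simply: within a candidate region, the vertical projection profile of a frontal face should be close to its mirror image. The score below is a simplified stand-in for that cue, not the paper's GHT/ESOM pipeline.

```python
# Sketch: score left-right symmetry of a grayscale region via its
# vertical projection profile (column sums).
import numpy as np

def symmetry_score(gray_face: np.ndarray) -> float:
    """1.0 means a perfectly mirror-symmetric vertical projection."""
    profile = gray_face.astype(np.float64).sum(axis=0)   # column sums
    diff = np.abs(profile - profile[::-1]).sum()
    return 1.0 - diff / (2 * profile.sum() + 1e-9)

sym = np.tile(np.concatenate([np.arange(50), np.arange(50)[::-1]]), (80, 1))
print(symmetry_score(sym))                                 # ~1.0: symmetric
print(symmetry_score(np.tile(np.arange(100), (80, 1))))    # lower: asymmetric
```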

Facial expression recognition based on pleasure and arousal dimensions (쾌 및 각성차원 기반 얼굴 표정인식)

  • 신영숙;최광남
    • Korean Journal of Cognitive Science, v.14 no.4, pp.33-42, 2003
  • This paper presents a new system for facial expression recognition based on a dimension model of internal states. Facial expression information is extracted in three steps. In the first step, a Gabor wavelet representation extracts the edges of the facial components. In the second step, sparse features of facial expressions are extracted from neutral faces using the fuzzy C-means (FCM) clustering algorithm, and in the third step they are extracted from expression images using the Dynamic Model (DM). Finally, we demonstrate facial expression recognition based on the dimension model of internal states using a multi-layer perceptron. The two-dimensional structure of emotion shows that it is possible to recognize not only facial expressions related to basic emotions but also expressions of various other emotions.
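
The first step can be sketched with a small Gabor filter bank whose maximum response across orientations highlights component edges; the kernel parameters below are illustrative defaults, not the paper's settings.

```python
# Sketch: Gabor filter bank over several orientations; strong responses
# mark edges of facial components.
import cv2
import numpy as np

def gabor_edges(gray: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    """Max response over orientations; strong values mark component edges."""
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern))
    return np.max(np.stack(responses), axis=0)

edges = gabor_edges(np.random.default_rng(3).integers(0, 256, (96, 96)).astype(np.uint8))
print(edges.shape, float(edges.max()))
```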

A Study of Evaluation System for Facial Expression Recognition based on LDP (LDP 기반의 얼굴 표정 인식 평가 시스템의 설계 및 구현)

  • Lee, Tae Hwan;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Convergence Security Journal, v.14 no.7, pp.23-28, 2014
  • This study proposes the design and implementation of an evaluation system for facial expression recognition. The LDP (Local Directional Pattern) feature encodes the edge responses in different directions around a pixel, based on its relationships with neighboring pixels. It must be verified that the LDP code can represent facial features correctly under various conditions. To this end, we built a facial expression recognition system for testing LDP performance quickly; the proposed evaluation system consists of six components. Using the system, we compare the recognition rates of several local micro-patterns (LDP, Gabor, LBP).
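
The LDP code itself is well defined in the literature: convolve with the eight Kirsch compass masks and set a bit for the k strongest absolute responses at each pixel (k = 3 is typical). A minimal sketch, with a homemade mask generator:

```python
# Sketch of the LDP code: eight Kirsch compass masks, top-k responses per pixel.
import numpy as np
from scipy.ndimage import convolve

RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise

def kirsch_masks() -> list[np.ndarray]:
    base = [5, 5, 5, -3, -3, -3, -3, -3]   # three 5s rotate around the ring
    masks = []
    for r in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(RING, base[r:] + base[:r]):
            m[i, j] = v
        masks.append(m)
    return masks

def ldp_code(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """Set one bit per direction for the k strongest Kirsch edge responses."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in kirsch_masks()])
    top = np.argsort(-responses, axis=0)[:k]       # k most prominent directions
    code = np.zeros(gray.shape, dtype=np.uint8)
    for bits in top:
        code |= (1 << bits).astype(np.uint8)
    return code

print(ldp_code(np.random.default_rng(4).integers(0, 256, (8, 8)))[0])
```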

A Study on the Ratio of Human and Dog Facial Components based on Principal Component Analysis (주성분 분석기반 인간과 개의 얼굴 비율 연구)

  • Lee, Young-suk;Ki, Dae Wook
    • Journal of Korea Multimedia Society, v.23 no.10, pp.1339-1347, 2020
  • This study is a preliminary step toward designing a character automation system that takes the facial characteristics of mammals into account. The experiment was conducted on dogs (by breed) and humans, both of which are widely used in content production. First, data were extracted from 100 dog breeds and 100 human faces. Second, criteria for measuring the ratios of the important parts of dog and human faces were suggested, and a comparative analysis of dog and human faces was conducted. Lastly, principal component analysis (PCA) identified the most characteristic elements in the faces of dogs and humans. As a result, it was confirmed that face length, eye size, glabella length, and the distances between the glabella and other parts are important. The features that distinguish a dog's face from a human's are expected to contribute to automated animal character generation.
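
The PCA step can be sketched as follows; the ratio feature names and the data matrix are hypothetical stand-ins for the measurements described above.

```python
# Sketch: PCA over facial ratio features to find which ratios vary most.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

features = ["face_length", "eye_size", "glabella_length", "muzzle_length"]
X = np.random.default_rng(5).normal(size=(200, len(features)))  # stand-in ratios

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
for pc, loadings in enumerate(pca.components_, start=1):
    ranked = sorted(zip(features, loadings), key=lambda t: -abs(t[1]))
    print(f"PC{pc}: most influential ->", ranked[0][0])
```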

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services, v.7 no.2, pp.23-35, 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the skin color distribution as Gaussian, lack robustness under varying lighting conditions and thus require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expressions of the 3D face model.
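
A heavily simplified 2D rendition of a Waters-style linear muscle: vertices inside the muscle's angular and radial zone of influence are pulled toward the attachment point with cosine falloff. All parameter values are illustrative, and this is a sketch of the general technique rather than the paper's extended version.

```python
# Sketch: simplified 2D linear muscle with angular and radial cosine falloff.
import numpy as np

def muscle_pull(verts, tail, head, cone=0.7, strength=0.4):
    """Pull 2D vertices toward the muscle attachment `tail` (simplified)."""
    axis = head - tail
    reach = np.linalg.norm(axis)          # how far the muscle influence extends
    out = verts.astype(float).copy()
    for i, v in enumerate(verts):
        d = v - tail
        dist = np.linalg.norm(d)
        if dist == 0 or dist > reach:
            continue                      # outside radial range
        cos_a = np.clip(d @ axis / (dist * reach), -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle > cone:
            continue                      # outside angular zone of influence
        falloff = np.cos(angle) * np.cos(np.pi / 2 * dist / reach)
        out[i] = v - strength * falloff * d
    return out

pts = np.array([[0.5, 0.1], [0.8, 0.0], [2.0, 0.0]])
print(muscle_pull(pts, tail=np.array([0.0, 0.0]), head=np.array([1.0, 0.0])))
```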
