• Title/Summary/Keyword: Facial emotion

Analysis and Synthesis of Facial Expression using Base Faces (기준얼굴을 이용한 얼굴표정 분석 및 합성)

  • Park, Moon-Ho;Ko, Hee-Dong;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.8 / pp.827-833 / 2000
  • Facial expression is an effective tool for expressing human emotion. In this paper, a facial expression analysis method based on base faces and their blending ratios is proposed. Seven base faces were chosen as the axes for describing and analyzing an arbitrary facial expression: surprise, fear, anger, disgust, happiness, sadness, and the expressionless face. Each facial expression was built by fitting a generic 3D facial model to a facial image. Two comparable search methods, Genetic Algorithms and Simulated Annealing, were used to find the blending ratios of the base faces (sketched below). The usefulness of the proposed method for facial expression analysis was demonstrated by the facial expression synthesis results.
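The abstract does not give the optimizer details, so the following is a minimal sketch of a simulated-annealing search for the seven blending ratios, assuming each face has already been reduced to a feature vector (the paper instead fits a 3D model to images). The function name, cooling schedule, and step size are illustrative.

```python
import numpy as np

def simulated_annealing_blend(base_faces, target, iters=5000, t0=1.0, seed=0):
    """Search the 7 blending ratios that best reproduce `target`.

    base_faces: (7, D) array, one feature vector per base face
    target:     (D,) feature vector of the expression to analyze
    Returns the best weight vector (non-negative, summing to 1).
    """
    rng = np.random.default_rng(seed)
    n = base_faces.shape[0]

    def cost(w):
        return np.linalg.norm(w @ base_faces - target)

    w = np.full(n, 1.0 / n)              # start from a uniform blend
    best_w, best_c = w.copy(), cost(w)
    c = best_c
    for i in range(iters):
        t = t0 * (1.0 - i / iters)       # linear cooling schedule
        cand = w + rng.normal(0.0, 0.05, n)
        cand = np.clip(cand, 1e-9, None)
        cand /= cand.sum()               # keep ratios on the simplex
        cc = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if cc < c or rng.random() < np.exp(-(cc - c) / max(t, 1e-9)):
            w, c = cand, cc
            if c < best_c:
                best_w, best_c = w.copy(), c
    return best_w
```

Keeping the weights non-negative and normalized mirrors the paper's framing of an arbitrary expression as a blend of the seven base faces.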

Study of expression in virtual character of facial smile by emotion recognition (감성인식에 따른 가상 캐릭터의 미소 표정변화에 관한 연구)

  • Lee, Dong-Yeop
    • Cartoon and Animation Studies / s.33 / pp.383-402 / 2013
  • In this study, we apply the Facial Action Coding System (FACS), an anatomically grounded scheme for coding the facial muscles, to the facial expressions displayed in response to emotional change, and verify it by reproducing the Duchenne smile on a virtual character. Duchenne smiles were extracted through an emotion-induction experiment with theater-department students trained for the experiment (two men, two women). From the extracted expressions, data on the facial muscles were collected: the frequency of movement of the muscles around the mouth and lips and of the other facial muscles was calculated and applied to the virtual character. Contraction of the Zygomatic Major pulls the lip corners upward and raises the cheeks, and together with contraction of the Orbicularis Oculi the lower eyelid rises, producing the look of a smile. Movement of the Zygomatic Major was observed together with the muscles around the nose (AU9) and the muscles around the mouth associated with openness (AU25, AU26, AU27). The Duchenne smile occurred when the Orbicularis Oculi and the Zygomatic Major moved at the same time. On this basis, we separate the Orbicularis Oculi, which appears in smiles of genuine emotion and sympathy, from the Zygomatic Major, which can be moved at will, apply both to the virtual character, and examine whether genuine and posed expressions can be distinguished on the virtual character (see the sketch below).
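A minimal sketch of the AU-level distinction described above. The AU numbers follow FACS, but the intensity values and the threshold are hypothetical.

```python
# Hypothetical AU intensity maps (0..1) distinguishing a Duchenne smile
# (lip-corner puller AU12 plus cheek raiser AU6) from a posed smile
# (AU12 alone). The AU numbering follows FACS; the values are illustrative.
DUCHENNE_SMILE = {"AU6": 0.8, "AU12": 0.9, "AU25": 0.3}
POSED_SMILE    = {"AU12": 0.9, "AU25": 0.3}

def is_duchenne(aus: dict, threshold: float = 0.5) -> bool:
    """A smile counts as Duchenne only if the Orbicularis Oculi (AU6)
    fires together with the Zygomatic Major (AU12)."""
    return aus.get("AU6", 0.0) >= threshold and aus.get("AU12", 0.0) >= threshold

print(is_duchenne(DUCHENNE_SMILE))  # True
print(is_duchenne(POSED_SMILE))     # False
```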

Facial Expression Training Digital Therapeutics for Autistic Children (자폐아를 위한 표정 훈련 디지털 치료제)

  • Jiyeon Park;Kyoung Won Lee;Seong Yong Ohm
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.581-586 / 2023
  • Recently, a drama featuring a lawyer with autism spectrum disorder has attracted much attention, raising interest in the difficulties faced by people with autism spectrum disorders. If the autism spectrum is detected early and proper education and treatment are provided, the prognosis can be improved, so the development of treatments is urgently needed. Drugs currently used to treat the autism spectrum often have side effects, so Digital Therapeutics, which have no side effects and can be supplied in large quantities, are drawing attention. In this paper, we introduce 'AEmotion', an application and Digital Therapeutic that provides emotion and facial expression learning for toddlers with autism spectrum disorder. The system is developed as a smartphone application to hold autistic children's interest during training and to make testing easy. Built with machine learning, it consists of three main stages: an 'emotion learning' step for learning emotions with facial expression cards, an 'emotion identification' step for checking whether the user has understood the emotions and expressions, and an 'expression training' step for making the appropriate facial expressions (the stage flow is sketched below). The system is expected to help autistic toddlers whose difficulty recognizing facial expressions and emotions impairs their social interactions.
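The abstract names the three stages but not how they are sequenced, so this is a hypothetical sketch of the stage progression only; the names are illustrative, and the machine-learning expression classifier is omitted.

```python
from enum import Enum, auto

class Stage(Enum):
    EMOTION_LEARNING = auto()        # study emotions with expression cards
    EMOTION_IDENTIFICATION = auto()  # quiz: match a card to its emotion
    EXPRESSION_TRAINING = auto()     # mimic the expression for the camera

def next_stage(stage: Stage, passed: bool) -> Stage:
    """Advance only when the current stage is passed; otherwise repeat it."""
    order = list(Stage)
    if not passed:
        return stage
    i = order.index(stage)
    return order[min(i + 1, len(order) - 1)]
```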

Difference in visual attention during the assessment of facial attractiveness and trustworthiness (얼굴 매력도와 신뢰성 평가에서 시각적 주의의 차이)

  • Sung, Young-Shin;Cho, Kyung-Jin;Kim, Do-Yeon;Kim, Hack-Jin
    • Science of Emotion and Sensibility / v.13 no.3 / pp.533-540 / 2010
  • This study was designed to examine the difference in visual attention between the evaluation of facial attractiveness and that of facial trustworthiness, which may be the two most fundamental social evaluations for forming first impressions across various types of social interaction. In study 1, participants evaluated the attractiveness and trustworthiness of 40 novel faces while their gaze directions were recorded with an eye-tracker. The analysis revealed that participants spent significantly longer gaze fixation time on certain facial features, such as the eyes and nose, during the evaluation of trustworthiness than during the evaluation of attractiveness (a fixation-time sketch follows). In study 2, participants performed the same evaluation tasks, except that in each trial a word was briefly displayed on a certain facial feature, followed by unexpected recall tests of the previously viewed words. The recognition rate of words that had been presented on the nose was significantly higher in the trustworthiness task than in the attractiveness task. These findings suggest that the evaluation of facial trustworthiness may be distinguished from that of facial attractiveness by the allocation of attentional resources.
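A minimal sketch of the fixation-time analysis implied by study 1, assuming fixations have already been extracted from the eye-tracker stream; the AOI boxes and coordinates are hypothetical (real studies would derive them from facial landmarks per stimulus).

```python
from collections import defaultdict

# Hypothetical areas of interest (AOIs) as axis-aligned boxes in image
# coordinates: (x_min, y_min, x_max, y_max).
AOIS = {"eyes": (80, 60, 240, 110), "nose": (130, 110, 190, 170),
        "mouth": (110, 180, 210, 230)}

def fixation_time_per_aoi(fixations):
    """Sum fixation durations per AOI.

    fixations: iterable of (x, y, duration_ms) gaze fixations.
    Returns {aoi_name: total_ms}; fixations outside all AOIs are ignored.
    """
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
                break
    return dict(totals)
```

Comparing these per-AOI totals across the two tasks is what surfaces the eyes/nose difference the study reports.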

Design and Implementation of Walking Motions Applied with Player's Emotion Factors According to Variable Statistics of RPG Game Character (RPG게임캐릭터의 능력치변화량에 따라 감정요소가 적용된 걷기동작 구현)

  • Kang, Hyun-Ah;Kim, Mi-Jin
    • The Journal of the Korea Contents Association / v.7 no.5 / pp.63-71 / 2007
  • The technique of changing facial expressions has been adopted from several commercial games, and design methods that let a game character elicit the player's empathy are expected to diversify in the future. In this paper, as such a design method, we implement walking motions for an RPG game character to which 'human-emotion' factors are applied according to the variation of the character's statistics. After analyzing the emotions of human facial expressions and emotion-driven walking motions in examples from character animation theory, we divide walking motions with human-emotion factors into eight types through their relationship to the statistics factors of the RPG genre (see the mapping sketched below). These are then applied to a knight character, the RPG character type most similar in physique to a human, and walking motions are produced as the statistics vary. As the player controls a game character with 'human-emotion' factors applied, the player's empathy for the character grows, and the level of immersion in play is also expected to increase.
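A hypothetical sketch of mapping a statistics change to one of eight emotion-flavored walk cycles. The paper defines eight types via RPG statistics factors, but the abstract does not enumerate them, so the names and thresholds below are illustrative only.

```python
WALK_STYLES = ["dejected", "weary", "cautious", "neutral",
               "content", "confident", "proud", "triumphant"]

def walk_style(stat_delta: float, max_delta: float = 100.0) -> str:
    """Bucket a stat change (e.g. HP/EXP gained or lost) into a walk style."""
    ratio = max(-1.0, min(1.0, stat_delta / max_delta))   # clamp to [-1, 1]
    index = round((ratio + 1.0) / 2.0 * (len(WALK_STYLES) - 1))
    return WALK_STYLES[index]
```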

A Generation Method of Comic Facial Expressions for Intelligent Avatar Communications (지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 2000.11d / pp.227-230 / 2000
  • Sign language can be used as an auxiliary means of communication between avatars of different languages in cyberspace. There, an intelligent communication method can also be utilized to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real pictures (the message format is sketched below). In this paper, a method of generating facial gesture CG animation on different avatar models is provided. First, to edit emotional expressions efficiently, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation from these parameters is then investigated. Experimental results show that the method could be used for intelligent avatar communication between Korean and Japanese.
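A minimal sketch of the intelligently coded data described above: one animation frame as a handful of parameters instead of pixels. The field names are illustrative; the paper specifies joint angles and action units but not a concrete wire format.

```python
from dataclasses import dataclass, field

@dataclass
class AvatarFrame:
    """One intelligently coded animation frame, sent instead of an image."""
    timestamp_ms: int
    joint_angles: dict = field(default_factory=dict)  # e.g. {"r_elbow": 55.0}
    action_units: dict = field(default_factory=dict)  # e.g. {"AU12": 0.8}

# A few dozen floats per frame replace a full video frame, which is what
# makes real-time cross-language avatar communication feasible.
frame = AvatarFrame(0, {"r_shoulder": 30.0, "r_elbow": 55.0},
                    {"AU6": 0.4, "AU12": 0.8})
```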

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combined method for recognizing human affective states such as anger, happiness, sadness, and surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained recognizers of emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as the pitch signal, energy, and their derivatives, which were trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions were better than those for either of the two isolated sets of parameters (a fusion sketch follows). The simulation results were also compared with human questionnaire results.
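A minimal sketch of combining the two recognizers' outputs, assuming each already produces a per-emotion score; the weighted-sum rule is an illustrative stand-in, as the abstract does not state the combination method.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse_scores(voice_scores, face_scores, alpha=0.5):
    """Late fusion of two per-emotion score vectors.

    voice_scores: per-emotion likelihoods from the voice HMMs
    face_scores:  per-emotion posteriors from the facial-image NN
    alpha:        weight on the voice modality (illustrative choice)
    """
    v = np.asarray(voice_scores) / np.sum(voice_scores)  # normalize each
    f = np.asarray(face_scores) / np.sum(face_scores)    # modality first
    combined = alpha * v + (1.0 - alpha) * f
    return EMOTIONS[int(np.argmax(combined))]

print(fuse_scores([0.1, 0.5, 0.2, 0.2], [0.2, 0.6, 0.1, 0.1]))  # happiness
```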

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a speech animation sequence is synthesized by interpolating the corresponding sequence of key visemes (sketched below). Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During synthesis, an importance-based scheme is introduced to combine lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
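A minimal sketch of the viseme-interpolation step, assuming the TTS engine supplies phoneme timings. Linear interpolation stands in for the paper's scheme, and the expression blending and importance-based combination are omitted.

```python
import numpy as np

def synthesize_track(key_visemes, timed_phonemes, fps=120):
    """Interpolate key viseme shapes along a phonetic transcript.

    key_visemes:    {phoneme: (D,) vertex/blendshape vector}
    timed_phonemes: [(phoneme, start_sec, end_sec), ...] from a TTS engine
    Returns an (N, D) array of animation frames.
    """
    end = timed_phonemes[-1][2]
    times = np.arange(0.0, end, 1.0 / fps)
    centers = [(s + e) / 2.0 for _, s, e in timed_phonemes]
    shapes = np.stack([key_visemes[p] for p, _, _ in timed_phonemes])
    frames = np.empty((len(times), shapes.shape[1]))
    for i, t in enumerate(times):
        j = int(np.searchsorted(centers, t))
        if j == 0 or j >= len(centers):
            frames[i] = shapes[min(j, len(centers) - 1)]  # clamp at the ends
        else:
            a = (t - centers[j - 1]) / (centers[j] - centers[j - 1])
            frames[i] = (1 - a) * shapes[j - 1] + a * shapes[j]
    return frames
```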

Emotion Recognition Method based on Feature and Decision Fusion using Speech Signal and Facial Image (음성 신호와 얼굴 영상을 이용한 특징 및 결정 융합 기반 감정 인식 방법)

  • Joo, Jong-Tae;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.11-14 / 2007
  • Emotion recognition is essential for interaction between humans and computers. In this paper, speech signals and facial images are pattern-classified into five emotions (Normal, Happy, Sad, Anger, Surprise) by applying BL (Bayesian Learning) and PCA (Principal Component Analysis). To compensate for the weaknesses of each signal and to raise the recognition rate, emotion fusion is then performed using a decision fusion method and a feature fusion method. In the decision fusion method, the recognition scores obtained from each recognition system are fused by applying fuzzy membership functions; in the feature fusion method, strong features are first selected by SFS (Sequential Forward Selection) and then applied to an MLP (Multi Layer Perceptron) neural network to perform emotion fusion (sketched below).
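A minimal sketch of the SFS feature-selection step feeding an MLP, using scikit-learn as an illustrative stand-in; the paper's features, hyperparameters, and fuzzy decision-fusion stage are not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def sequential_forward_selection(X, y, k):
    """Greedy SFS: grow a feature set one column at a time, keeping the
    column that most improves cross-validated accuracy of an MLP."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            cols = selected + [j]
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            return cross_val_score(clf, X[:, cols], y, cv=3).mean()
        best = max(remaining, key=score)   # best single addition this round
        selected.append(best)
        remaining.remove(best)
    return selected
```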

Facial Expression Synthesis Using 3D Facial Modeling (3차원 얼굴 모델링을 이용한 표정 합성)

  • 심연숙;변혜란;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1998.11a / pp.40-44 / 1998
  • Research on natural facial animation is being actively pursued to provide users with a friendly interface [5][6]. This paper proposes an animation method for synthesizing natural facial expressions. For facial animation modeled on a specific person, a generic model composed of a 3D mesh is first registered to that person to obtain a person-specific 3D facial model (registration sketched below). For natural expression synthesis of Korean faces, we build a generic model reflecting the characteristics of Korean faces, based on research results on the standard Korean face, and use it to obtain a specific person's 3D facial model. An expression synthesis method based on anatomical structure, such as the actual facial muscles and skin tissue, is used so that realistic and natural facial animation can be achieved.
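A minimal sketch of the registration step, assuming corresponding landmarks are available on both the generic model and the target face; a similarity Procrustes alignment stands in for the paper's fitting procedure, which additionally deforms the mesh locally.

```python
import numpy as np

def fit_generic_model(generic_vertices, generic_landmarks, target_landmarks):
    """Roughly register a generic 3D face mesh to a specific person.

    generic_vertices:  (N, 3) mesh vertices of the generic model
    generic_landmarks: (K, 3) landmark positions on the generic model
    target_landmarks:  (K, 3) corresponding landmarks on the person
    Returns the transformed (N, 3) vertices.
    """
    mu_g = generic_landmarks.mean(axis=0)
    mu_t = target_landmarks.mean(axis=0)
    G, T = generic_landmarks - mu_g, target_landmarks - mu_t
    U, S, Vt = np.linalg.svd(G.T @ T)          # Kabsch/Procrustes alignment
    d = np.sign(np.linalg.det(U @ Vt))
    R = (U * np.array([1.0, 1.0, d])) @ Vt     # proper rotation only
    s = (S * np.array([1.0, 1.0, d])).sum() / (G ** 2).sum()  # optimal scale
    return s * (generic_vertices - mu_g) @ R + mu_t
```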
