• Title/Summary/Keyword: Facial Emotion Expression

202 search results

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • v.27 no.1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. In particular, many multimodal studies combining facial and body expressions have relied on ordinary cameras, which generally produce only two-dimensional images and therefore capture a limited amount of information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research should support the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2012
  • A virtual human used for HCI in digital content expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal multimodal cues in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues in the design of an emotion-expressing virtual human. First, the relative impacts of the different modalities are analyzed by exploring emotion recognition for the virtual human. An experiment then evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as the valence and activation dimensions. Measurements also assess the impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life. Experimental results show that congruence of the virtual human's facial and postural expressions facilitates the perception of emotion categories, that categorical recognition is dominated by the facial expression modality, and that the postural modality is preferred for judging the level of the activation dimension. These results will be used in the implementation of an animation engine system and behavior synchronization for an emotion-expressing virtual human.

Facial EMG pattern evoked by pleasant and unpleasant odor stimulus

  • Yamada, Hiroshi;Kaneki, Noriaki;Shimada, Koji;Okii, Hironori
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.05a
    • /
    • pp.11-15
    • /
    • 2002
  • Activities of the venter frontalis, corrugator, levator labii superioris, and greater zygomatic muscles were measured in five male subjects while they made pleasant, unpleasant, and neutral facial expressions, and while they were presented with pleasant, disgusting, and neutral odors. The pleasant expression and odor activated the zygomatic muscles, while the unpleasant expression and odor increased corrugator muscle activity.


Character's facial expression realization program with emotional state dimensions (내적 정서 상태 차원에 근거한 캐릭터 표정생성 프로그램 구현)

  • Ko, Hae-Young;Lee, Jae-Sik;Kim, Jae-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.3
    • /
    • pp.438-444
    • /
    • 2008
  • In this study, we propose an automatic facial expression generation tool that can efficiently produce various facial expressions for animation characters expressing diverse emotions. For this purpose, nine types of facial expressions were defined by adopting Russell's pleasure-displeasure and arousal-sleep coordinate values of internal states. By rendering specific coordinate values along these two basic dimensions of emotion (pleasure-displeasure and arousal-sleep) and their combinations, the proposed tool can yield various facial expressions that reflect subtle changes in a character's emotion. The tool is expected to provide a useful method for generating character faces with a wide range of emotions.
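
The coordinate-to-expression idea in this abstract can be sketched as a simple lookup over Russell's two-dimensional emotion space. This is a minimal illustration only: the nine category names and the 0.33 binning threshold are assumptions for the sketch, not values from the paper.

```python
# Bin a point in Russell's 2D emotion space (pleasure-displeasure on the
# x-axis, arousal-sleep on the y-axis) into one of 9 expression categories
# arranged as a 3x3 grid. Labels and threshold are illustrative.
EXPRESSIONS = [
    ["distressed", "aroused", "excited"],   # high arousal
    ["displeased", "neutral", "pleased"],   # mid arousal
    ["depressed",  "sleepy",  "relaxed"],   # low arousal
]

def expression_for(pleasure: float, arousal: float, threshold: float = 0.33) -> str:
    """Map coordinates in [-1, 1]^2 to one of 9 expression labels."""
    col = 0 if pleasure < -threshold else (2 if pleasure > threshold else 1)
    row = 0 if arousal > threshold else (2 if arousal < -threshold else 1)
    return EXPRESSIONS[row][col]
```

A real tool would then drive face-rig parameters continuously from the coordinates rather than snapping to discrete labels, which is what lets the approach capture "subtle changes" in emotion.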

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.1-5
    • /
    • 2009
  • This paper applies a mirror-reflected, 2D emotion recognition database to a 3D application and builds a fuzzy model of facial expression using emotion probabilities. The proposed facial expression function applies fuzzy theory to three basic movements for facial expressions. Feature vectors for emotion recognition are carried over from the 2D domain, which uses mirror-reflected multi-images, to the 3D application, yielding a fuzzy nonlinear facial expression model of a real model based on a 2D model. We use the average probabilities of the six basic expressions: happiness, sadness, disgust, anger, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. The paper compares and analyzes the feature vectors of a real model with those of a 3D human-like avatar.
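
The core idea of driving an expression from emotion probabilities can be sketched as a probability-weighted blend of per-emotion expression parameters. This is not the paper's fuzzy model; the basis vectors below (brow, eyelid, mouth displacements) and their values are illustrative assumptions.

```python
# Blend facial-expression parameters by emotion probability: each basic
# emotion contributes a parameter vector, weighted by its (normalized)
# probability. Values are illustrative, not from the paper.
BASIS = {  # emotion -> (brow, eyelid, mouth) displacement
    "happy":    (0.2, 0.1, 0.9),
    "sad":      (-0.6, -0.3, -0.5),
    "surprise": (0.9, 0.8, 0.4),
}

def blend(probabilities: dict) -> tuple:
    """Weighted average of basis expressions by emotion probability."""
    total = sum(probabilities.values())
    out = [0.0, 0.0, 0.0]
    for emotion, p in probabilities.items():
        for i, v in enumerate(BASIS[emotion]):
            out[i] += (p / total) * v
    return tuple(out)
```

Animating the probabilities over time then produces a continuously changing, "dynamic" expression rather than a jump between discrete poses.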

The Effects of the Emotion Regulation Strategy to the Disgust Stimulus on Facial Expression and Emotional Experience (혐오자극에 대한 정서조절전략이 얼굴표정 및 정서경험에 미치는 영향)

  • Jang, Sung-Lee;Lee, Jang-Han
    • Korean Journal of Health Psychology
    • /
    • v.15 no.3
    • /
    • pp.483-498
    • /
    • 2010
  • This study examines the effects of emotion regulation strategies on facial expressions and emotional experiences, comparing groups using antecedent-focused and response-focused regulation. Fifty female undergraduate students were instructed to use different emotion regulation strategies while viewing a disgust-inducing film; their facial expressions and emotional experiences were measured during viewing. Participants in the expression group (EG) showed the highest frequency of disgust-related action units, followed by the expressive dissonance group (DG), the cognitive reappraisal group (CG), and the expressive suppression group (SG). The upper region of the face reflected genuine emotion: in this region, the frequency of disgust-related action units was lower in the CG than in the EG or DG. PANAS results indicated the largest decrease in positive emotion in the DG, but an increase in positive emotion in the CG. This study suggests that cognitive reappraisal of an event is a more functional emotion regulation strategy, with respect to both facial expression and emotional experience, than the other strategies examined.

Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot (3차원 정서 공간에서 마스코트 형 얼굴 로봇에 적용 가능한 동적 감정 모델)

  • Park, Jeong-Woo;Lee, Hui-Sung;Jo, Su-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.3
    • /
    • pp.282-287
    • /
    • 2007
  • Humanoid and android robots are emerging as the trend shifts from industrial robots to personal robots, so human-robot interaction will increase. The ultimate objective of humanoid and android research is a robot like a human; in this respect, implementing a robot's facial expressions is necessary to make a human-like robot. This paper proposes a dynamic emotion model that enables a mascot-type robot to display human-like and more recognizable facial expressions.


Study on the Relationship Between 12Meridians Flow and Facial Expressions by Emotion (감정에 따른 얼굴 표정변화와 12경락(經絡) 흐름의 상관성 연구)

  • Park, Yu-Jin;Moon, Ju-Ho;Choi, Su-Jin;Shin, Seon-Mi;Kim, Ki-Tae;Ko, Heung
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.26 no.2
    • /
    • pp.253-258
    • /
    • 2012
  • Facial expression is an important method of communication. In oriental medicine, the shape of the face changes according to emotion, and differences arise in physiology and pathology. To verify this theory, we studied the correlation between emotional facial expressions and meridian and collateral flow. The facial regions divide by meridian as follows: the outer brow belongs to the Gallbladder meridian; the inner brow and medial canthus to the Bladder meridian; the lateral canthus to the Gallbladder meridian; the upper eyelid to the Bladder meridian; the lower eyelid and central cheeks to the Stomach meridian; the lateral cheeks to the Small Intestine meridian; and the upper and lower lips, lip corners, and chin to the Small and Large Intestine meridians. Six meridians and collaterals were associated with happiness, indicating that happiness carries high importance in facial expression. Five were associated with anger, and four each with fear and sadness, indicating that fear and sadness carry lower importance in facial expression than the other emotions. Based on the yang meridians, which normally flow downward in the body, the ratios of anterograde to retrograde flow were 3:4 for happiness, 2:5 for anger, 5:3 for sadness, and 4:1 for fear. Based on the meridian flow of the face, the ratios were 5:2 for happiness, 3:4 for anger, 3:5 for sadness, and 4:1 for fear. We found that the actual change in meridian and collateral flow by emotion does not correspond to the expected change.

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.8
    • /
    • pp.754-759
    • /
    • 2007
  • In this paper, we propose a bi-modal sensor fusion algorithm, an emotion recognition method able to classify four emotions (happy, sad, angry, surprise) by using facial images and speech signals together. Feature vectors are extracted from the speech signal using acoustic features, without linguistic features, and the emotional pattern is classified using a neural network. From the facial image, features are selected around the mouth, eyes, and eyebrows, and the extracted feature vectors are reduced to low-dimensional feature vectors by Principal Component Analysis (PCA). The emotion recognition results from the facial image and the speech signal are then fused into a single decision.
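
The final fusion step described in this abstract can be sketched as a decision-level weighted combination of per-modality class scores. This is a generic sketch, not the paper's algorithm; the 0.6/0.4 modality weights are assumptions for illustration.

```python
# Decision-level bi-modal fusion: each modality (face, speech) produces a
# score per emotion; a weighted sum of the two score vectors selects the
# final label. Weights are illustrative assumptions.
EMOTIONS = ["happy", "sad", "angry", "surprise"]

def fuse(face_scores: dict, speech_scores: dict,
         w_face: float = 0.6, w_speech: float = 0.4) -> str:
    """Return the emotion with the highest weighted combined score."""
    combined = {
        e: w_face * face_scores[e] + w_speech * speech_scores[e]
        for e in EMOTIONS
    }
    return max(combined, key=combined.get)
```

In practice the weights would be tuned on validation data, since the reliability of each modality differs per emotion (e.g. surprise is often more visible than audible).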

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal
    • /
    • v.37 no.6
    • /
    • pp.1199-1210
    • /
    • 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components - multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
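
The two phases described in this abstract can be sketched as (1) deciding each action unit (AU) by majority vote over several detectors and (2) mapping the detected AU set to an emotion by prototype matching. The AU prototypes below follow commonly cited FACS-style associations (e.g. happiness with AU6+AU12) and should be treated as illustrative, not as the framework's actual mapping.

```python
# Phase 1: group decision over multiple AU detectors.
# Phase 2: infer the emotion whose AU prototype best overlaps the detections.
PROTOTYPES = {
    "happiness": {6, 12},
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
}

def majority_vote(detections: list) -> set:
    """detections: one set of AU numbers per detector; keep AUs a majority agrees on."""
    counts = {}
    for detected in detections:
        for au in detected:
            counts[au] = counts.get(au, 0) + 1
    return {au for au, c in counts.items() if c > len(detections) / 2}

def infer_emotion(aus: set) -> str:
    """Pick the prototype emotion with the largest AU overlap."""
    return max(PROTOTYPES, key=lambda e: len(PROTOTYPES[e] & aus))
```

The vote mitigates the error-accumulation problem the abstract mentions: a single weak detector's false positives are filtered out unless other detectors agree.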