• Title/Summary/Keyword: Facial emotion

Search Results: 311

Research on Correlation between Facial EMG and Arousal Level (각성수준과 얼굴근전도의 상관성에 대한 연구)

  • 류은경;황민철;변은희;민병찬;김철중
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1998.04a / pp.75-80 / 1998
  • This study examined whether facial EMG can evaluate emotions elicited by various visual stimuli, and in particular whether differences along the arousal-relaxation dimension of emotion can be measured objectively using facial EMG. Fifteen visual stimuli were used. Each stimulus was presented in random order for 30 seconds, with a 120-second rest period between presentations. After each presentation, subjects gave a subjective rating of the degree of arousal-relaxation evoked by the stimulus. Participants were 25 female university students, and facial EMG was recorded from the corrugator muscle of the left forehead and the zygomatic muscle of the cheek. The measured EMG signals were rectified (absolute values taken) and integrated to obtain their area. Facial EMG responses to the stimuli each subject had rated as the most arousing, the least arousing, the least relaxing, and the most relaxing were then compared and analyzed. The results showed that the corrugator muscle of the forehead could discriminate between arousal and relaxation: the more arousal a subject felt, the greater the corrugator activity, and corrugator activity increased most when subjects felt maximal arousal. In conclusion, facial EMG can serve as a good index for measuring the arousal-relaxation dimension of emotions elicited by various visual stimuli.

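The abstract above describes rectifying the facial EMG signal (taking absolute values) and computing its area. A minimal sketch of that step follows; the sampling rate and the synthetic signal are illustrative assumptions, not the study's actual recording parameters:

```python
import numpy as np

def rectified_emg_area(emg, fs):
    """Full-wave rectify an EMG trace and integrate it over time.

    emg: 1-D array of EMG samples (arbitrary units)
    fs:  sampling rate in Hz
    Returns the area under the rectified signal (rectangular integration).
    """
    rectified = np.abs(emg)          # full-wave rectification
    return rectified.sum() / fs      # area = sum of samples x sample period

# Hypothetical 30-second stimulus window sampled at 1000 Hz
fs = 1000
t = np.arange(0, 30, 1 / fs)
emg = 0.1 * np.sin(2 * np.pi * 60 * t)  # toy 60 Hz burst, not real EMG
area = rectified_emg_area(emg, fs)
```

Comparing the areas obtained for each subject's most- and least-arousing stimuli would then correspond to the per-condition contrast reported in the abstract.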

A Study on Efficient Facial Expression Recognition System for Customer Satisfaction Feedback (고객만족도 피드백을 위한 효율적인 얼굴감정 인식시스템에 대한 연구)

  • Kang, Min-Sik
    • Convergence Security Journal / v.12 no.4 / pp.41-47 / 2012
  • For the competitiveness of the national B2C (Business to Customer) service industry, customer-focused improvement of processes and analysis, along with changes to service systems, are needed. In other words, a business or organization should determine and provide the services its customers want, then evaluate customer satisfaction and improve service quality. To achieve this goal, accurate feedback from customers plays an important role; however, few quantitative, standardized feedback systems exist in Korea. Recently, research on ICT (Information and Communication Technology) that can recognize human emotion has been increasing, and among these approaches facial expression recognition is known as the most efficient and natural human interface. This paper analyzes more efficient facial expression recognition and proposes a customer satisfaction feedback system that uses it.

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo;Shin, Hyun-Min;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.93-100 / 2011
  • The need for research on extracting information about the faces and facial components of animation characters has been increasing, since these features can effectively express a character's emotion and personality. In this paper, we introduce a method for extracting the face and facial components of animation characters by defining a mesh model suited to characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors for the face and facial components by adapting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are real faces or facial components by means of a color similarity measure. The experimental results show that our method can reliably detect the faces and facial components of animation characters.
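The verification step described above, comparing candidate regions against learned dominant colors, can be sketched as follows. The RGB space, the mean-color comparison, and the threshold are assumptions for illustration, not the paper's exact similarity measure:

```python
import numpy as np

def color_similarity(region_pixels, dominant_color, threshold=60.0):
    """Check whether a candidate region matches a learned dominant color.

    region_pixels:  (N, 3) array of RGB pixels from the candidate area
    dominant_color: (3,) RGB dominant color learned from the mesh model
    threshold:      maximum Euclidean distance (mean color) to accept
    """
    mean_color = region_pixels.mean(axis=0)
    distance = np.linalg.norm(mean_color - dominant_color)
    return distance <= threshold

# Hypothetical skin-toned candidate region vs. a learned dominant color
region = np.full((100, 3), [225, 190, 160], dtype=float)
dominant = np.array([220, 185, 155], dtype=float)
accepted = color_similarity(region, dominant)
```

A candidate whose mean color falls outside the threshold (e.g., a blue background patch) would be rejected as a false face candidate.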

Fundamental Studies on Human Sciences by Facial Form Analysis - Based on Unit Fluid Model of Essence, Qi energy, Emotion, Blood - (안면형상연구의 인간과학적 기초 연구 - 정기신혈(精氣神血)의 유체역학적(流體力學的) 해석을 중심으로 -)

  • Kim, Jong-Won;Lee, In-Seon;Kim, Kyu-Kon;Lee, Yong-Tae;Kim, Kyung-Chul;Eom, Hyun-Sup;Chi, Gyoo-Yong
    • Journal of Physiology & Pathology in Korean Medicine / v.22 no.5 / pp.1057-1061 / 2008
  • To investigate the reasonable logic contained in the physiognomy of Eastern and old Western medicine, hypothetical research based on hydromechanics theory was performed concerning facial form types and pathologic features, especially Dr. Jisan's four types: Essence, Qi energy, Emotional Activity, and Blood (EQAB). To infer the functional relation between facial form and the EQAB factors, the EQAB were treated as fluids, on the grounds of their continual flow or periodic change and the pressure effect of their congestion, together with the premise of a linear correspondence between the appearance of an organ and the physical conditions of its inner vessels. Through this work, the unit fluid model (UFM) of Essence can be assumed to be a circular shape formed by high viscosity and surface tension; the UFM of Qi energy a quadrangular shape, from its scattering toward the four outer directions; the UFM of Emotional Activity an inverted triangular shape, from its light, uprising character; and the UFM of Blood an ellipsoid triangle, from its heavy, descending character despite circulation. The shapes derived from each UFM are reproduced during human development and ultimately manifest as distinct facial shapes through self-reproduction, as in fractal theory. In conclusion, a facial form analysis method such as the EQAB type theory can be a useful methodology for understanding human pathological and physiological features from the viewpoint of hydromechanics.

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.806-813 / 2010
  • A human facial expression shows human emotion most exactly, so it can be used as the most efficient tool for delivering a person's intention to a computer. For fast and exact recognition of facial expressions in a 2D image, this paper proposes a new method that integrates a Discrete Adaboost classification algorithm and a neural-network-based recognition algorithm. In the first step, the Adaboost algorithm finds the position and size of a face in the input image. Second, the detected face image is fed into five Adaboost strong classifiers, each trained for one facial expression. Finally, a neural-network-based recognition algorithm trained on the outputs of the Adaboost strong classifiers determines the final facial expression. The proposed algorithm achieves real-time operation and enhanced accuracy by combining the speed and accuracy of the Adaboost classification algorithm with the reliability of the neural-network-based recognition algorithm. In this paper, the proposed algorithm recognizes five facial expressions (neutral, happiness, sadness, anger, and surprise) and achieves 86-95% accuracy in real time, depending on the expression type.
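The two-stage combination described above can be sketched as five per-expression strong-classifier scores fed into a small neural network. The classifier stand-in and the untrained weights below are hypothetical placeholders; a real system would use trained Adaboost cascades and learned network weights:

```python
import numpy as np

rng = np.random.default_rng(0)

EXPRESSIONS = ["neutral", "happiness", "sadness", "anger", "surprise"]

def strong_classifier_scores(face_img):
    """Stand-in for the five per-expression Adaboost strong classifiers.

    In the paper each classifier is trained for one expression; here we
    simply return five hypothetical real-valued scores.
    """
    return rng.normal(size=5)

def mlp_recognize(scores, W1, b1, W2, b2):
    """One-hidden-layer network mapping classifier scores to an expression index."""
    h = np.tanh(scores @ W1 + b1)   # hidden layer
    logits = h @ W2 + b2            # one logit per expression
    return int(np.argmax(logits))

# Hypothetical (untrained) weights purely for illustration
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 5)), np.zeros(5)

face_img = None  # placeholder: a detected face crop would go here
label = EXPRESSIONS[mlp_recognize(strong_classifier_scores(face_img), W1, b1, W2, b2)]
```

The design point is that the network arbitrates among the five strong classifiers' raw scores instead of trusting whichever classifier fires strongest.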

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • An avatar can be used as an auxiliary methodology for text and image communication in cyberspace. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that can represent the sender's emotions is provided. A facial expression can be represented by Action Units (AUs), and we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation with these parameters is also investigated.

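The "intelligently coded data" idea above, transmitting action units rather than pictures, can be illustrated with a small sketch. The emotion-to-AU table uses standard FACS action units (e.g., AU6 cheek raiser, AU12 lip corner puller), but it is only a hypothetical subset for illustration, not the paper's mapping:

```python
# Hypothetical mapping from sender emotions to FACS Action Units,
# in the spirit of the AU-coded avatar data described above.
EMOTION_TO_AUS = {
    "happiness": [6, 12],     # cheek raiser + lip corner puller
    "sadness": [1, 4, 15],    # inner brow raiser + brow lowerer + lip corner depressor
}

def encode_expression(emotion, intensity=1.0):
    """Pack an emotion into compact (AU, intensity) pairs for transmission."""
    return [(au, intensity) for au in EMOTION_TO_AUS[emotion]]

payload = encode_expression("happiness", 0.8)  # a few bytes instead of a video frame
```

The receiving avatar would then drive its eyebrow, eye, and mouth components from these pairs, which is what makes real-time communication feasible over low bandwidth.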

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk;Suk, Ji-He;Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.12 no.3 / pp.267-278 / 2009
  • Today embodied agents generate a large amount of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish between various emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions in a way similar to human emotion. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) $\times$ 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions depending not on just one channel but on both. Additionally, facial expression (happy vs. sad face) changes the relative influence of the two channels: the audio channel has more influence when the facial expression is happy. Participants were able to feel emotions that were expressed by neither the face nor the voice from mismatched expressions, so it may be possible to express various delicate emotions with an embodied agent using only a few kinds of emotional cues.

  • PDF

Study on Facial Expression Factors as Emotional Interaction Design Factors (감성적 인터랙션 디자인 요소로서의 표정 요소에 관한 연구)

  • Heo, Seong-Cheol
    • Science of Emotion and Sensibility / v.17 no.4 / pp.61-70 / 2014
  • Verbal communication has limits in the interaction between robots and people, so nonverbal communication is required for smoother and more efficient communication, and even for a robot's emotional expression. Using a robot designed to support shopping, this study derived seven items of nonverbal information based on shopping behavior, selected facial expression as the channel for the derived nonverbal information, and coded face components through 2D analysis. It then analyzed the significance of the nonverbal expressions using 3D animation that combines the face-component codes. The analysis showed that the proposed expression method conveyed nonverbal information with a high level of significance, suggesting its potential as baseline data for research on nonverbal information. However, the case of 'embarrassment' showed limits in applying the coded face components to the model and requires more systematic study.

Dynamics of Facial Subcutaneous Blood Flow Recovery in Post-stress Period

  • Sohn, Jin-Hun;Estate M. Sokhadze;Lee, Kyung-Hwa;Lee, Jong-Mi;Park, Mi-Kyung;Park, Ji-Yeon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.11a / pp.62-68 / 2000
  • The aim of the study was to compare the effects of music and white noise on the recovery of facial blood flow parameters after stressful visual stimulation. Twenty-nine subjects participated in the experiment. Three visual stimulation sessions with aversive slides (the IAPS, disgust category) were followed by subjectively "pleasant" music (in the first session), "sad" music (in the second), and white noise (in the third); the order of sessions was counterbalanced. Blood flow parameters (peak blood flow, blood flow velocity, blood volume) were recorded by a Laser Doppler single-crystal system (LASERFLO BPM 403A) interfaced through a BIOPAC 100WS with AcqKnowledge software (v.3.5) and analyzed in off-line mode. Aversive visual stimulation itself decreased blood flow and velocity in all three sessions. Both "pleasant" and "sad" music led to the restoration of baseline levels in all blood flow parameters, while noise did not enhance the recovery process. Music produced significant post-stress changes in peak blood flow and blood flow velocity, but not in blood volume, and pleasant music had larger effects on post-stress recovery in these measures than white noise. This indicates that music exerted positive modulatory effects on facial vascular activity during recovery from the negative emotional state elicited by stressful slides. The results partially support the undoing hypothesis of Levenson (1994), which states that positive emotions may facilitate recovery from negative emotions.


The Influence of Background Color on Perceiving Facial Expression (배경색채가 얼굴 표정에서 전달되는 감성에 미치는 영향)

  • Son, Ho-Won;Choe, Da-Mi;Seok, Hyeon-Jeong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.51-54 / 2009
  • Because people and color are the most central elements in various media, emotional responses to facial expressions and to color stimuli have each been studied in depth in psychology. The purpose of this study is to investigate the emotional response when facial expressions and colors interact as emotional stimuli; that is, we conducted an experiment on how the emotion perceived from a person's facial expression changes when the face is placed against a background color, and we suggest ways to apply the findings in media. In an experiment with 60 participants, images of six of Ekman's seven universal facial expressions were used (anger, fear, disgust, happiness, sadness, and surprise), excluding contempt. For the background colors, samples were drawn from four tone regions (light, vivid, dull, and dark) for each of the hues red, yellow, blue, and green, with five achromatic colors added. Participants rated the emotional expression shown in a total of 120 images (six facial expressions × 20 colors), with each participant rating 60 stimuli in random order. The measured data were classified by expression and showed that the emotional expression perceived from the face differed depending on the background color. In particular, based on prior research on emotional responses to color, when the emotion of the color conflicted with that of the facial expression, the emotional expression conveyed by the face was weakened, and this effect was more pronounced for negative facial expressions. This phenomenon appeared for both hue and tone, and it can be applied in practice in advertising and visual design.
