• Title/Summary/Keyword: Facial Emotion Expression

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye; Park, Byoung-Jun; Kim, Sang-Hyeob; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotions from physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in the experiment. Three emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The signals were analyzed over 30-second windows from the baseline and the emotional state, and 27 features were extracted. Emotion classification was performed with DFA (discriminant function analysis, SPSS 15.0) using difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the emotion classification accuracy was 84.7%. Conclusion: The study shows that emotions can be classified from multiple physiological signals. However, future work is needed to add signals from other modalities, such as facial expression, facial temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of other classification algorithms. Application: These findings can help emotion recognition studies recognize a wider range of human emotions from physiological signals and can be applied to emotion recognition in human-computer interaction systems. They can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
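The pipeline this abstract describes, baseline-subtracted physiological features fed to a discriminant analysis, can be sketched roughly as follows. This is a minimal sketch under assumptions: the feature values are simulated, the feature names and array shapes are illustrative, and scikit-learn's LinearDiscriminantAnalysis stands in for the SPSS DFA used in the study.

```python
# Rough sketch of the described pipeline: baseline-corrected physiological
# features classified with discriminant analysis. Feature values, shapes,
# and the use of scikit-learn instead of SPSS DFA are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_subjects, n_features = 122, 27          # 27 features from EDA, SKT, PPG, ECG
rng = np.random.default_rng(0)

baseline = rng.normal(size=(n_subjects, n_features))   # 30 s baseline window
emotion = rng.normal(size=(n_subjects, n_features))     # 30 s emotional window
X = emotion - baseline                                   # difference values, as in the paper
y = rng.integers(0, 3, size=n_subjects)                  # 0=boredom, 1=pain, 2=surprise

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```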

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk; Suk, Ji-He; Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.12 no.3 / pp.267-278 / 2009
  • Embodied agents currently attract a great deal of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that people can recognize and distinguish various emotions expressed by an embodied agent, and many studies have found that people respond to simulated emotions much as they do to human emotions. This study investigates how mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice) are interpreted: whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions based on both channels rather than on one channel alone. In addition, facial expression (happy face vs. sad face) modulates the relative influence of the two channels: the audio channel has more influence on emotion interpretation when the facial expression is happy. Participants were also able to perceive emotions that were not expressed by either the face or the voice in the mismatched conditions, which suggests that varied and subtle emotions could be expressed by an embodied agent using only a few basic emotions.
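A 4 × 4 within-subjects design like the one described above is typically analyzed with a repeated-measures ANOVA. The sketch below is an assumption-laden illustration, not the authors' analysis: the ratings are simulated, the sample size and column names are invented, and statsmodels' AnovaRM is used as a stand-in.

```python
# Minimal sketch of analyzing a 4 (face) x 4 (voice) within-subjects design;
# the simulated ratings and column names are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

emotions = ["happy", "sad", "warm", "cold"]
rng = np.random.default_rng(1)

rows = []
for subject in range(30):                     # hypothetical sample size
    for face in emotions:
        for voice in emotions:
            rows.append({"subject": subject, "face": face, "voice": voice,
                         "rating": rng.normal()})   # perceived-emotion rating
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with face and voice as within-subject factors.
res = AnovaRM(df, depvar="rating", subject="subject", within=["face", "voice"]).fit()
print(res)
```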

Attentional Bias to Emotional Stimuli and Effects of Anxiety on the Bias in Neurotypical Adults and Adolescents

  • Mihee Kim; Jejoong Kim; So-Yeon Kim
    • Science of Emotion and Sensibility / v.25 no.4 / pp.107-118 / 2022
  • Humans can rapidly detect and deal with dangerous elements in their environment, a capacity that generally manifests as an attentional bias toward threat. Past studies have reported that this attentional bias is affected by anxiety level; other studies, however, have argued that children and adolescents show attentional bias to threatening stimuli regardless of anxiety level. Few studies have directly compared the two age groups in terms of attentional bias to threat, and most previous studies have focused on attentional capture and the early stages of attention without investigating subsequent attentional holding by the stimuli. In this study, we investigated both attentional bias patterns (attentional capture and holding) with respect to negative emotional stimuli in neurotypical adults and adolescents, and we also examined the effects of anxiety level on these biases. In adults, the abrupt onset of a distractor delayed attentional capture of the target regardless of distractor type (angry or neutral face), while it had no effect on attentional holding. In adolescents, on the other hand, only the angry-face distractor resulted in longer reaction times for detecting the target. Regarding anxiety, state anxiety showed a significant positive correlation with attentional capture by a face distractor in adults but not in adolescents. Overall, this is the first study to investigate developmental tendencies in attentional bias to negative facial emotion in both adults and adolescents, providing novel evidence on attentional bias to threat at different ages. The results can help clarify attentional mechanisms in people with emotion-related developmental disorders as well as in typical development.
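One common way to quantify the attentional capture and anxiety relationship reported above is to compute a reaction-time cost for distractor-present trials and correlate it with state anxiety. The sketch below is purely illustrative: the reaction times, sample size, and variable names are assumptions, not the study's data or analysis code.

```python
# Sketch of an attentional-capture index (RT slowdown caused by a face
# distractor) correlated with state anxiety; all values are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 40                                                # hypothetical number of adults

rt_no_distractor = rng.normal(550, 40, n)             # ms, target detection without distractor
rt_angry_distractor = rng.normal(590, 45, n)          # ms, with an angry-face distractor

capture_cost = rt_angry_distractor - rt_no_distractor # attentional capture index
state_anxiety = rng.normal(40, 10, n)                 # e.g., state-anxiety questionnaire scores

r, p = pearsonr(state_anxiety, capture_cost)
print(f"r = {r:.2f}, p = {p:.3f}")
```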

Effect of Stereotype Threat on Spatial Working Memory and Emotion Recognition in Korean elderly (노화에 대한 고정관념 위협이 노인의 공간 작업기억 및 정서인식에 미치는 영향)

  • Lee, Kyoung eun; Lee, Wanjeoung; Choi, Kee-hong; Kim, Hyun Taek; Choi, June-seek
    • 한국노년학 / v.36 no.4 / pp.1109-1124 / 2016
  • We examined the effect of stereotype threat (STT) on spatial working memory and facial emotion recognition in Korean elderly adults, and investigated the role of an expected moderator, self-perception of aging. Seventeen seniors (7 male) completed basic cognitive tests, including the K-WMS-IV and MMSE, and answered self-report questionnaires on self-perception of aging, aging anxiety, attitude toward aging, and age identity on their first visit. On the second visit, the threat group was exposed to a negative stereotype by reading a script detailing aging-related cognitive decline, while a control group read neutral content. Following the exposure, participants completed a spatial working memory task (Corsi block-tapping task) and an emotion recognition task (facial expression identification). Seniors exposed to STT showed significantly lower performance on the emotion recognition task (p < .05), especially for the more difficult facial stimuli. In addition, there was a significant interaction between STT and self-perception of aging (p < .05): those with a positive self-perception of aging showed no impairment on the emotion recognition task or the difficult spatial working memory trials under STT, whereas those with a negative self-perception of aging showed impaired performance under STT. Taken together, the current study suggests that exposure to STT can negatively influence the cognitive and emotional functioning of the elderly, and that a positive self-perception of aging may protect against the underperformance caused by STT.

Emotion Based Gesture Animation Generation Mobile System (감정 기반 모바일 손제스쳐 애니메이션 제작 시스템)

  • Lee, Jung-Suk; Byun, Hae-Won
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.129-134 / 2009
  • Recently, the percentage of people who use SMS services has been increasing. However, it is difficult to express complex personal emotions with the text and emoticons of existing SMS services. This paper focuses on that problem and makes practical use of character animation to convey emotion and nuance accurately and engagingly. It proposes an emotion-based gesture animation generation system that uses a character's facial expressions and gestures to deliver emotion more vividly and clearly than speech alone. Michel [1] analyzed interview videos of a person whose gesturing style was to be animated and proposed a gesture generation graph for stylized gesture animation. In this paper, extending Michel's [1] research, we analyze and abstract the emotional gestures of Disney animation characters and model these gestures in 3D. To express a person's emotion, we propose an emotion gesture generation graph that incorporates an emotion flow graph representing emotional transitions as probabilities. We investigated user reactions to assess the appropriateness of the proposed system.
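As a rough illustration of the idea of an emotion-flow-driven gesture generation graph, the sketch below transitions between emotions according to assumed probabilities and picks a gesture clip for each step. The emotion set, transition table, and gesture names are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of an emotion-flow gesture graph: emotions transition with
# given probabilities, and each emotion selects a gesture clip for the character.
import random

TRANSITIONS = {                        # P(next emotion | current emotion), assumed values
    "joy": {"joy": 0.6, "surprise": 0.3, "sadness": 0.1},
    "surprise": {"joy": 0.4, "surprise": 0.3, "anger": 0.3},
    "sadness": {"sadness": 0.7, "joy": 0.2, "anger": 0.1},
    "anger": {"anger": 0.5, "sadness": 0.3, "surprise": 0.2},
}
GESTURES = {                           # hypothetical candidate gesture clips per emotion
    "joy": ["clap", "wave_both_hands"],
    "surprise": ["hands_to_face", "step_back"],
    "sadness": ["head_down", "slow_shrug"],
    "anger": ["fist_shake", "arms_crossed"],
}

def generate_sequence(start="joy", length=5, seed=None):
    rng = random.Random(seed)
    emotion, sequence = start, []
    for _ in range(length):
        sequence.append((emotion, rng.choice(GESTURES[emotion])))   # pick a clip
        nxt = TRANSITIONS[emotion]
        emotion = rng.choices(list(nxt), weights=list(nxt.values()))[0]  # next emotion
    return sequence

print(generate_sequence(seed=3))
```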

Intelligent Countenance Robot, Humanoid ICHR (지능형 표정로봇, 휴머노이드 ICHR)

  • Byun, Sang-Zoon
    • Proceedings of the KIEE Conference / 2006.10b / pp.175-180 / 2006
  • In this paper, we develop a humanoid robot that can express emotion in response to human actions. To interact with humans, the robot has several capabilities for expressing its emotions: verbal communication through voice/image recognition, motion tracking, and facial expression using fourteen servo motors. The proposed humanoid robot system consists of a control board based on the AVR90S8535 to drive the servo motors, a frame equipped with the fourteen servo motors and two CCD cameras, and a personal computer for monitoring its operation. The results of this research show that the intelligent emotional humanoid robot is intuitive and friendly, so that humans can interact with it easily.
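A facial expression on such a robot ultimately reduces to target angles for the fourteen servo channels. The sketch below is a hypothetical, highly simplified illustration of that mapping; the channel assignments, angles, and byte encoding are assumptions and do not reflect the robot's actual control board protocol.

```python
# Hypothetical sketch: each expression is a table of servo angles that would
# be sent to a control board. Channel numbers, angles, and the encoding are
# assumptions for illustration only.
EXPRESSIONS = {
    # channel index -> target angle in degrees (14 channels total)
    "smile": {0: 40, 1: 40, 6: 70, 7: 70, 12: 90},
    "surprise": {2: 80, 3: 80, 6: 30, 7: 30, 13: 60},
    "neutral": {ch: 45 for ch in range(14)},
}

def frame_for(expression: str) -> bytes:
    """Encode one expression as a simple channel/angle byte stream."""
    pairs = EXPRESSIONS[expression]
    return b"".join(bytes([channel, angle]) for channel, angle in sorted(pairs.items()))

# In a real system the frame would be written to the servo board (e.g. over a
# serial port); here we just inspect the encoded bytes.
print(frame_for("smile").hex(" "))
```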

Mapping facial expression onto internal states (얼굴표정에 의한 내적상태 추정)

  • 한재현; 정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1998.04a / pp.118-123 / 1998
  • As basic data for building a model of the relationship between facial expressions and internal states, we examined the correspondence between the two. We found that two internal states separated by the minimum psychologically meaningful distance correspond one-to-one to mutually distinguishable facial expressions. A multiple regression analysis carried out to identify the structure of the relationship between facial-expression dimension values and internal-state dimension values showed that the pleasure-displeasure state is sensitively reflected in the face by mouth width, and the arousal-sleep state by the degree to which the eyes and mouth are open. The twelve facial-expression dimensions explained roughly 40% of the variance along the internal-state dimensions. That a linear model has this much predictive power suggests that a relatively simple mathematical correspondence exists between these two sets of variables.
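The reported mapping, valence tracked mainly by mouth width and arousal by eye and mouth openness, can be illustrated with a small linear-regression sketch. The data below are simulated and the feature names and coefficients are assumptions; the study itself regressed internal-state dimensions on twelve facial-expression dimensions.

```python
# Sketch of a linear mapping from facial-expression features to internal-state
# dimensions; simulated data and assumed coefficients, for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 200

mouth_width = rng.uniform(0, 1, n)
eye_openness = rng.uniform(0, 1, n)
mouth_open = rng.uniform(0, 1, n)
X = np.column_stack([mouth_width, eye_openness, mouth_open])

pleasure = 0.8 * mouth_width + rng.normal(0, 0.3, n)                     # valence dimension
arousal = 0.5 * eye_openness + 0.4 * mouth_open + rng.normal(0, 0.3, n)  # arousal dimension

for name, y in [("pleasure-displeasure", pleasure), ("arousal-sleep", arousal)]:
    model = LinearRegression().fit(X, y)
    print(name, "R^2 =", round(model.score(X, y), 2), "coefs =", model.coef_.round(2))
```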

3D Facial Modeling and Expression Synthesis using muscle-based model for MPEG-4 SNHC (MPEG-4 SNHC을 위한 3차원 얼굴 모델링 및 근육 모델을 이용한 표정합성)

  • 김선욱; 심연숙; 변혜란; 정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.368-372 / 1999
  • MPEG-4, a newly standardized multimedia video format, covers not only natural video and audio but also synthetic graphics and sound. In particular, it includes the FDP and FAP specifications for modeling and animating avatars in video conferencing and virtual environments. Using the FDP and FAP defined in MPEG-4, this paper employs a more refined generic model to create face models that can serve as natural and realistic avatars for video conferencing and virtual environments. On top of this, a muscle-based model is applied: a muscle editor was built so that muscles can be placed at arbitrary positions for more precise expression generation, and an animation editing program was implemented to perform facial expression animation.
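To give a sense of what a muscle-based deformation does, the sketch below applies a single simplified linear muscle (in the spirit of Waters-style models) to a few mesh vertices. The falloff function, muscle placement, and mesh are illustrative assumptions, not the paper's implementation or the MPEG-4 FDP/FAP encoding.

```python
# Simplified sketch of one linear facial muscle acting on mesh vertices:
# vertices near the muscle are pulled toward its attachment point, with the
# pull fading out linearly with distance. All parameters are assumptions.
import numpy as np

def apply_linear_muscle(vertices, head, tail, contraction, radius):
    """Pull vertices near the muscle toward its attachment point `head`.

    vertices: (N, 3) array of mesh vertex positions
    head, tail: 3-vectors for the muscle attachment and insertion points
    contraction: scalar in [0, 1], how strongly the muscle contracts
    radius: distance over which the muscle's influence falls off to zero
    """
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    out = np.array(vertices, dtype=float)
    direction = head - out                              # pull direction toward the attachment
    dist = np.linalg.norm(out - tail, axis=1)           # distance from the muscle insertion
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)    # linear falloff with distance
    out += contraction * falloff[:, None] * direction * 0.3
    return out

# Tiny usage example with a few vertices around a hypothetical mouth corner.
verts = np.array([[0.3, -0.2, 0.0], [0.35, -0.25, 0.0], [0.0, 0.5, 0.0]])
smiled = apply_linear_muscle(verts, head=[0.5, 0.1, 0.0], tail=[0.3, -0.3, 0.0],
                             contraction=0.8, radius=0.4)
print(smiled.round(3))
```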

A DB for facial expression and its user-interface (얼굴표정 DB 및 사용자 인터페이스 개발)

  • 한재현; 문수종; 김진관; 김영아; 홍상욱; 심연숙; 반세범; 변혜란; 오경자
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.373-378 / 1999
  • A large-scale facial expression database was constructed to provide basic data for research on faces and facial expressions and to serve as a guideline for designing actual expressions. The database stores about 1,500 images of natural and varied expressions from 24 actors, collected in several ways. For each expression, internal-state ratings from multiple raters were included, covering both the categorical and the dimensional models of internal state, and the rating agreement for each photograph was recorded for reference. So that the data can be used in expression recognition and synthesis systems, the coordinates of 39 vertices based on the MPEG-4 FAP specification, measured when each expression was registered to a standard Korean face model, were stored together with contextual information about how the expression was captured. A user interface was also developed so that users can easily search the entire image collection using the limited information available to them.
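A record in such a database might bundle the rating information, capture context, and the 39 FAP-based vertex coordinates per photograph. The sketch below shows one possible record layout; all field names and example values are assumptions, not the actual schema of the DB described above.

```python
# Illustrative sketch of a per-photograph record for an expression database:
# categorical and dimensional ratings, rater agreement, capture context, and
# 39 FAP-based vertex coordinates. Field names and values are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ExpressionRecord:
    actor_id: int                               # one of the 24 actors
    image_file: str
    category_rating: str                        # categorical internal-state label
    valence: float                              # dimensional rating: pleasure-displeasure
    arousal: float                              # dimensional rating: arousal-sleep
    rater_agreement: float                      # proportion of raters agreeing
    capture_context: str                        # how the expression was elicited
    fap_vertices: List[Tuple[float, float]] = field(default_factory=list)  # 39 (x, y) points

record = ExpressionRecord(
    actor_id=3, image_file="actor03_smile_012.jpg",
    category_rating="happiness", valence=0.7, arousal=0.4,
    rater_agreement=0.82, capture_context="posed, instruction-based",
    fap_vertices=[(0.0, 0.0)] * 39,
)
print(record.image_file, record.rater_agreement)
```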

Emotion-based Real-time Facial Expression Matching Dialogue System for Virtual Human (감정에 기반한 가상인간의 대화 및 표정 실시간 생성 시스템 구현)

  • Kim, Kirak; Yeon, Heeyeon; Eun, Taeyoung; Jung, Moonryul
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.23-29 / 2022
  • Virtual humans are implemented with dedicated modeling tools like Unity 3D Engine in virtual space (virtual reality, mixed reality, metaverse, etc.). Various human modeling tools have been introduced to implement virtual human-like appearance, voice, expression, and behavior similar to real people, and virtual humans implemented via these tools can communicate with users to some extent. However, most of the virtual humans so far have stayed unimodal using only text or speech. As AI technologies advance, the outdated machine-centered dialogue system is now changing to a human-centered, natural multi-modal system. By using several pre-trained networks, we implemented an emotion-based multi-modal dialogue system, which generates human-like utterances and displays appropriate facial expressions in real-time.