• Title/Summary/Keyword: facial emotional expression

Effects of Working Memory Load on Negative Facial Emotion Processing: an ERP study (작업기억 부담이 부적 얼굴정서 처리에 미치는 영향: ERP 연구)

  • Park, Taejin;Kim, Junghee
    • Korean Journal of Cognitive Science
    • /
    • v.29 no.1
    • /
    • pp.39-59
    • /
    • 2018
  • To elucidate the effect of working memory (WM) load on negative facial emotion processing, we examined ERP components (P1 and N170) elicited by fearful and neutral expressions, each presented during a 0-back (low WM load) or 2-back (high WM load) task. During the N-back tasks, visual objects were presented one by one as targets, and each facial expression was presented as a passively observed stimulus in the intervals between targets. Behavioral results showed more accurate and faster responses in the low-load condition than in the high-load condition. Mean P1 amplitudes over the occipital region showed a significant WM load effect (high load > low load) but no significant facial emotion effect. Mean N170 amplitudes over the posterior occipito-temporal region showed a significant overall facial emotion effect (fearful > neutral); in detail, a significant facial emotion effect was observed only in the low-load condition over the left hemisphere, but in both the low-load and high-load conditions over the right hemisphere. In summary, the facial emotion effect on N170 amplitudes was modulated by WM load only over the left hemisphere. These results show that early emotional processing of negative facial expressions can be eliminated or reduced by high WM load over the left hemisphere but not over the right hemisphere, and they suggest right-hemispheric lateralization of negative facial emotion processing.
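
As a concrete illustration of the mean-amplitude analysis above, here is a minimal sketch assuming preprocessed, baseline-corrected epochs in a NumPy array; the electrode indices and time windows are illustrative placeholders, not the paper's actual parameters.

```python
import numpy as np

def mean_amplitude(epochs, times, t_start, t_end, channels):
    """Per-trial mean voltage over a time window and channel group.

    epochs : array (n_trials, n_channels, n_samples)
    times  : array (n_samples,), seconds relative to stimulus onset
    """
    window = (times >= t_start) & (times <= t_end)
    return epochs[:, channels, :][:, :, window].mean(axis=(1, 2))

times = np.linspace(-0.1, 0.5, 601)            # ~1 kHz sampling
epochs = np.random.randn(40, 64, 601) * 1e-6   # placeholder EEG data, volts
p1 = mean_amplitude(epochs, times, 0.08, 0.13, [60, 61, 62])  # e.g. occipital sites
n170 = mean_amplitude(epochs, times, 0.13, 0.20, [57, 58])    # e.g. occipito-temporal sites
print(p1.mean(), n170.mean())
```

Per-condition averages of these values would then enter the WM load by emotion comparison reported above.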

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk;Suk, Ji-He;Han, Kwang-Hee
    • Science of Emotion and Sensibility
    • /
    • v.12 no.3
    • /
    • pp.267-278
    • /
    • 2009
  • Embodied agents generate considerable interest today because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish between various emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions much as we respond to human emotions. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated measures design. The results suggest that people perceive emotions from both channels rather than from just one. Additionally, the facial expression (happy vs. sad) changes the relative influence of the two channels: the audio channel has more influence when the facial expression is happy. Participants could also feel emotions that were expressed by neither the face nor the voice in the mismatched conditions, so a variety of delicate emotions might be expressed with an embodied agent using only a few kinds of emotions.
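
A minimal sketch of how such a 4 × 4 within-subjects design could be analyzed with a two-way repeated measures ANOVA; the synthetic ratings, column names, and effect sizes are invented purely for illustration.

```python
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

levels = ["happy", "sad", "warm", "cold"]
rng = np.random.default_rng(0)
# One rating per subject per face x voice cell (long format).
rows = [{"subject": s, "face": f, "voice": v,
         "rating": rng.normal(loc=levels.index(f) * 0.3)}
        for s in range(20) for f, v in itertools.product(levels, levels)]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="rating", subject="subject",
              within=["face", "voice"]).fit()
print(res.anova_table)  # main effects of face and voice, plus their interaction
```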

Moderating Effects of User Gender and AI Voice on the Emotional Satisfaction of Users When Interacting with a Voice User Interface (음성 인터페이스와의 상호작용에서 AI 음성이 성별에 따른 사용자의 감성 만족도에 미치는 영향)

  • Shin, Jong-Gyu;Kang, Jun-Mo;Park, Yeong-Jin;Kim, Sang-Ho
    • Science of Emotion and Sensibility
    • /
    • v.25 no.3
    • /
    • pp.127-134
    • /
    • 2022
  • This study sought to identify the voice user interface (VUI) design parameters that evoke positive user emotions. Six VUI design parameters that could affect emotional user satisfaction were considered, and the moderating effects of user gender and the design parameters were analyzed to determine the appropriate conditions for user satisfaction when interacting with a VUI. An interactive VUI system that could modify the six parameters was implemented using the Wizard of Oz experimental method. User emotions were assessed from the users' facial expression data, which were then converted into a valence score. Frequency analysis and chi-square tests found statistically significant moderating effects of gender and AI voice. These results imply that it is beneficial to consider users' gender when designing voice-based interactions. As general guidelines for future VUI designs, adult/male/high-tone voices are recommended for male users and adult/female/mid-tone voices for female users. Future analyses that consider a wider range of human factors will be able to assess human-AI interactions from a UX perspective in finer detail.
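
The chi-square analysis mentioned above can be sketched as follows, assuming the valence scores have been binned into positive/negative categories; the counts are made up purely for illustration.

```python
from scipy.stats import chi2_contingency

#            positive  negative   (binned valence of facial-expression responses)
table = [[34, 16],    # male users
         [22, 28]]    # female users
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```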

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon;Lee, Duk-Yeon;Lee, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.537-544
    • /
    • 2020
  • This paper proposes an android robot head based on the facial action coding system (FACS) and the generation of emotional expressions with FACS. The term android robot refers to robots with a human-like appearance; these robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, so the motions of the human face were analyzed anatomically using FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles, and it is covered with artificial skin in order to make facial expressions. Spherical joints and springs were used to develop micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient wire-routing design. The developed android head has 30 DOFs and can express 13 basic emotions. The recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
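
To make the AU-to-actuator idea concrete, here is a minimal sketch of mapping FACS action-unit intensities onto servo angles; the AU set, servo channels, and angle values are hypothetical, not the paper's calibration.

```python
from dataclasses import dataclass

@dataclass
class Servo:
    channel: int
    neutral: float  # neutral position, degrees
    span: float     # travel for full AU intensity, degrees

# A few AUs mapped to the servos (artificial muscles) that realize them.
AU_SERVOS = {
    "AU1_inner_brow_raiser":  [Servo(0, 90, 25)],
    "AU12_lip_corner_puller": [Servo(7, 90, 30), Servo(8, 90, 30)],
    "AU26_jaw_drop":          [Servo(14, 90, 40)],
}

def pose_for_expression(aus):
    """Convert AU intensities (0..1) into target angles per servo channel."""
    angles = {}
    for au, intensity in aus.items():
        for servo in AU_SERVOS.get(au, []):
            angles[servo.channel] = servo.neutral + intensity * servo.span
    return angles

# e.g. a happy expression: strong AU12 with a slight jaw drop
print(pose_for_expression({"AU12_lip_corner_puller": 0.8, "AU26_jaw_drop": 0.2}))
```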

A Study on the Image of Male Flight Attendant on Customer Satisfaction

  • Kim, Min-Ji;Park, Hye-Yoon;Park, So-Yeon
    • Journal of Distribution Science
    • /
    • v.15 no.8
    • /
    • pp.37-46
    • /
    • 2017
  • Purpose - Many studies have shown the effects of the external images of female flight attendants on customer satisfaction. Recently, the perception of male flight attendants has become more important and more positive, and airlines are hiring a significant number of male flight attendants every year. Because of the lack of research on male flight attendants, however, this study investigated their images. Research design, data and methodology - Survey responses from 204 participants were collected and analyzed. Results - The study examined whether the image of the male flight attendant affects the cognitive and emotional perceptions of customers. The focus of the present study is the external image of the male flight attendant, divided into the following image components: hairstyle, body type, uniform, speech, and facial expression. Conclusions - The study sought to determine whether the image of the male flight attendant affects the emotional and cognitive images of an airline, and whether these images have a positive effect on customers' satisfaction with and loyalty to the airline, so that airlines can use the external image of the male flight attendant to reinforce their own image.

Intelligent Countenance Robot, Humanoid ICHR (지능형 표정로봇, 휴머노이드 ICHR)

  • Byun, Sang-Zoon
    • Proceedings of the KIEE Conference
    • /
    • 2006.10b
    • /
    • pp.175-180
    • /
    • 2006
  • In this paper, we develop a humanoid robot that can express emotion in response to human actions. To interact with humans, the developed robot has several abilities for expressing emotion: verbal communication through voice/image recognition, motion tracking, and facial expression using fourteen servo motors. The proposed humanoid robot system consists of a control board, designed around an AVR AT90S8535 microcontroller, to control the servo motors; a frame equipped with the fourteen servo motors and two CCD cameras; and a personal computer to monitor its operation. The results of this research illustrate that our intelligent emotional humanoid robot is intuitive and friendly, so humans can interact with it very easily.
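
The PC-to-control-board link in such an architecture might look like the following sketch; the 3-byte serial framing, port name, and baud rate are entirely hypothetical, not taken from the paper.

```python
import serial  # pyserial

def send_servo_angle(port, channel, angle_deg):
    """Send one servo target as a 3-byte frame: header 0xFF, channel, angle."""
    angle = max(0, min(180, int(angle_deg)))
    port.write(bytes([0xFF, channel & 0x0F, angle]))

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # hypothetical port
    send_servo_angle(port, channel=3, angle_deg=120)  # one of the facial servos
```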

Study on Laughter-arousing Factors of Character Designs of Kakaotalk Emoticons (카카오톡 이모티콘 캐릭터 디자인에서 웃음 유발 요인에 관한 연구)

  • Lee, Eunkoung;Choi, Myoungsik;Kim, Cheeyong
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.2
    • /
    • pp.253-259
    • /
    • 2015
  • The spread of smartphones enabled two-way communication over the Internet, and character emoticons added fun and comfort to communication among users. To identify laughter-arousing factors, a survey about the emoticon designs in the Kakaotalk Itemstore was conducted, targeting university students in their twenties. The interaction between 'user-preferred emoticons' and 'laughter-arousing emoticons' was analyzed: 90% of 'user-preferred emoticons' were 'humorous emoticons', and 84% of respondents answered that there is a relationship between the two, so a strong interaction between the 'rank of emoticons' and 'laughter-arousing emoticons' was derived. The factors of laughter-arousing emoticons were then analyzed by studying the emoticons ranked 1st to 8th in the Kakaotalk Itemstore. Two-heads-tall figures reminiscent of innocent children, emotional expression maximized by omitting or exaggerating the mouth, and softness conveyed by concave curves aroused laughter. In terms of action, intuitively understandable gestures were employed; two-heads-tall figures acting with bodies, hands, and feet that are small compared to their heads, and cute, familiar animals mimicking human motions, also aroused laughter. In facial expressions, the humorous articulation of sad, busy, or mad expressions enabled positive communion among users.

Comparison Between Core Affect Dimensional Structures of Different Ages using Representational Similarity Analysis (표상 유사성 분석을 이용한 연령별 얼굴 정서 차원 비교)

  • Jongwan Kim
    • Science of Emotion and Sensibility
    • /
    • v.26 no.1
    • /
    • pp.33-42
    • /
    • 2023
  • Previous emotion studies employing facial expressions have focused on differences between age groups in each emotion category. Kim (2021) instead compared representations of facial expressions in a lower-dimensional emotion space, but reported descriptive comparisons without statistical significance testing. This research used representational similarity analysis (Kriegeskorte et al., 2008) to directly compare empirical datasets from young, middle-aged, and old groups with conceptual models. In addition, individual differences multidimensional scaling (Carroll & Chang, 1970) was conducted to explore individual weights on the emotional dimensions for each age group. The results revealed that the old group was the least similar to the other age groups in the empirical datasets and under the valence model, and that the arousal dimension was weighted least in the old group compared with the other groups. This study directly tested the differences between the three age groups in terms of empirical datasets, conceptual models, and weights on the emotion dimensions.
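
A minimal sketch of the RSA comparison described above: correlate the off-diagonal entries of two representational dissimilarity matrices (RDMs). The RDMs here are random placeholders rather than the study's data.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the condensed (off-diagonal) RDM entries."""
    return spearmanr(squareform(rdm_a, checks=False),
                     squareform(rdm_b, checks=False)).correlation

rng = np.random.default_rng(1)
n = 10  # e.g. 10 facial-expression conditions
young = squareform(rng.random(n * (n - 1) // 2))  # placeholder symmetric RDMs
old   = squareform(rng.random(n * (n - 1) // 2))
print(rdm_similarity(young, old))
```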

The Effects of Emotional Interaction with Virtual Student on the User's Eye-fixation and Virtual Presence in the Teaching Simulation (가상현실 수업시뮬레이션에서 가상학생과의 정서적 상호작용이 사용자의 시선응시 및 가상실재감에 미치는 영향)

  • Ryu, Jeeheon;Kim, Kukhyeon
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.2
    • /
    • pp.581-593
    • /
    • 2020
  • The purpose of this study was to examine eye-fixation times on different parts of a student avatar, and virtual presence, across two scenarios in a virtual reality-based teaching simulation, in order to identify what users attend to while interacting with the avatar. By examining where a user gazes during a conversation with the avatar, we gain a better understanding of non-verbal communication. Forty-five college students (21 females and 24 males) participated in the experiment. They held verbal conversations with a student avatar in the teaching simulation under two scenarios, and their eye movements were collected through a head-mounted display with an embedded eye-tracking function. The results revealed significant differences in eye-fixation times: participants gazed longer at the facial expression than at any other area, and fixation time on the facial expression was longer than on gestures (F=3.75, p<.05). However, virtual presence did not differ significantly between the two scenarios. These results suggest that users focus on the face more than on gestures when they interact emotionally with a virtual character.
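
Aggregating raw gaze data into per-participant fixation time on each area of interest (AOI) could be sketched as follows; the column names, AOI labels, and durations are hypothetical.

```python
import pandas as pd

# One row per fixation: participant, AOI hit, fixation duration in ms.
gaze = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "aoi": ["face", "gesture", "face", "face", "body", "gesture"],
    "duration_ms": [220, 180, 340, 260, 120, 150],
})

fixation_time = (gaze.groupby(["participant", "aoi"])["duration_ms"]
                     .sum()
                     .unstack(fill_value=0))
print(fixation_time)  # per-participant totals per AOI, ready for ANOVA
```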

Emotion Based Gesture Animation Generation Mobile System (감정 기반 모바일 손제스쳐 애니메이션 제작 시스템)

  • Lee, Jung-Suk;Byun, Hae-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.129-134
    • /
    • 2009
  • Recently, the percentage of people who use SMS services has been increasing. However, it is difficult to express one's complicated emotions with the text and emoticons of existing SMS services. This paper focuses on that point and uses character animation to express emotion and nuance accurately and entertainingly. It proposes an emotion-based gesture animation generation system that uses a character's facial expressions and gestures to deliver emotion more vividly and clearly than speech alone. Michel [1] analyzed interview videos of a person whose gesturing style was to be animated and proposed a gesture generation graph for stylized gesture animation. In this paper, extending Michel's [1] research, we analyze and abstract the emotional gestures of Disney animation characters and model these gestures in 3D. To express a person's emotions, we propose an emotion gesture generation graph that incorporates an emotion flow graph, which represents the flow of emotion probabilistically. We investigated user reactions to assess the appropriateness of the proposed system.
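
The probabilistic emotion-flow idea can be illustrated with a small Markov-style graph; the emotion states and transition probabilities below are invented for the sketch, not taken from the paper.

```python
import random

EMOTION_FLOW = {
    "neutral": [("happy", 0.5), ("sad", 0.3), ("angry", 0.2)],
    "happy":   [("happy", 0.6), ("neutral", 0.4)],
    "sad":     [("sad", 0.5), ("neutral", 0.5)],
    "angry":   [("neutral", 0.7), ("angry", 0.3)],
}

def next_emotion(current):
    """Sample the next emotion according to outgoing edge probabilities."""
    states, weights = zip(*EMOTION_FLOW[current])
    return random.choices(states, weights=weights, k=1)[0]

# Generate an emotion sequence, to which gesture clips would be attached.
state, sequence = "neutral", []
for _ in range(6):
    state = next_emotion(state)
    sequence.append(state)
print(sequence)
```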
