• Title/Summary/Keyword: Facial emotion

Facial Data Visualization for Improved Deep Learning Based Emotion Recognition

  • Lee, Seung Ho
    • Journal of Information Science Theory and Practice, v.7 no.2, pp.32-39, 2019
  • A convolutional neural network (CNN) has been widely used in facial expression recognition (FER) because it can automatically learn discriminative appearance features from an expression image. To make full use of this discriminating capability, this paper suggests a simple but effective method for CNN-based FER. Specifically, instead of an original expression image that contains facial appearance only, the expression image with facial geometry visualization is used as input to the CNN. In this way, geometric and appearance features can be learned simultaneously, making the CNN more discriminative for FER. A simple CNN extension is also presented, aiming to utilize geometric expression change derived from an expression image sequence. Experimental results on two public datasets (CK+ and MMI) show that a CNN using facial geometry visualization clearly outperforms the conventional CNN using facial appearance only.
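
As a rough illustration of this idea (not the authors' pipeline), the sketch below overlays landmark geometry on a face crop before feeding it to a toy CNN. It assumes landmarks are already available from some external face-landmark detector; the drawing scheme and network are placeholders.

```python
# Sketch only: draw landmark geometry onto a face crop, then feed the combined
# image to a small CNN so geometric and appearance cues are learned together.
# Landmarks are assumed to come from any external face-landmark detector.
import cv2
import numpy as np
import torch
import torch.nn as nn

def visualize_geometry(gray_face, landmarks):
    """Overlay landmark dots and a simple polyline on a grayscale face crop."""
    canvas = gray_face.copy()
    pts = landmarks.astype(int)
    for x, y in pts:
        cv2.circle(canvas, (int(x), int(y)), 1, 255, -1)
    for k in range(len(pts) - 1):  # crude polyline; real pipelines connect facial parts
        cv2.line(canvas, (int(pts[k][0]), int(pts[k][1])),
                 (int(pts[k + 1][0]), int(pts[k + 1][1])), 255, 1)
    return canvas

class SmallFERNet(nn.Module):
    """Toy classifier standing in for the paper's CNN."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

face = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # stand-in face crop
marks = np.random.rand(68, 2) * 63                          # stand-in 68 landmarks
x = torch.from_numpy(visualize_geometry(face, marks)).float()[None, None] / 255.0
logits = SmallFERNet()(x)  # geometry + appearance learned in one pass
```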

An Exploratory Investigation on Visual Cues for Emotional Indexing of Image (이미지 감정색인을 위한 시각적 요인 분석에 관한 탐색적 연구)

  • Chung, SunYoung;Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science, v.48 no.1, pp.53-73, 2014
  • Given that emotion-based computing environments have grown recently, it is necessary to focus on emotional access to and use of multimedia resources, including images. The purpose of this study is to identify the visual cues for emotion in images. To achieve this, the study selected five basic emotions (love, happiness, sadness, fear, and anger) and interviewed twenty participants about the visual cues for each emotion. A total of 620 visual cues mentioned by participants were collected from the interviews and coded into five categories and 18 sub-categories. Findings showed that facial expressions, actions/behaviors, and syntactic features were significant for perceiving a specific emotion in an image, and each emotion demonstrated distinctive visual-cue characteristics. The emotion of love showed a stronger relation with visual cues such as actions and behaviors, while happiness was substantially related to facial expressions. In addition, sadness was perceived primarily through actions and behaviors, and fear was perceived considerably through facial expressions. Anger was highly related to syntactic features such as lines, shapes, and sizes. The findings imply that emotional indexing can be effective when content-based features are considered in combination with concept-based features.
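
The coding step of such a study can be pictured as a simple cue-by-emotion tabulation. The snippet below is purely illustrative, with made-up rows and hypothetical category labels, not the study's coded data:

```python
# Illustrative only: tabulate coded interview data (cue category x emotion)
# in the spirit of the study's analysis. Labels and rows are hypothetical.
import pandas as pd

coded = pd.DataFrame({
    "emotion": ["love", "happiness", "sadness", "fear", "anger", "anger"],
    "cue":     ["action/behavior", "facial expression", "action/behavior",
                "facial expression", "line/shape", "size"],
})
print(pd.crosstab(coded["emotion"], coded["cue"]))  # cue frequency per emotion
```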

Hybrid Facial Representations for Emotion Recognition

  • Yun, Woo-Han;Kim, DoHyung;Park, Chankyu;Kim, Jaehong
    • ETRI Journal, v.35 no.6, pp.1021-1028, 2013
  • Automatic facial expression recognition is a widely studied problem in computer vision and human-robot interaction. There have been a range of studies on facial descriptors for facial expression recognition, and some prominent descriptors were presented in the first facial expression recognition and analysis challenge (FERA2011). In that competition, the Local Gabor Binary Pattern Histogram Sequence descriptor showed the most powerful description capability. In this paper, we introduce hybrid facial representations for facial expression recognition, which have more powerful description capability with lower dimensionality. Our descriptors consist of a block-based descriptor and a pixel-based descriptor: the block-based descriptor represents micro-orientation and micro-geometric structure information, and the pixel-based descriptor represents texture information. We validate our descriptors on two public databases, and the results show that they perform well with a relatively low dimensionality.
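
In the same spirit, though not the authors' exact descriptors, a hybrid feature can be sketched by concatenating block-wise histograms of a texture map with the texture map itself, here using plain uniform LBP from scikit-image:

```python
# Sketch of a hybrid block-based + pixel-based descriptor. This illustrates the
# general structure with plain LBP, not the paper's specific descriptors.
import numpy as np
from skimage.feature import local_binary_pattern

def hybrid_descriptor(face, grid=4, bins=10):
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")  # pixel-based texture map
    h, w = lbp.shape
    block_hists = []
    for i in range(grid):                 # block-based part: per-block histograms
        for j in range(grid):
            block = lbp[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            block_hists.append(hist / max(hist.sum(), 1))
    pixel_part = lbp.flatten() / max(lbp.max(), 1)                # pixel-based part
    return np.concatenate([np.concatenate(block_hists), pixel_part])

face = np.random.randint(0, 255, (64, 64), dtype=np.uint8)  # stand-in face crop
desc = hybrid_descriptor(face)  # feature vector for a downstream classifier
```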

Trust and facial information in online negotiation (온라인 협상에서 얼굴 정보에 따른 신뢰감 비교)

  • Ji, Jae-Yeong;Han, Gwang-Hui
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2007.05a, pp.153-156, 2007
  • This study examines whether facial information affects the level of trust in online negotiation. Comparing conditions in which the negotiation proceeded via synchronous text communication or voice against conditions in which facial information was added, the level of trust was higher when facial information was added.

Relationship between the Level of Depression and Facial EMG Responses Induced by Humor among Children (유머에 의해 유발된 아동의 안면근육반응과 우울 수준과의 관계)

  • Jang, Eun-Hye;Lee, Ju-Ok;Sohn, Sun-Ju;Lee, Young-Chang;Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.13 no.1, pp.33-40, 2010
  • This study examines the relationship between the level of depression and facial EMG responses under a humor condition. Forty-three children (age range 22-49 years) participated in the study. The Korean Personality Inventory for Children (KPI-C) was used to measure the level of depression. While the children were presented with an audio-visual film clip inducing humor, facial EMG was measured (bilateral corrugator and orbicularis muscles). A baseline state was measured for 60 seconds before the presentation of the stimulus; the emotional state lasted 120 seconds. Participants were asked to report the intensity of their experienced emotion. The emotion assessment showed 95.3% appropriateness and an intensity of 3.81 on a 5-point Likert scale. Facial EMG showed a significant increase while participants experienced humor compared to the baseline state. Additionally, there was a negative correlation between right corrugator responses and the level of depression: the more depressed the children were, the less facial EMG activity they showed while experiencing humor.
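
The reported negative correlation can be pictured with a standard Pearson test. The data below are random stand-ins, not the study's measurements:

```python
# Illustrative analysis in the spirit of the study: correlate each child's
# corrugator EMG change (humor minus baseline) with a depression score.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
depression = rng.normal(50, 10, 43)                      # stand-in KPI-C scores
emg_change = -0.05 * depression + rng.normal(0, 1, 43)   # stand-in EMG increases

r, p = pearsonr(depression, emg_change)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors the reported pattern
```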

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein;Kang, Jeonghun;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society, v.17 no.3, pp.264-272, 2022
  • Human-robot cooperative tasks are increasingly required in our daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies suggest that a robot learn a task by receiving feedback from an experienced human trainer during training. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To overcome these limitations, this paper studies training strategies in which a robot learns a table-balancing task interactively using deep reinforcement learning with human facial expression feedback. In the proposed system, the robot learns the cooperative table-balancing task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression feedback. In the experiments, the proposed system achieved an optimal policy convergence rate of up to 83.3% in training and a success rate of up to 91.6% in testing, showing improved performance compared to the model without human facial expression feedback.
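
The core mechanism, shaping the reward with a human signal inside a DQN update, can be sketched as follows. The network size, the feedback weight beta, and the scalar facial-feedback signal are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of reward shaping with facial feedback in a DQN update.
# Only the idea is shown: the human's facial expression, mapped to a scalar
# in [-1, 1], is added to the environment reward before the TD target.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, beta = 0.99, 0.5  # discount factor; weight of facial feedback (assumed)

def dqn_step(s, a, r_env, facial_feedback, s_next, done):
    """One DQN update where facial feedback (-1..1) shapes the reward."""
    r = r_env + beta * facial_feedback  # e.g., a smile adds positive shaping
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(-1).values * (1 - done)
    q = q_net(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    loss = nn.functional.mse_loss(q, target)
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()

s = torch.randn(1, 4); a = torch.tensor([1])
dqn_step(s, a, r_env=0.0, facial_feedback=1.0, s_next=torch.randn(1, 4), done=0.0)
```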

The Implementation and Analysis of Facial Expression Customization for a Social Robot (소셜 로봇의 표정 커스터마이징 구현 및 분석)

  • Jiyeon Lee;Haeun Park;Temirlan Dzhoroev;Byounghern Kim;Hui Sung Lee
    • The Journal of Korea Robotics Society, v.18 no.2, pp.203-215, 2023
  • Social robots, which are mainly used by individuals, emphasize the importance of human-robot relationships (HRR) more than other types of robots. Emotional expression in robots is one of the key factors that imbue HRR with value, and emotions are mainly expressed through the face. However, because of cultural and preference differences, the desired robot facial expressions differ subtly depending on the user. We expected that a robot facial expression customization tool might mitigate such difficulties and consequently improve HRR. To test this, we created a robot facial expression customization tool and a prototype robot, and implemented an emotion engine suitable for generating robot facial expressions in a dynamic human-robot interaction setting. In our experiments, users agreed that a customized version of the robot has a more positive effect on HRR than a predefined version. Moreover, we suggest recommendations for future improvement of the robot facial expression customization process.
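
One plausible way to picture such a customization tool is as user-adjustable parameter vectors per emotion, blended at runtime. All parameter names below are invented for illustration; the paper's tool and emotion engine are not described in this listing:

```python
# Hypothetical sketch: a user tweaks per-emotion facial keyframes, and a
# simple engine blends them for smooth transitions. Parameter names are
# invented stand-ins for whatever the robot's face actuators expose.
import numpy as np

# Each expression = vector of face-motor parameters in [0, 1].
DEFAULT = {"happy": np.array([0.9, 0.8, 0.1]),  # [mouth_curve, eye_open, brow_down]
           "sad":   np.array([0.1, 0.4, 0.6])}

def customize(base, user_offsets):
    """Apply the user's preferred adjustments, clipped to the valid range."""
    return np.clip(base + user_offsets, 0.0, 1.0)

def blend(expr_a, expr_b, t):
    """Linear interpolation between two expressions for a transition frame."""
    return (1 - t) * expr_a + t * expr_b

my_happy = customize(DEFAULT["happy"], np.array([-0.2, 0.1, 0.0]))  # subtler smile
frame = blend(DEFAULT["sad"], my_happy, t=0.5)  # mid-transition face parameters
```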

The Impact of Gesture and Facial Expression on Learning Comprehension and Persona Effect of Pedagogical Agent (학습용 에이전트의 제스처와 얼굴표정이 학습이해도 및 의인화 효과에 미치는 영향)

  • Ryu, Jeeheon;Yu, Jeehee
    • Science of Emotion and Sensibility, v.16 no.3, pp.281-292, 2013
  • The purpose of this study was to identify the effect of gesture and facial expression on the persona effect. Fifty-six college students were recruited, and non-verbal communication skills were applied to a pedagogical agent through gesture (conversational vs. deictic) and facial expression. Conversational gesture may relate to the social interaction hypothesis of pedagogical agents, while deictic gesture may relate to the attentional guidance hypothesis; facial expression can be assumed to facilitate social interaction between the pedagogical agent and learners. Interestingly, the conversational gesture group showed a tendency to outperform the deictic gesture group, which may imply that social interaction theory bears on cognitive support as well as social interaction for learners. There was a significant interaction effect on engagement when both facial expression and conversational gesture were applied, implying that facial expression can facilitate the persona effect for engagement.

The Validation Study of Shaping Comfortable Environments Based on the PMV Index Using Facial Skin Temperature (안면 피부온도를 활용한 PMV 지표 기반 쾌적환경 조성의 타당성 연구)

  • Kim, Boseong;Min, Yoon-Ki;Shin, Esther;Kim, Jin-Ho
    • Science of Emotion and Sensibility, v.16 no.3, pp.311-318, 2013
  • This research examined whether indoor environments classified as comfortable or uncomfortable by the PMV (Predicted Mean Vote) index can be distinguished by facial skin temperature, one of the physiological indicators for humans. To do this, we separated a comfortable thermal environment from an uncomfortable thermal environment using the PMV value and measured facial skin temperature in both environments. The facial skin temperatures of occupants differed between the comfortable and uncomfortable indoor environments, suggesting that facial skin temperature could be used in shaping a comfortable indoor environment based on the PMV index. At the same time, the result raises questions about the validity of the PMV-based classification, because facial skin temperature was lower in the uncomfortable thermal environment than in the comfortable one.
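
The study's comparison can be pictured as a two-sample test on facial skin temperatures under PMV-comfortable versus PMV-uncomfortable conditions. The numbers below are made-up stand-ins chosen only to mirror the reported direction:

```python
# Illustrative check in the spirit of the study: compare facial skin
# temperature between PMV-comfortable and PMV-uncomfortable conditions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
temp_comfort = rng.normal(34.0, 0.4, 20)    # deg C, PMV near 0 (comfortable)
temp_uncomfort = rng.normal(33.4, 0.4, 20)  # lower, as the study reported

t, p = ttest_ind(temp_comfort, temp_uncomfort)
print(f"t = {t:.2f}, p = {p:.3f}")  # a significant difference supports using
                                    # facial skin temperature as a comfort cue
```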
