• Title/Abstract/Keyword: Facial emotion

Search Results: 312

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어)

  • Kim, Min-Gyu; Lee, Hui-Sung; Park, Jeong-Woo; Jo, Su-Hun; Chung, Myung-Jin
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.547-552 / 2008
  • Humans and robots will have a closer relationship in the future, and we can expect human-robot interaction to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of an emotion can be expressed with color and blinking, so that the result can be applied to LEDs. Color and emotion are clearly related; however, previous results are difficult to implement due to a lack of quantitative data. In this paper, we determined the color and blinking period for expressing the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). The scheme was implemented on an avatar, and the perceived intensities of the emotions were evaluated through a survey. We found that color and blinking helped to express the intensity of emotion for sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role; however, they may be improved by adjusting the color or blinking.

  • PDF
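The paper's core idea, driving an LED's color and blink period from an emotion label and its intensity, can be sketched as follows. The RGB values and blink periods below are illustrative placeholders, not the quantitative values the authors determined:

```python
# Hypothetical mapping of the six basic emotions to an LED color (RGB)
# and a base blink period in seconds; the paper's actual values differ.
EMOTION_LED = {
    "anger":     ((255, 0, 0),     0.2),
    "sadness":   ((0, 0, 255),     1.5),
    "disgust":   ((0, 128, 0),     1.0),
    "surprise":  ((255, 255, 0),   0.3),
    "happiness": ((255, 165, 0),   0.6),
    "fear":      ((128, 0, 128),   0.4),
}

def led_signal(emotion: str, intensity: float):
    """Scale brightness with intensity and shorten the blink period
    as the emotion becomes more intense (intensity in [0, 1])."""
    (r, g, b), period = EMOTION_LED[emotion]
    scale = max(0.0, min(intensity, 1.0))
    color = (int(r * scale), int(g * scale), int(b * scale))
    blink = period * (1.0 - 0.5 * scale)  # faster blinking at high intensity
    return color, blink
```

For example, `led_signal("anger", 1.0)` yields a fully saturated red with the shortest blink period, while lower intensities dim the color and slow the blinking.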

Exploration of deep learning facial motions recognition technology in college students' mental health (딥러닝의 얼굴 정서 식별 기술 활용-대학생의 심리 건강을 중심으로)

  • Li, Bo; Cho, Kyung-Duk
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.333-340 / 2022
  • The COVID-19 pandemic has made everyone anxious and forced people to keep their distance, and it is necessary to conduct collective assessment and screening of college students' mental health at the start of every academic year. This study trains a multi-layer perceptron neural network model to identify facial emotions with deep learning. After training, real pictures and videos were input for face detection; once the positions of the faces in the samples were detected, the emotions were classified, and the predicted emotional results were sent back and displayed on the pictures. The results show an accuracy of 93.2% on the test set and 95.57% in practice. The recognition rate is 95% for anger, 97% for disgust, 96% for happiness, 96% for fear, 97% for sadness, 95% for surprise, and 93% for neutral. Such efficient emotion recognition can provide objective data support for capturing negative emotions. A deep-learning emotion recognition system can work alongside traditional psychological activities to provide additional dimensions of psychological indicators for mental health.
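The classification stage described above, a multi-layer perceptron mapping a flattened face crop to probabilities over seven emotions, might look roughly like this minimal forward-pass sketch. The layer sizes and random weights are stand-ins for the trained model, and the face-detection step is omitted:

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "happiness", "fear",
            "sadness", "surprise", "neutral"]

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MLP:
    """Minimal forward pass of a multi-layer perceptron; the weights here
    are random stand-ins for the trained parameters in the paper."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
                       for m, n in zip(sizes[:-1], sizes[1:])]

    def predict(self, x):
        for i, (W, b) in enumerate(self.layers):
            x = x @ W + b
            if i < len(self.layers) - 1:
                x = np.maximum(x, 0.0)  # ReLU on hidden layers
        return softmax(x)               # class probabilities on the output

model = MLP([2304, 128, len(EMOTIONS)])  # e.g. a 48x48 face crop, flattened
probs = model.predict(np.zeros((1, 2304)))
label = EMOTIONS[int(probs.argmax())]
```

In practice the probabilities would come from weights fitted on labeled face images; this sketch only shows the shape of the computation.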

Effect Analysis of Data Imbalance for Emotion Recognition Based on Deep Learning (딥러닝기반 감정인식에서 데이터 불균형이 미치는 영향 분석)

  • Hajin Noh; Yujin Lim
    • KIPS Transactions on Computer and Communication Systems / v.12 no.8 / pp.235-242 / 2023
  • In recent years, as online counseling for infants and adolescents has increased, CNN-based deep learning models have been widely used as assistive tools for emotion recognition. However, since most emotion recognition models are trained mainly on adult data, there are performance restrictions when applying them to infants and adolescents. In this paper, to analyze these performance constraints, the characteristics of facial expressions used for emotion recognition in infants and adolescents, compared to adults, are analyzed through the LIME method, one of the XAI techniques. In addition, experiments are performed on male and female groups to analyze the characteristics of gender-specific facial expressions. As a result, we describe age-specific and gender-specific experimental results based on the data distribution of the CNN models' pre-training dataset and highlight the importance of balanced training data.
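The imbalance the paper analyzes can be made concrete with a small sketch: given label counts skewed toward adult samples, inverse-frequency class weights (one common rebalancing technique, not necessarily the authors' remedy) up-weight the under-represented groups:

```python
from collections import Counter

# Hypothetical label counts for an adult-skewed training set; the paper's
# datasets differ, but the imbalance pattern is the point of the sketch.
labels = (["adult_happy"] * 500 + ["adult_sad"] * 480 +
          ["child_happy"] * 60 + ["child_sad"] * 40)

counts = Counter(labels)
total = sum(counts.values())

# Inverse-frequency class weights: rarer classes get larger weights,
# so the loss is not dominated by the majority (adult) classes.
weights = {c: total / (len(counts) * n) for c, n in counts.items()}
```

Passing such weights into a loss function during training is one way to counter the distribution skew that the LIME analysis exposes.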

A Case Study of Emotion Expression Technologies for Emotional Characters (감성캐릭터의 감정표현 기술의 사례분석)

  • Ahn, Seong-Hye; Paek, Seon-Uck; Sung, Min-Young; Lee, Jun-Ha
    • The Journal of the Korea Contents Association / v.9 no.9 / pp.125-133 / 2009
  • As interactivity becomes one of the key success factors in today's digital communication environment, increasing emphasis is being placed on technologies for user-oriented emotion expression. We aim to develop enabling technologies for creating emotional characters that can express personalized emotions in real time. In this paper, we survey domestic and international research and case studies on emotional characters, with a focus on facial expression. The survey results are intended to serve as a guideline for future research directions.

The Effects of Priming Emotion among College Students at the Processes of Words Negativity Information (유발된 정서가 대학생의 부정적 어휘정보 처리에 미치는 효과)

  • Kim, Choong-Myung
    • Journal of Convergence for Information Technology / v.10 no.10 / pp.318-324 / 2020
  • The present study investigated the influence of emotion priming and the number of negation words on a sentential predicate reasoning task in groups with or without anxiety symptoms. Three types of primed emotion, two types of stimulus, and three negation-word conditions were used as within-subject variables. The subjects were instructed to make facial expressions matching the directions and were asked to choose the correct answer from the given examples. Mixed repeated-measures ANOVA analyses of reaction time showed main effects for emotion, stimulus, number of negation words, and anxiety level, and an interaction effect for the negation words × anxiety combination. These results presumably reflect that externally induced emotion affects language comprehension in such a way that anxiety can delay task processing speed regardless of emotion and stimulus type, whereas the number of negation words slows language processing only in the anxiety group. Implications and limitations for future work are discussed.

Design and Implementation of a Real-Time Emotional Avatar (실시간 감정 표현 아바타의 설계 및 구현)

  • Jung, Il-Hong; Cho, Sae-Hong
    • Journal of Digital Contents Society / v.7 no.4 / pp.235-243 / 2006
  • This paper presents an efficient method for expressing an avatar's emotion based on facial expression recognition. Instead of changing the avatar's facial expression manually, the new method changes it in real time based on recognition of facial patterns captured by a webcam. It provides a tool for recognizing parts of the images captured by the webcam. Because it uses a model-based approach, this tool recognizes images faster than approaches such as template-based or network-based ones. It extracts the shape of the user's lips after detecting eye information using the model-based approach. Based on changes in lip patterns, we define 6 avatar facial expressions using 13 standard lip patterns. The avatar changes its facial expression quickly by using pre-defined avatars with the corresponding expressions.

  • PDF
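The matching step, choosing one of 6 avatar expressions from 13 standard lip patterns, can be sketched as a nearest-template lookup. The templates and the pattern-to-expression mapping below are hypothetical, since the paper's actual patterns are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# 13 hypothetical standard lip-shape templates (8 landmark points each,
# flattened to 16 values); placeholders for the paper's real templates.
TEMPLATES = rng.standard_normal((13, 16))

# Hypothetical many-to-one mapping from the 13 lip patterns
# to the 6 avatar facial expressions (indices 0..5).
PATTERN_TO_EXPRESSION = [i % 6 for i in range(13)]

def match_expression(lip_shape):
    """Return the avatar expression whose lip template is nearest
    (Euclidean distance) to the observed lip shape."""
    dists = np.linalg.norm(TEMPLATES - lip_shape, axis=1)
    return PATTERN_TO_EXPRESSION[int(dists.argmin())]
```

Because the lookup is a single distance computation over 13 small vectors, it is cheap enough to run per frame, which fits the real-time requirement the paper describes.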

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju; Park, Mi-Sook; Park, Ji-Eun; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool for understanding human feelings. The focus was on carefully extracting the most widely used feeling words and categorizing them into groups of emotions according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean word frequency list developed by Yonsei University and various sorting processes, the study condensed the original 64,666 words into a final list of 504 emotion words. In the next step, a total of 80 social work students evaluated each word and classified it into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, whereas 72 reflected two emotions (i.e., the same word indicating two distinct emotions) and 6 showed three emotions. Of the 426 words representing a single emotion, 'sadness' was predominant, followed by 'anger' and 'happiness'. The 72 words that showed two emotions were mostly combinations of 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of a highly adaptable list of Korean feeling words that can be carefully combined with other emotion signals, such as facial expression, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.

  • PDF
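The categorization result (426 single-emotion words, 72 two-emotion words, 6 three-emotion words) amounts to grouping words by the number of distinct emotion labels raters assigned them, which can be sketched on a tiny hypothetical slice of the list:

```python
from collections import Counter

# A tiny hypothetical slice of the 504-word list; each word maps to the
# emotion categories raters assigned it (the real words and labels differ).
word_emotions = {
    "기쁘다":   {"happiness"},
    "슬프다":   {"sadness"},
    "얄밉다":   {"anger", "disgust"},
    "섬뜩하다": {"fear", "surprise"},
    "착잡하다": {"sadness", "anger", "pain"},
}

# Group words by how many distinct emotions they express,
# mirroring the paper's 426 / 72 / 6 breakdown.
by_label_count = Counter(len(labels) for labels in word_emotions.values())
```

On the full list, `by_label_count` would read `{1: 426, 2: 72, 3: 6}` according to the paper's findings.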

Effects of Gender, Age and Affective Dimensions on Facial Attractiveness (성별, 연령, 감성차원이 얼굴매력에 미치는 영향)

  • Cho, Kyung Ja; Jung, Woo Hyun; Lee, Seung Bok; Ku, Yea Shil
    • Science of Emotion and Sensibility / v.19 no.1 / pp.21-30 / 2016
  • The aim of this study was to determine whether the perception of facial attractiveness is influenced by gender, age, and affective dimensions (sharp/soft, babyish/mature). The participants (48 elementary school students, 44 middle school students, and 39 university students; 60 males and 71 females in total) were shown photos of sixty female faces and asked to rate each face on a nine-point scale in three dimensions (sharp/soft, babyish/mature, attractive/unattractive). Multi-level analysis showed that faces that were babyish and soft were perceived as more attractive regardless of gender and age, but differences were found in the strength of perceived facial attractiveness by gender and age. The two younger groups (elementary and middle school students) perceived the same photos of female faces as less attractive than the university students did, and male participants perceived the faces as less attractive than female participants did. Moreover, the study showed a significant difference between university students and elementary school students in the sharp/soft dimension, which was more influential for elementary school students than for university students. These results suggest that if a face looks babyish and soft, it will be perceived as attractive regardless of gender or age; however, the degree of perceived facial attractiveness depends on the participant's gender and age.

Power affects emotional awareness: The moderating role of emotional intelligence and goal-relevance (정서인식과 권력의 관계: 정서지능과 목표관련성의 조절효과 검증)

  • Lee, Suran; Lee, Won Pyo; Kim, Kaeun; Youm, Joon-Kyoo; Sohn, Young Woo
    • Science of Emotion and Sensibility / v.16 no.4 / pp.433-444 / 2013
  • The purpose of this study is to investigate the moderating role of emotional intelligence (EI) and goal-relevance in the relationship between power and emotional awareness. In Study 1, participants were asked to correctly identify the presented facial expressions of others after completing an EI survey; half of the participants were randomly assigned to the "power" condition and the other half to the "powerless" condition. In Study 2, the goal-relevance of the expressed emotion was manipulated. The results showed that EI moderated the relationship between power and emotion-decoding ability: while participants with high or low levels of EI were not significantly affected by the power condition, participants with a middle level of EI were strongly influenced by the effect of power. In addition, goal-relevance significantly moderated the relationship between power and emotional awareness. When correctly identifying others' emotions became important, and emotional awareness was thus strongly associated with participants' goals, those who had power performed better than before.

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.39-57 / 2012
  • Recently, due to the introduction of high-tech equipment, much attention has been concentrated on interactive exhibits that can double the exhibition effect through interaction with the audience. Such interactive exhibition spaces also make it possible to measure a variety of audience reactions. Among these, this research uses changes in facial features that can be collected in an interactive exhibition space. It develops an artificial neural network-based prediction model that predicts the audience's response by measuring the change in facial features when the audience is given a stimulus from a non-excited state. To represent the audience's emotional state, this research uses a valence-arousal model, and it suggests an overall framework composed of the following six steps. The first step collects data for modeling; the data were collected from people who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of the artificial neural network model. The fourth step uses a statistical technique to extract the independent variables that affect the dependent variable. The fifth step builds the artificial neural network model and performs learning using the train and test sets. Finally, the sixth step validates the prediction performance of the artificial neural network model using the validation set. The proposed model was compared with a statistical prediction model to see whether it performed better. Although the data set in this experiment contained much noise, the proposed model showed better results than the multiple regression analysis model.
    If this prediction model of audience reaction were used in a real exhibition, it could provide countermeasures and services appropriate to the audience's reaction while viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, action can be taken to increase it, for instance by recommending other preferred contents or by using light or sound to draw attention to the exhibit. In other words, when planning future exhibitions, it would be possible to design them to satisfy various audience preferences and to foster a personalized environment that helps visitors concentrate on the exhibits. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, using changes in facial expression alone is thought to be insufficient for extracting audience emotions; combining facial expression with other responses, such as sound or audience behavior, would yield better results.
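The fourth step of the framework, screening the 64 facial features for those that actually affect the dependent variable, could be sketched as a simple correlation filter on synthetic stand-in data. The threshold and the data-generating process here are assumptions for illustration, not the authors' exact statistical technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 100 audience samples x 64 facial features,
# with a valence target driven by a few of those features (steps 1-3).
X = rng.standard_normal((100, 64))
valence = X[:, 0] * 0.8 + X[:, 7] * 0.5 + rng.standard_normal(100) * 0.1

# Step 4 (sketch): keep only features whose absolute correlation with
# the target exceeds a threshold, one simple screening technique.
corr = np.array([abs(np.corrcoef(X[:, j], valence)[0, 1])
                 for j in range(64)])
selected = np.flatnonzero(corr > 0.3)
```

The surviving feature indices would then serve as the independent variables fed into the neural network in step five.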