• Title/Summary/Keyword: Facial emotion

Image Recognition based on Adaptive Deep Learning (적응적 딥러닝 학습 기반 영상 인식)

  • Kim, Jin-Woo; Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.18 no.1, pp.113-117, 2018
  • Human emotions are revealed through many channels: words, actions, facial expressions, attire, and so on. But people know how to hide their feelings, so emotion cannot easily be guessed from a single factor. We therefore focus on behavior and facial expression, which cannot be concealed without constant effort and training. In this paper, we propose an algorithm that estimates human emotion by combining two results, gradually learning human behavior and facial expression from little data through deep learning. Through this algorithm, we can grasp human emotions more comprehensively.
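
A minimal late-fusion sketch of the idea above, assuming the two results come from separately trained behavior and facial-expression classifiers combined by a weighted average of class probabilities; the weight, class list, and function names are hypothetical, since the abstract does not give the fusion rule:

```python
import numpy as np

# Hypothetical class list; the paper does not enumerate its emotion labels.
EMOTIONS = ["joy", "sadness", "anger", "surprise", "disgust", "fear"]

def fuse_predictions(p_behavior, p_face, w_behavior=0.5):
    """Weighted average of two probability vectors over the same classes."""
    p_behavior = np.asarray(p_behavior, dtype=float)
    p_face = np.asarray(p_face, dtype=float)
    fused = w_behavior * p_behavior + (1.0 - w_behavior) * p_face
    return fused / fused.sum()  # renormalize for numerical safety

# Example: the behavior model is unsure, the face model leans toward "joy".
p_b = [0.25, 0.15, 0.15, 0.15, 0.15, 0.15]
p_f = [0.60, 0.10, 0.10, 0.10, 0.05, 0.05]
print(EMOTIONS[int(np.argmax(fuse_predictions(p_b, p_f)))])  # -> joy
```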

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min; Tang, Jun
    • Journal of Information Processing Systems, v.17 no.4, pp.754-771, 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression are not the same in each modality, and different modalities influence the emotional state to different degrees. This paper therefore studies the fusion of the two most important modalities in emotion recognition (voice and visual expression) and proposes a bimodal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression features with the audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients (CCC) of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which is superior to several comparative algorithms.
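
The 0.729/0.718 figures above use the concordance correlation coefficient, a standard agreement metric, so a small reference implementation may help readers check such numbers; this is the textbook definition, not the authors' code:

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """Lin's CCC: agreement between predicted and gold continuous labels,
    as reported per dimension (arousal, valence) in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)

# CCC of a perfect prediction is 1.0; uncorrelated predictions approach 0.
print(concordance_correlation_coefficient([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))
```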

Recognition and Generation of Facial Expression for Human-Robot Interaction (로봇과 인간의 상호작용을 위한 얼굴 표정 인식 및 얼굴 표정 생성 기법)

  • Jung Sung-Uk; Kim Do-Yoon; Chung Myung-Jin; Kim Do-Hyoung
    • Journal of Institute of Control, Robotics and Systems, v.12 no.3, pp.255-263, 2006
  • Over the last decade, face analysis (e.g., face detection, face recognition, and facial expression recognition) has been a lively and expanding research field. As computer-animated agents and robots bring a social dimension to human-computer interaction, interest in this field is increasing rapidly. In this paper, we introduce an artificial emotion mimic system that can recognize human facial expressions and also generate the recognized facial expression. To recognize human facial expressions in real time, we propose a facial expression classification method performed by weak classifiers obtained using new rectangular feature types. In addition, we generate artificial facial expressions with the developed robotic system, guided by biological observation. Finally, experimental results on facial expression recognition and generation demonstrate the validity of our robotic system.
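
The "weak classifiers obtained using new rectangular feature types" follow the Viola-Jones tradition of Haar-like rectangle features computed over an integral image; the sketch below shows that standard mechanism (a two-rectangle feature and a threshold stump), not the paper's specific new feature types:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum costs O(1) after this O(N) pass."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Pixel sum of img[top:top+h, left:left+w] via four table lookups."""
    s = ii[top + h - 1, left + w - 1]
    if top > 0:
        s -= ii[top - 1, left + w - 1]
    if left > 0:
        s -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        s += ii[top - 1, left - 1]
    return s

def two_rect_feature(ii, top, left, h, w):
    """Classic Haar-like feature: left half minus right half (w must be even)."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)

def weak_classify(value, threshold, polarity=1):
    """Decision stump: the weak learner a boosting stage would combine."""
    return 1 if polarity * value < polarity * threshold else 0

img = np.random.rand(24, 24)  # a toy face-window-sized patch
print(weak_classify(two_rect_feature(integral_image(img), 4, 4, 8, 12), 0.0))
```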

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong; Lee, Ji-Hyang; Park, Hyun-Jin
    • International Journal of Advanced Smart Convergence, v.8 no.2, pp.8-17, 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I, platform analysis, and Phase II, classification of academic emotions. In Phase I, the results indicate that existing affective analysis platforms can be largely classified into four types according to their emotion-detection methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions appeared more frequently and lasted longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions through the indicators of facial expressions.

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon; Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering, v.2 no.2, pp.130-135, 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, action control must run in real time, and remote learning on a server reached over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested on the proposed processor. The processor supports 1-b to 16-b variable weight precision, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1-b and 16-b weight precision, respectively. The near-zero skipper also removes 36% of MAC operations and cuts energy consumption by 28% in facial emotion recognition tasks. Implemented in a 65 nm CMOS process, the proposed processor occupies a 1784 × 1784 µm² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps facial emotion recognition, respectively.
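
Near-zero skipping is described above only at the block level; as a software analogue, one can assume that MAC terms whose activation magnitude falls below a threshold are dropped (the threshold eps is an invented knob, not a figure from the paper):

```python
import numpy as np

def mac_with_near_zero_skip(weights, activations, eps=1e-3):
    """Software sketch of a near-zero skipper: multiply-accumulate terms with
    |activation| < eps are skipped, trading a tiny accuracy loss for fewer
    operations (the paper reports 36% fewer MACs, achieved in hardware)."""
    weights = np.asarray(weights, dtype=float)
    activations = np.asarray(activations, dtype=float)
    keep = np.abs(activations) >= eps      # mask of terms worth computing
    skipped = 1.0 - keep.mean()            # fraction of MACs avoided
    return float(weights[keep] @ activations[keep]), skipped

w = np.random.randn(1024)
a = np.random.randn(1024) * (np.random.rand(1024) > 0.4)  # sparse activations
result, ratio = mac_with_near_zero_skip(w, a)
print(f"dot={result:.3f}, skipped {ratio:.0%} of MACs")
```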

Analysis of children's Reaction in Facial Expression of Emotion (얼굴표정에서 나타나는 감정표현에 대한 어린이의 반응분석)

  • Yoo, Dong-Kwan
    • The Journal of the Korea Contents Association, v.13 no.12, pp.70-80, 2013
  • This study aims to provide basic material for research on facial expressions by analyzing children's visual recognition responses to facial expressions of emotion and surveying boys' and girls' verbal responses to individual emotional expressions. The subjects were 108 children aged 6-8 (55 boys, 53 girls) who were able to understand the presented research tool; data were collected in two rounds of response surveys through individual interviews and self-administered questionnaires. The research tool used in the questionnaires covered six expression types (joy, sadness, anger, surprise, disgust, and fear) chosen to elicit specific and accurate responses. In visual recognition, both boys and girls responded with high frequency to the facial expressions of joy, sadness, anger, and surprise, and with low frequency to fear and disgust. In verbal responses, heuristic responses (exploring, or responding to impressive parts reminiscent of the facial appearance) were frequent for all of joy, sadness, anger, surprise, disgust, and fear, while imaginary responses that created new stories from the facial expression appeared for surprise, disgust, and fear.

Local and Global Attention Fusion Network For Facial Emotion Recognition (얼굴 감정 인식을 위한 로컬 및 글로벌 어텐션 퓨전 네트워크)

  • Minh-Hai Tran; Tram-Tran Nguyen Quynh; Nhu-Tai Do; Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference, 2023.05a, pp.493-495, 2023
  • Deep learning methods and attention mechanisms have been incorporated to improve facial emotion recognition, which has recently attracted much attention. Fusion approaches have improved accuracy by combining various types of information. This research proposes a fusion network with self-attention and local attention mechanisms built on a multi-layer perceptron. The network extracts distinguishing characteristics from facial images using models pre-trained on the RAF-DB dataset. Our method outperforms other fusion methods on the RAF-DB dataset with impressive results.
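
A hypothetical PyTorch sketch of the pattern named above (a global self-attention summary and a locally attended summary, fused through an MLP head); all dimensions, layer choices, and names are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    """Fuse a global self-attention descriptor with a local-attention
    descriptor over per-region facial features, then classify with an MLP."""
    def __init__(self, dim=512, n_heads=8, n_classes=7):
        super().__init__()
        self.global_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.local_score = nn.Linear(dim, 1)   # one attention weight per region
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, n_classes))

    def forward(self, regions):                # regions: (B, N, dim)
        g, _ = self.global_attn(regions, regions, regions)
        g = g.mean(dim=1)                      # global descriptor (B, dim)
        w = torch.softmax(self.local_score(regions), dim=1)
        l = (w * regions).sum(dim=1)           # local descriptor (B, dim)
        return self.mlp(torch.cat([g, l], dim=-1))

# Example: 4 images, 16 region features each from a pre-trained backbone.
print(LocalGlobalFusion()(torch.randn(4, 16, 512)).shape)  # (4, 7)
```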

Emotion Detection Model based on Sequential Neural Networks in Smart Exhibition Environment (스마트 전시환경에서 순차적 인공신경망에 기반한 감정인식 모델)

  • Jung, Min Kyu; Choi, Il Young; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.109-126, 2017
  • Many studies on emotion detection are in progress across various kinds of intelligent services. In the exhibition field in particular, studies on recognizing emotion at a given moment have been conducted to provide personalized experiences to the audience, even though facial expressions change as time passes. The aim of this paper is therefore to build a model that predicts the audience's emotion from changes in facial expressions while watching an exhibit. The proposed model is based on both a sequential neural network and the Valence-Arousal model. To validate its usefulness, we performed an experiment comparing the proposed model with a standard neural-network-based model. The results confirmed that the proposed model, which considers the time sequence, had better prediction accuracy.
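
The abstract does not detail the sequential network, so the sketch below is an assumption-laden stand-in: an LSTM over per-frame facial-expression features regressing the two Valence-Arousal dimensions:

```python
import torch
import torch.nn as nn

class SequentialVA(nn.Module):
    """LSTM over a sequence of facial-expression feature vectors, regressing
    (valence, arousal); feature size, hidden size, and the tanh output range
    are assumptions, not the paper's configuration."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)       # outputs: (valence, arousal)

    def forward(self, frames):                 # frames: (B, T, feat_dim)
        out, _ = self.lstm(frames)
        return torch.tanh(self.head(out[:, -1]))  # last step, range [-1, 1]

print(SequentialVA()(torch.randn(2, 30, 64)))  # 2 clips of 30 frames each
```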

The Impact of Program Improvement Using Forest Healing Resources on the Therapeutic Effect: Focused on Improving Index of Greenness for Adolescents

  • Hwang, Joo-Ho; Lee, Hyo-Jung; Park, Jin-Hwa; Kim, Dong-Min; Lee, Kyoung-Min
    • Journal of People, Plants, and Environment, v.22 no.6, pp.691-698, 2019
  • This study examines the effect of improving a forest therapy program for adolescents using forest healing resources, focusing on improving the index of greenness. The participants were 30 students in the control group, who took part in the 2018 program, and 51 students in the experimental group, who took part in the improved 2019 program. The questionnaire, developed by the Korea Forest Welfare Institute, comprised items on general matters, index of greenness, restorative environment, positive emotion, negative emotion, facial expression, and psychological assessment; 30 valid copies were returned in the control group and 49 in the experimental group. In paired-sample t-tests for each group, the control group showed a significant increase in all categories except restorative environment, and the experimental group improved significantly in all categories (p < .01). An independent-sample t-test (one-tailed) was performed to test the effect of the program with the improved index of greenness. The index of greenness increased by 0.73 points (t = 2.555, p < .01) and restorative environment by 1.01 points (t = 2.567, p < .01), both statistically significant. Negative emotion increased by 0.04 points (t = 0.183, p > .05), which was not significant. On the other hand, positive emotion decreased by 0.42 points (t = -1.918, p < .05), facial expression by 0.57 points (t = -1.775, p < .05), and psychological assessment by 0.29 points (t = -0.981, p > .05), with the decreases in positive emotion and facial expression being significant. However, all the decreased items still showed significant improvements between the pretest and posttest scores within the experimental group.
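
The comparisons above are standard paired-sample and independent one-tailed t-tests; a minimal scipy sketch with invented data (the study's raw scores are not published in the abstract) shows both tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post index-of-greenness scores for 49 experimental students.
pre = rng.normal(5.0, 1.0, size=49)
post = pre + rng.normal(0.7, 1.0, size=49)

# Paired-sample t-test within a group (pretest vs. posttest).
t_paired, p_paired = stats.ttest_rel(post, pre)

# Independent one-tailed t-test: experimental gains vs. control gains.
gain_ctrl = rng.normal(0.3, 1.0, size=30)
t_ind, p_two = stats.ttest_ind(post - pre, gain_ctrl, equal_var=False)
p_one = p_two / 2 if t_ind > 0 else 1 - p_two / 2  # two-tailed -> one-tailed
print(f"paired t={t_paired:.3f} (p={p_paired:.4f}); one-tailed p={p_one:.4f}")
```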

Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun; Lee, Hyeonju
    • Science of Emotion and Sensibility, v.24 no.2, pp.75-80, 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were obtained primarily in social settings. In this study, we extend the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examine whether an applicant's happiness signal captured by an AI system effectively predicts his or her objective performance. Data from 3,609 job applicants show that verbally expressed happiness (the frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency is more pronounced among women than men. However, facially expressed happiness (the frequency of smiling) recorded by the AI did not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide a more valid marker of applicants' hiring chances.
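
The verbal-happiness measure above is the frequency of positive words in an interview transcript; a minimal sketch, with an invented lexicon and tokenizer (the study's actual word list and preprocessing are not given):

```python
import re

# Invented toy lexicon; the study's positive-word dictionary is not published.
POSITIVE_WORDS = {"happy", "glad", "enjoy", "love", "great", "excited"}

def positive_word_frequency(transcript: str) -> float:
    """Share of tokens that are positive words, in [0.0, 1.0]."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    return sum(t in POSITIVE_WORDS for t in tokens) / len(tokens)

print(positive_word_frequency("I love this work and I am excited to learn."))
```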