• Title/Summary/Keyword: Learning Emotion


Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Keun;Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.7 / pp.1557-1564 / 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses the individual average difference of the emotional signal, because the pattern of an emotional signal depends on the individual. In order to recognize a user's emotion accurately, the proposed model utilizes the difference between the average of the input emotional signals and the average of each emotional state's signals (such as positive and negative emotional signals), rather than the given input signal alone. So that the emotion recognition model can be constructed easily, without professional knowledge of emotion recognition, it employs a maximum entropy model, one of the best-performing and most widely used machine learning techniques. Considering that it is difficult to obtain enough numerical emotional-signal training data for machine learning, the proposed model substitutes two simple symbols, + (positive number) and - (negative number), for every average difference value, and calculates the average of the emotional signals per second rather than over the total emotion response time (10 seconds).
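
As a rough illustration of the feature scheme above, the sketch below encodes each per-second average difference as a +/- symbol and feeds the symbols to a multinomial logistic regression, which is one standard realization of a maximum entropy classifier. The reference averages, the toy signals, and all names are hypothetical, not the paper's data.

```python
# A minimal sketch of +/- average-difference features for a maxent model.
# Everything here (signals, state means, dimensions) is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression  # multinomial logistic
                                                     # regression == maxent

def sign_features(signal, state_means):
    """Encode each per-second average difference as a +1/-1 symbol.

    signal:      shape (10,), one average per second of the response window
    state_means: dict mapping state name -> per-second mean reference signal
    """
    feats = []
    for means in state_means.values():
        feats.extend(np.sign(signal - means))  # '+'/'-' per second, per state
    return np.array(feats)

# Hypothetical per-second reference averages for two emotional states.
state_means = {
    "positive": np.linspace(0.2, 0.4, 10),
    "negative": np.linspace(-0.3, -0.1, 10),
}

rng = np.random.default_rng(0)
X, y = [], []
for label, (lo, hi) in enumerate([(0.2, 0.4), (-0.3, -0.1)]):
    for _ in range(50):
        sig = rng.uniform(lo, hi, size=10) + rng.normal(0, 0.05, size=10)
        X.append(sign_features(sig, state_means))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), y)
print(clf.predict([sign_features(rng.uniform(0.2, 0.4, 10), state_means)]))
```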

Design Development of the Child-Oriented Furniture for Playing & Learning (놀이와 학습을 위한 아동용가구의 디자인방향 모색)

  • Lee, Mi-Hye;Yang, Seung-Hee
    • Journal of the Korea Furniture Society / v.19 no.5 / pp.341-349 / 2008
  • This study analyzes the importance of design grounded in children's emotions, taking as its examples child-oriented furniture that contributes to the healthy growth and emotional development of children. It reflects an attempt to view the main point of design for child-oriented furniture, and its possibilities, from a new standpoint. To understand how far furniture influences children, the study takes as its object furniture for preschool children that combines playing and learning functions. The scope of the study is child-oriented furniture with playing and learning functions that has been presented since 2005, for display as well as for commercial use. The attempt to find objective factors that work positively for the emotional and behavioral development of children, by seeking a new design direction for child-oriented playing-and-learning furniture, is meant to emphasize the importance of the emotional function, and not only the primary function, of furniture when designing for children. The combination of specialized materials for child education and study therefore has to be handled with greater importance. Child-oriented furniture for playing and learning that stimulates the healthy growth of children, not only physically but also emotionally, should be constantly and more deeply specialized in both its child-education and design aspects.


Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.13 no.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a facial image, using the motion and position of many facial features. A tracking algorithm is applied to recognize a moving user from the mobile robot, and the skin color of the hands and the background outside the facial region are eliminated by a face-region detection algorithm applied to the captured user image. After normalization operations, which enlarge or reduce the image according to the distance of the detected face region and rotate it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, a form of artificial neural network (ANN), is used as the pattern recognition method, with the backpropagation (BP) algorithm for learning. The user's emotion, as recognized by the robot, is expressed on a graphic LCD: two coordinates are changed according to the emotion output by the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, the complex emotions of a human are expressed by the avatar on the LCD.
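
For concreteness, the sketch below trains a small multilayer perceptron with backpropagation, the ANN/BP combination named above, on a hypothetical six-dimensional facial-feature vector; the toy data, layer sizes, and learning rate are illustrative assumptions, not the paper's configuration.

```python
# A bare-bones MLP trained with backpropagation on synthetic
# facial-feature vectors; all data and dimensions are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy dataset: 6 facial-feature measurements -> 3 emotion classes.
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int) + (X[:, 1] > 1).astype(int)  # 0..2
Y = np.eye(3)[y]                          # one-hot targets

W1 = rng.normal(0, 0.5, (6, 16)); b1 = np.zeros(16)   # input -> hidden
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)    # hidden -> output

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = softmax(h @ W2 + b2)              # class probabilities
    # Backpropagate the cross-entropy loss through both layers.
    d2 = (p - Y) / len(X)
    d1 = (d2 @ W2.T) * (1 - h**2)         # tanh derivative
    W2 -= 0.5 * (h.T @ d2); b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * (X.T @ d1); b1 -= 0.5 * d1.sum(0)

print("train accuracy:", (p.argmax(1) == y).mean())
```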

Emotion Coding of Sijo Crying Cuckoo at the Empty Mountain (시조 「공산에 우는 접동」의 감정 코딩)

  • Park, Inkwa
    • The Journal of the Convergence on Culture Technology / v.5 no.1 / pp.13-20 / 2019
  • This study aims to identify codes that can encode a Sijo's emotional codes into AI and use them in literature therapy. We carried out emotional coding of the Sijo "Crying Cuckoo at the Empty Mountain". As a result, the Emotion Codon was able to indicate the state of sadness catharsis. Implanting the Sijo's emotional codes into the Emotion Codon is like implanting human emotions into AI. If the basic emotion codes are implanted in the Emotion Codon and AI self-learning is induced, we think AI can combine the various emotions that occur in the human body. AI could then stand in for human emotions, which could be useful in treating them. It is believed that continuing this study will induce human emotions to heal the mind and spirit.

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein;Kang, Jeonghun;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.264-272 / 2022
  • Human-robot cooperative tasks are increasingly required in our daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies suggest that robots learn a task by receiving feedback from an experienced human trainer during the training process. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that learns a table-balancing task interactively, using deep reinforcement learning with feedback from human facial expressions. In the proposed system, the robot learns the cooperative table-balancing task using a Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression feedback. In the experiments, the proposed system achieved a high optimal-policy convergence rate of up to 83.3% in training and a task success rate of up to 91.6% in testing, showing improved performance compared to the model without facial expression feedback.
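
To make the feedback mechanism concrete, the toy loop below mixes a facial-feedback scalar into the reward of a DQN-style temporal-difference update. The stand-in environment, the random feedback function, the network sizes, and the 0.5 feedback weight are all assumptions for illustration, not the paper's system.

```python
# A minimal sketch of reward shaping with facial-expression feedback
# inside a DQN update; the environment and feedback are placeholders.
import random
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def facial_feedback():
    """Stand-in for a classifier mapping the trainer's expression to a
    scalar: +1 for a positive expression, -1 for a negative one."""
    return random.choice([1.0, -1.0])

for step in range(200):
    s = torch.randn(4)                         # toy state observation
    a = q_net(s).argmax().item() if random.random() > 0.1 else random.randrange(2)
    s_next, env_r, done = torch.randn(4), random.random(), False  # toy env step
    r = env_r + 0.5 * facial_feedback()        # shaped reward
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max() * (1 - done)
    loss = (q_net(s)[a] - target) ** 2         # squared TD error
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 50 == 0:                         # periodic target sync
        target_net.load_state_dict(q_net.state_dict())
```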

Analyzing the Acoustic Elements and Emotion Recognition from Speech Signal Based on DRNN (음향적 요소분석과 DRNN을 이용한 음성신호의 감성 인식)

  • Sim, Kwee-Bo;Park, Chang-Hyun;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.1 / pp.45-50 / 2003
  • Recently, robot technology has developed remarkably, and emotion recognition is necessary to make robots more intimate companions. This paper presents a simulator, and simulation results, that recognize and classify emotions by learning pitch patterns. Because pitch alone is not sufficient for recognizing emotion, acoustic elements were added; to that end, we analyze the relation between emotion and the acoustic elements. The simulator is composed of feature extraction and a DRNN (Dynamic Recurrent Neural Network), which serves as the learning algorithm for the pitch patterns.
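
As a sketch of the idea, the snippet below trains a small recurrent network to classify synthetic pitch contours into emotion classes; the contours, labels, and architecture are assumptions standing in for the paper's DRNN, not its actual model.

```python
# A minimal recurrent classifier over pitch contours; the data is
# synthetic and the architecture is a stand-in, not the paper's DRNN.
import torch
import torch.nn as nn

class PitchEmotionRNN(nn.Module):
    def __init__(self, n_emotions=4):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, n_emotions)

    def forward(self, x):                  # x: (batch, time, 1) pitch values
        _, h = self.rnn(x)
        return self.out(h.squeeze(0))      # classify from final hidden state

model = PitchEmotionRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Synthetic contours: each "emotion" gets a different rising/falling slope.
t = torch.linspace(0, 1, 50)
X = torch.stack([k * t + 0.05 * torch.randn(50)
                 for k in (-2, -1, 1, 2) for _ in range(25)]).unsqueeze(-1)
y = torch.arange(4).repeat_interleave(25)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward(); opt.step()
print("train accuracy:", (model(X).argmax(1) == y).float().mean().item())
```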

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / v.18 no.3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is impacted by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of contexts, by treating them as modalities from which to infer the predicted emotion along with facial expressions. However, contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, plain additive fusion in multimodal training is impractical, because the modalities do not contribute equally to the final prediction. The purpose of this paper was to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
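
A minimal sketch of attention-weighted fusion, as opposed to plain additive fusion: each modality's feature vector receives a learned scalar score, and the fused representation is their softmax-weighted sum. The modality names, feature dimension, and 26-way output (the EMOTIC category count) are illustrative assumptions, not the paper's architecture.

```python
# Attention-based fusion of per-modality features; dimensions and
# modality branches (face, body, scene) are hypothetical placeholders.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=128, n_classes=26):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # scalar score per modality
        self.head = nn.Linear(dim, n_classes)

    def forward(self, feats):                 # feats: (B, M, dim)
        w = torch.softmax(self.score(feats), dim=1)  # attention over modalities
        fused = (w * feats).sum(dim=1)        # weighted sum, not plain addition
        return self.head(fused), w.squeeze(-1)

face = torch.randn(8, 128)    # e.g., a face-branch embedding
body = torch.randn(8, 128)    # e.g., a body-pose branch embedding
scene = torch.randn(8, 128)   # e.g., an emotional-scene-gist embedding

model = AttentionFusion()
logits, weights = model(torch.stack([face, body, scene], dim=1))
print(logits.shape, weights[0])  # per-sample modality weights sum to 1
```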

Affording Emotional Regulation of Distant Collaborative Argumentation-Based Learning at University

  • POLO, Claire;SIMONIAN, Stephane;CHAKER, Rawad
    • Educational Technology International / v.23 no.1 / pp.1-39 / 2022
  • We study emotion regulation in a distant CABLe (Collaborative Argumentation-Based Learning) setting at university. We analyze how students achieve the group task of synthesizing the literature on a topic through scientific argumentation on the institutional Moodle forum. Distinguishing anticipatory from reactive emotional regulation shows how essential it is to establish and maintain a constructive working climate in order to make the best of disagreement on both the social and cognitive planes. We operationalize the analysis of anticipatory emotional regulation through an analytical grid applied to the data of two groups of students facing similar disagreement. Thanks to sharp anticipatory regulation, group 1 solved the conflict on both the social and the cognitive plane, while group 2, stuck in a cyclically resurfacing dispute, had to call for external regulation by the teacher. While the institutional digital environment did afford anticipatory emotional regulation, reactive emotional regulation occurred instead through complementary informal and synchronous communication tools. Based on these qualitative case studies, we draw recommendations for fostering distant CABLe at university.

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these attempts, Speech Emotion Recognition (SER) is a method of recognizing the speaker's emotions through speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning the model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) model with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, the distribution of emotion recognition accuracies across the neural network models indicates that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
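
A minimal MFCC-to-2D-CNN pipeline of the kind described above, assuming librosa for feature extraction; the synthetic waveform, kernel sizes, and layer widths are placeholders, not the paper's tuned RAVDESS configuration.

```python
# MFCC features fed to a small 2D-CNN; the waveform is random noise
# standing in for speech, and the architecture is illustrative only.
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 22050
wave = np.random.randn(sr * 3).astype(np.float32)      # 3 s stand-in signal
mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=40)  # (40, frames)
x = torch.from_numpy(mfcc).unsqueeze(0).unsqueeze(0)   # (1, 1, 40, frames)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                   # robust to variable length
    nn.Flatten(),
    nn.Linear(32, 5),     # 5 emotions: anger, happiness, calm, fear, sadness
)
print(cnn(x).shape)       # (1, 5) class logits
```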

Automatic Recognition in the Level of Arousal using SOM (SOM 이용한 각성수준의 자동인식)

  • Jeong, Chan-Soon;Ham, Jun-Seok;Ko, Il-Ju
    • Science of Emotion and Sensibility / v.14 no.2 / pp.197-206 / 2011
  • The purpose of this study is to automatically classify a subject's level of arousal into high and low arousal using SOM (self-organizing map) neural-network learning. The automatic recognition consists of three stages. The first is ECG measurement and analysis: the ECG of a subject playing a shooting game is measured, and features are extracted for SOM learning. The second is SOM learning: the network learns the extracted feature vectors. The final stage is arousal recognition: once SOM learning is complete, the subject's level of arousal is recognized when new vectors are input. The study expresses the recognition results, and the level of arousal itself, numerically and graphically as the SOM learns and as new vectors arrive. Comparing the emotion-evaluation results of previous research with the SOM's automatic recognition results showed an average agreement of 86%. With the SOM, the study could automatically recognize the differing levels of arousal of each subject.
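
A toy self-organizing map over two hypothetical ECG-derived features (say, mean heart rate and its variability) illustrating the best-matching-unit training behind the high/low-arousal mapping described above; all values are synthetic.

```python
# A tiny SOM trained on synthetic low- and high-arousal feature clusters.
import numpy as np

rng = np.random.default_rng(2)
grid = rng.random((5, 5, 2))                  # 5x5 map of 2-D weight vectors

# Synthetic training vectors: a low-arousal and a high-arousal cluster.
low = rng.normal([0.2, 0.2], 0.05, (100, 2))
high = rng.normal([0.8, 0.8], 0.05, (100, 2))
data = np.vstack([low, high])

for t, x in enumerate(rng.permutation(data)):
    lr = 0.5 * (1 - t / len(data))            # decaying learning rate
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
    for i in range(5):
        for j in range(5):
            # Pull neighbors toward x, weighted by grid distance to the BMU.
            influence = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / 2.0)
            grid[i, j] += lr * influence * (x - grid[i, j])

# After training, a new vector is labeled by the region of its BMU.
probe = np.array([0.75, 0.85])
d = np.linalg.norm(grid - probe, axis=2)
print("BMU:", np.unravel_index(d.argmin(), d.shape))
```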
