• Title/Summary/Keyword: Emotion Information

Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy (FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식)

  • Lee, Woo-Seok;Roh, Yong-Wan;Hong, Hwang-Seok
    • Proceedings of the KIEE Conference / 2008.04a / pp.99-100 / 2008
  • This paper proposes a Gaussian Mixture Model (GMM)-based speech emotion recognition method using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. In addition, we use a speech database covering four emotions: anger, sadness, happiness, and neutrality. We perform speech emotion recognition experiments for each pre-defined emotion and gender. The experimental results show that the proposed emotion recognition using FFT spectral entropy and MFB spectral entropy performs better than existing GMM-based emotion recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. We attained a maximum recognition rate of 75.1% when we used MFB spectral entropy and delta MFB spectral entropy. (A sketch of the spectral-entropy feature follows this entry.)

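As a rough illustration of the spectral-entropy feature named in this abstract (a minimal sketch, not the authors' implementation; the frame shape, windowing, and the GMM step are assumptions):

```python
import numpy as np

def fft_spectral_entropy(frame, eps=1e-12):
    """Spectral entropy of one speech frame: normalize the FFT power
    spectrum into a probability mass function, then take its Shannon
    entropy. MFB spectral entropy applies the same entropy to Mel
    filter-bank energies instead of raw FFT bins."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    pmf = power / (power.sum() + eps)            # power spectrum as a PMF
    return -np.sum(pmf * np.log2(pmf + eps))     # Shannon entropy in bits

# Delta features are frame-to-frame differences of the static feature;
# per-emotion GMMs (e.g., sklearn.mixture.GaussianMixture) would then
# be fit on vectors of these features, one model per emotion class.
frames = np.random.randn(100, 400)               # stand-in for windowed speech
entropies = np.array([fft_spectral_entropy(f) for f in frames])
deltas = np.diff(entropies, prepend=entropies[0])
print(entropies[:3], deltas[:3])
```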

Analysis of Indirect Uses of Interrogative Sentences Carrying Anger

  • Min, Hye-Jin;Park, Jong-C.
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.311-320 / 2007
  • Interrogative sentences are generally used to perform the speech acts of directly asking a question or making a request, but they are also used to convey such speech acts indirectly. In utterances, such indirect uses of interrogative sentences usually carry the speaker's emotion with a negative attitude, close to an expression of anger. Identifying such negative emotion is known to be a difficult problem that requires relevant information in syntax, semantics, discourse, pragmatics, and speech signals. In this paper, we argue that interrogatives used for indirect speech acts can serve as a dominant marker for identifying emotional attitudes, such as anger, as compared to other emotion-related markers, such as discourse markers, adverbial words, and syntactic markers. To support this argument, we analyze dialogues collected from Korean soap operas and examine the individual and cooperative influences of the emotion-related markers on emotional realization. The user study shows that interrogatives can be utilized as a promising device for emotion identification. (A toy marker-scoring illustration follows this entry.)

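The study is analytic rather than algorithmic, but its core claim, that the interrogative form dominates weaker lexical cues, can be caricatured in a toy scorer; every marker list and weight below is invented for illustration:

```python
# Toy scorer: interrogative form as the dominant marker for anger,
# with discourse/adverbial markers as weaker supporting cues.
DOMINANT_WEIGHT = 2.0   # invented weight for the interrogative marker
SUPPORT_WEIGHT = 0.5    # invented weight for other emotion-related markers
SUPPORT_MARKERS = {"again", "seriously", "ever"}  # invented cue words

def anger_score(utterance: str) -> float:
    tokens = utterance.lower().rstrip("?!.").split()
    score = DOMINANT_WEIGHT if utterance.strip().endswith("?") else 0.0
    score += SUPPORT_WEIGHT * sum(t in SUPPORT_MARKERS for t in tokens)
    return score

print(anger_score("Why would you ever do that again?"))  # 3.0
```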

A Design and Implementation of Music & Image Retrieval Recommendation System based on Emotion (감성기반 음악.이미지 검색 추천 시스템 설계 및 구현)

  • Kim, Tae-Yeun;Song, Byoung-Ho;Bae, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.73-79 / 2010
  • Affective computing enables a system to process human emotion by learning and adapting to it, and thus supports more efficient interaction between humans and computers. Music and images, which address hearing and sight, are consumed in a short time yet leave lasting impressions, so understanding and interpreting the emotions around them matters for successful marketing. In this paper, we design a retrieval system that matches music and images to a user's emotion keyword (irritability, gloom, calmness, or joy). The proposed system defines four emotional states and uses music, images, and an emotion ontology to retrieve normalized music and image items. Feature information is extracted from images and similarity is measured to obtain the desired results. In addition, image emotion recognition information is classified and mapped onto a single space through paired correspondence analysis and factor analysis. In experiments, the proposed system showed an 82.4% matching rate for the four emotional states. (A keyword-matching sketch follows this entry.)
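
As a loose sketch of the emotion-keyword retrieval described above; the item names, tags, and scores are invented stand-ins for the paper's emotion ontology and normalized media database:

```python
# Toy emotion-keyword retrieval over tagged music/image items.
ITEMS = [
    {"title": "rainy_street.jpg", "emotions": {"gloom": 0.9, "calmness": 0.4}},
    {"title": "spring_waltz.mp3", "emotions": {"joy": 0.8, "calmness": 0.6}},
    {"title": "slow_tide.mp3",    "emotions": {"calmness": 0.9}},
]

def retrieve(keyword: str, top_k: int = 2):
    """Rank items by their score for the requested emotion keyword."""
    scored = [(item["emotions"].get(keyword, 0.0), item["title"]) for item in ITEMS]
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_k] if score > 0]

print(retrieve("calmness"))  # ['slow_tide.mp3', 'spring_waltz.mp3']
```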

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / v.18 no.3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is impacted by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. Those explorations initiated a trend in computer vision of exploring the critical role of contexts by considering them as modalities to infer the predicted emotion along with facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in the usual multimodal training fashion is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper was to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotions, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features based on their impacts on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset. (An attention-fusion sketch follows this entry.)
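
A minimal sketch of attention-based multimodal fusion in the spirit of this abstract, weighting modality embeddings instead of simply adding them; the modality names, dimensions, and the query vector are invented stand-ins for learned components:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-in embeddings for three context modalities (face, body, scene),
# each already projected to a common dimension d.
d = 16
modalities = {name: rng.normal(size=d) for name in ("face", "body", "scene")}

# Attention: score each modality against a (stand-in) learned query,
# then combine modality features with the resulting weights instead
# of equal-weight additive fusion.
query = rng.normal(size=d)
feats = np.stack(list(modalities.values()))      # (3, d)
weights = softmax(feats @ query / np.sqrt(d))    # scaled dot-product scores
fused = weights @ feats                          # (d,) weighted combination
print(dict(zip(modalities, weights.round(3))), fused.shape)
```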

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in the recognition processing decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, the incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments are conducted to evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The experimental results show that the ICDKCFA method runs faster than the original kernel cross-modal factor analysis with comparable performance. Moreover, the ICDKCFA method achieves better performance than other common information fusion methods, such as fusion methods based on canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis. (An incomplete-Cholesky sketch follows this entry.)
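
The speedup in this kind of approach comes from replacing the full kernel Gram matrix with a low-rank incomplete Cholesky factor before the factor analysis. A generic pivoted incomplete Cholesky sketch (not the authors' code):

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky: returns G with K ~= G @ G.T.

    Stops when the residual diagonal falls below tol, so the rank of
    G is typically far smaller than n, which is what makes kernel
    methods like kernel CFA cheaper on large Gram matrices."""
    n = K.shape[0]
    max_rank = max_rank or n
    G = np.zeros((n, max_rank))
    diag = K.diagonal().astype(float).copy()
    for j in range(max_rank):
        p = int(np.argmax(diag))          # pivot: largest residual diagonal
        if diag[p] < tol:
            return G[:, :j]
        G[:, j] = (K[:, p] - G @ G[p]) / np.sqrt(diag[p])
        diag -= G[:, j] ** 2
    return G

X = np.random.randn(200, 5)
K = X @ X.T                               # rank-5 Gram matrix (linear kernel)
G = incomplete_cholesky(K)
print(G.shape, np.allclose(K, G @ G.T, atol=1e-4))
```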

An Emotion Appraisal System Based on a Cognitive Context (인지적 맥락에 기반한 감정 평가 시스템)

  • Ahn, Hyun-Sik
    • Journal of Institute of Control, Robotics and Systems / v.16 no.1 / pp.33-39 / 2010
  • The interaction of emotion is an important factor in Human-Robot Interaction (HRI). It requires a contextual appraisal of emotion that extracts emotional information from the events that have happened from past to present. In this paper, an emotion appraisal system based on a cognitive context is presented. First, a conventional emotion appraisal model is simplified into a contextual emotion appraisal model that defines the types of emotion appraisal, the target of the emotion induced by analyzing emotional verbs, and the transition of emotions in the context. We employ a language-based cognitive system, with its sentential memory and object descriptors, to define the type and target of the emotion and to evaluate the emotion as it varies over time, using a priori emotional evaluations of targets. In an experiment, we simulate the proposed emotion appraisal system on a scenario and show the feasibility of the system for HRI. (A toy appraisal sketch follows this entry.)
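
A toy sketch of contextual appraisal in the spirit of this abstract: emotional verbs assign a valence to a target, and older feelings decay as events accumulate; the lexicon and decay factor are invented:

```python
# Toy contextual appraisal: emotional verbs assign a valence to a
# target, and the accumulated emotion fades as time passes.
APPRAISAL_LEXICON = {"praise": +0.6, "scold": -0.7, "ignore": -0.3}  # invented
DECAY = 0.8  # invented per-event decay

def appraise(events):
    """Track emotion toward each target across a sequence of events."""
    emotion = {}
    for verb, target in events:
        for t in emotion:
            emotion[t] *= DECAY                      # older feelings fade
        delta = APPRAISAL_LEXICON.get(verb, 0.0)
        emotion[target] = emotion.get(target, 0.0) + delta
    return emotion

story = [("praise", "robot"), ("scold", "robot"), ("praise", "user")]
print(appraise(story))  # robot: 0.6*0.8*0.8 - 0.7*0.8 = -0.176; user: +0.6
```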

Classification and Intensity Assessment of Korean Emotion Expressing Idioms for Human Emotion Recognition

  • Park, Ji-Eun;Sohn, Sun-Ju;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.5 / pp.617-627 / 2012
  • Objective: The aim of the study was to develop a Korean dictionary of the most widely used emotion-expressing idioms. This is anticipated to assist the development of software technology that recognizes and responds to verbally expressed human emotions. Method: Through rigorous and strategic classification processes, the idiomatic expressions included in this dictionary were rated in terms of nine different emotions (i.e., happiness, sadness, fear, anger, surprise, disgust, interest, boredom, and pain) for the meaning and intensity associated with each expression. Result: The Korean dictionary of emotion-expressing idioms included 427 expressions, with approximately two thirds classified under the 'happiness' (n=96), 'sadness' (n=96), and 'anger' (n=90) emotions. Conclusion: The significance of this study primarily rests in the development of a practical language tool that contains Korean idiomatic expressions of emotions, provides information on meaning and strength, and identifies idioms connoting two or more emotions. Application: The findings can be utilized in emotion recognition research, particularly in identifying primary and secondary emotions as well as understanding the intensity associated with various idioms used in emotion expression. In clinical settings, information from this research may also enhance helping professionals' competence in verbally communicating patients' emotional needs. (A data-structure sketch follows this entry.)
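
A sketch of the kind of lookup structure such a dictionary implies for recognition software; the sample idioms are real Korean expressions, but the emotion labels and intensities shown are invented, not taken from the dictionary:

```python
# Invented sample entries in the shape the dictionary describes:
# idiom -> {emotion: intensity}, allowing idioms with two or more emotions.
IDIOM_DICT = {
    "눈앞이 캄캄하다": {"fear": 0.8, "sadness": 0.4},   # "everything goes dark"
    "어깨가 으쓱하다": {"happiness": 0.7},              # "shoulders lift"
    "속이 타다":       {"anger": 0.5, "pain": 0.5},     # "insides burn"
}

def emotions_in(text):
    """Aggregate emotion intensities for idioms found in the text."""
    found = {}
    for idiom, emotions in IDIOM_DICT.items():
        if idiom in text:
            for emo, strength in emotions.items():
                found[emo] = max(found.get(emo, 0.0), strength)
    return found

print(emotions_in("시험 결과를 보니 눈앞이 캄캄하다"))  # {'fear': 0.8, 'sadness': 0.4}
```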

Emotion-Based Control

  • Ko, Sung-Bum;Lim, Gi-Young
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.306-311 / 2000
  • We human beings use the powers of reason and emotion simultaneously, which surely helps us adapt flexibly to a dynamic environment. We assert that this principle can be applied to systems in general: that is, it should be possible to improve adaptability by covering a digital-oriented information processing system with an analog-oriented emotion layer. In this paper, we propose a vertical slicing model with an emotion layer in it, and we show that emotion-based control improves the adaptability of a system, at least under some conditions. (A toy control-loop sketch follows this entry.)

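A toy illustration of wrapping a digital controller with an analog-style emotion layer: a scalar "anxiety" state driven by recent tracking error modulates the controller gain; all dynamics and gains below are invented:

```python
# Toy emotion-based control: an analog "anxiety" level, raised by
# recent tracking error, scales the gain of a digital P-controller.
def run(target=1.0, steps=20):
    x, anxiety = 0.0, 0.0
    for _ in range(steps):
        error = target - x
        anxiety = 0.9 * anxiety + 0.1 * abs(error)   # slow emotional buildup
        gain = 0.3 * (1.0 + anxiety)                 # emotion modulates control
        x += gain * error
    return x, anxiety

print(run())  # x approaches the target; anxiety relaxes as error shrinks
```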

Generation of Character Emotion Based on User Interaction (사용자 상호작용 기반 캐릭터 emotion 생성)

  • 최은영;박혜정;박영택
    • Proceedings of the Korean Information Science Society Conference / 1999.10b / pp.113-115 / 1999
  • Internet services and various kinds of software are increasingly being developed with the user in mind, but such consideration is still weak. Considering the user means raising the effectiveness of software and services by making users feel empathy while using them, and making computer systems, which are becoming part of everyday life, friendlier to people. For this, interaction with the user is essential. This work uses a human-like virtual character and focuses on having the character acquire emotion through interaction with the user. That is, we define the emotion structure a character can have and generate the character's emotion based on user interaction. The system generates and simulates the character's emotion for the various interactions that can occur from the moment the user requests a task from the agent until the service is delivered. Through this exchange of feelings, the user comes to feel familiarity with the character, can empathize with the character's emotion, and gains more trust in the application. (A sketch of event-driven emotion generation follows this entry.)

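A toy sketch of event-driven character emotion generation in the spirit of this abstract; the event types, update deltas, and emotion set are invented:

```python
# Toy character-emotion generation: interaction events during a task
# update a simple emotion structure the character can display.
EVENT_EFFECTS = {                       # invented event -> emotion deltas
    "task_requested":  {"interest": +0.5},
    "task_succeeded":  {"joy": +0.7, "interest": -0.2},
    "task_failed":     {"sadness": +0.6},
    "user_thanks":     {"joy": +0.3},
}

def react(events):
    emotion = {"joy": 0.0, "sadness": 0.0, "interest": 0.0}
    for ev in events:
        for emo, delta in EVENT_EFFECTS.get(ev, {}).items():
            emotion[emo] = min(1.0, max(0.0, emotion[emo] + delta))
    # The character displays its strongest current emotion.
    return max(emotion, key=emotion.get), emotion

print(react(["task_requested", "task_succeeded", "user_thanks"]))
# ('joy', {'joy': 1.0, 'sadness': 0.0, 'interest': 0.3})
```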

An Emotion Classification Based on Fuzzy Inference and Color Psychology

  • Son, Chang-Sik;Chung, Hwan-Mook
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.18-22 / 2004
  • It is difficult to understand a person's emotion, since it is subjective and vague. Therefore, we propose a method that effectively classifies human emotions into two types (that is, single emotion and composite emotion). To verify the validity of the proposed method, we conducted two experiments, one based on general fuzzy inference and one based on the α-cut, and compared the experimental results. In the first experiment, emotions were classified according to fuzzy inference; in the second, according to the α-cut. Our experimental results showed that the classification of emotion based on the α-cut was more definite than that based on fuzzy inference. (A minimal membership/α-cut sketch follows this entry.)
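
A minimal sketch contrasting the two classification modes compared above: graded fuzzy membership versus a crisp α-cut; the membership values and α are invented:

```python
# Fuzzy memberships of one stimulus in several emotion classes
# (invented values), classified two ways.
membership = {"joy": 0.72, "calmness": 0.55, "sadness": 0.10}

# 1) Fuzzy-inference style: keep the graded memberships and take
#    the emotion with the highest degree.
fuzzy_label = max(membership, key=membership.get)

# 2) Alpha-cut style: keep only emotions whose membership reaches
#    alpha; the result is a crisp set, hence the "more definite"
#    classification the paper reports.
alpha = 0.6
alpha_cut = {emo for emo, mu in membership.items() if mu >= alpha}

print(fuzzy_label, alpha_cut)  # joy {'joy'}
```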