Title/Summary/Keyword: Emotion-Recognition


An Emotion Recognition Method using Facial Expression and Speech Signal (얼굴표정과 음성을 이용한 감정인식)

  • 고현주;이대종;전명근
    • Journal of KIISE:Software and Applications / v.31 no.6 / pp.799-807 / 2004
  • In this paper, we deal with an emotion recognition method that uses facial images and speech signals. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Emotion recognition from facial expressions is performed with a multi-resolution analysis based on the discrete wavelet transform, and feature vectors are then extracted by linear discriminant analysis. For the speech signal, the recognition algorithm is run independently on each wavelet subband, and the final result is obtained from a multi-decision-making scheme.
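
A minimal sketch of the facial half of this pipeline, assuming PyWavelets and scikit-learn; the wavelet choice ('db4', two levels) and the use of the approximation subband are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(gray_image, wavelet="db4", level=2):
    """Multi-resolution analysis: keep the coarsest approximation (LL) subband."""
    coeffs = pywt.wavedec2(gray_image, wavelet=wavelet, level=level)
    return coeffs[0].ravel()  # low-frequency subband as the raw feature vector

def fit_lda(face_images, labels):
    """face_images: same-size 2-D grayscale arrays; labels: six emotion classes."""
    feats = np.stack([wavelet_features(img) for img in face_images])
    lda = LinearDiscriminantAnalysis()  # projects to at most n_classes - 1 dims
    return lda.fit(feats, labels)
```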

Difference of Facial Emotion Recognition and Discrimination between Children with Attention-Deficit Hyperactivity Disorder and Autism Spectrum Disorder (주의력결핍과잉행동장애 아동과 자폐스펙트럼장애 아동에서 얼굴 표정 정서 인식과 구별의 차이)

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.27 no.3 / pp.207-215 / 2016
  • Objectives: This study aimed to investigate the differences in facial emotion recognition and discrimination ability between children with attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Fifty-three children aged 7 to 11 years participated in this study. Among them, 43 were diagnosed with ADHD and 10 with ASD. The parents of the participants completed the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Conners' scale. The participants completed the Korean Wechsler Intelligence Scale for Children-fourth edition, the Advanced Test of Attention (ATA), the Penn Emotion Recognition Task, and the Penn Emotion Discrimination Task. Group differences in facial emotion recognition and discrimination ability were analyzed using analysis of covariance, controlling for the visual omission error index of the ATA. Results: The children with ADHD showed better recognition of happy and sad faces and fewer false-positive neutral responses than those with ASD. The children with ADHD also recognized emotions better than those with ASD on female faces and in extreme facial expressions, but not on male faces or in mild facial expressions. We found no differences in facial emotion discrimination between the children with ADHD and ASD. Conclusion: Our results suggest that children with ADHD recognize facial emotions better than children with ASD, but they still have deficits. Interventions that consider their different emotion recognition and discrimination abilities are needed.
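
The group comparison above rests on an analysis of covariance. A minimal sketch, assuming pandas and statsmodels; the column names and numbers are hypothetical stand-ins, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical toy data: per-child recognition accuracy, diagnostic group,
# and the ATA visual omission error used as the covariate.
df = pd.DataFrame({
    "group": ["ADHD"] * 4 + ["ASD"] * 4,
    "accuracy": [0.82, 0.79, 0.85, 0.80, 0.66, 0.70, 0.61, 0.68],
    "omission": [45, 52, 40, 48, 60, 55, 63, 58],
})

model = smf.ols("accuracy ~ C(group) + omission", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # group effect adjusted for the covariate
```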

Alexithymia and the Recognition of Facial Emotion in Schizophrenic Patients (정신분열병 환자에서의 감정표현불능증과 얼굴정서인식결핍)

  • Noh, Jin-Chan;Park, Sung-Hyouk;Kim, Kyung-Hee;Kim, So-Yul;Shin, Sung-Woong;Lee, Koun-Seok
    • Korean Journal of Biological Psychiatry / v.18 no.4 / pp.239-244 / 2011
  • Objectives: Schizophrenic patients have been shown to be impaired both in emotional self-awareness and in recognizing others' facial emotions. Alexithymia refers to deficits in emotional self-awareness. The relationship between alexithymia and recognition of others' facial emotions needs to be explored to better understand the characteristics of emotional deficits in schizophrenic patients. Methods: Thirty control subjects and 31 schizophrenic patients completed the Toronto Alexithymia Scale-20-Korean version (TAS-20K) and a facial emotion recognition task. The stimuli in the facial emotion recognition task consisted of six emotions (happiness, sadness, anger, fear, disgust, and neutral). Recognition accuracy was calculated within each emotion category, and correlations between TAS-20K scores and recognition accuracy were analyzed. Results: The schizophrenic patients showed higher TAS-20K scores and lower recognition accuracy than the control subjects. Unlike the control subjects, the schizophrenic patients did not demonstrate any significant correlations between TAS-20K scores and recognition accuracy. Conclusions: The data suggest that, although schizophrenia may impair both emotional self-awareness and recognition of others' facial emotions, the degree of deficit can differ between the two. This indicates that the emotional deficits in schizophrenia may take a more complex form.
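
The correlation analysis above can be reproduced in outline with SciPy. A minimal sketch; the score and accuracy arrays are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

tas20k = np.array([52, 61, 48, 70, 55, 66])                # hypothetical scores
accuracy = np.array([0.71, 0.62, 0.78, 0.55, 0.69, 0.60])  # e.g., 'fear' accuracy

r, p = pearsonr(tas20k, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")  # repeated per emotion category and per group
```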

Noise Robust Emotion Recognition Feature : Frequency Range of Meaningful Signal (음성의 특정 주파수 범위를 이용한 잡음환경에서의 감정인식)

  • Kim Eun-Ho;Hyun Kyung-Hak;Kwak Yoon-Keun
    • Journal of the Korean Society for Precision Engineering / v.23 no.5 s.182 / pp.68-76 / 2006
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction, and this paper describes its realization. For emotion recognition from voice, we propose a new feature: the frequency range of the meaningful signal. With this feature, we reached an average recognition rate of 76% in speaker-dependent tests, and the experimental results confirm its usefulness. We also define a noise environment and conduct tests in it; in contrast to other features, the proposed feature remains robust under noise.
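
A band-limited energy feature along these lines might look as follows, assuming NumPy/SciPy; the 250-650 Hz band is a placeholder, since the paper's exact frequency range is not reproduced here.

```python
import numpy as np
from scipy.signal import welch

def band_energy(signal, sr, f_lo=250.0, f_hi=650.0):
    """Spectral energy restricted to the [f_lo, f_hi] Hz band."""
    freqs, psd = welch(signal, fs=sr, nperseg=1024)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.trapz(psd[band], freqs[band]))  # integrate PSD over the band
```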

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, an examination of the distribution of recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
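
A minimal MFCC-plus-2D-CNN sketch, assuming librosa and PyTorch; the layer sizes and the fixed 40x174 MFCC patch are illustrative choices, not the paper's exact architecture.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def mfcc_patch(path, sr=22050, n_mfcc=40, frames=174):
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)         # pad/crop time axis
    return torch.from_numpy(m[None, None].astype(np.float32))   # (1, 1, 40, 174)

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 10 * 43, n_classes),  # 40x174 halves twice to 10x43
        )

    def forward(self, x):
        return self.net(x)  # raw logits for the five emotion classes
```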

Pattern Recognition Methods for Emotion Recognition with speech signal

  • Park Chang-Hyun;Sim Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.2 / pp.150-154 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features: an artificial neural network, Bayesian learning, principal component analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section.
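
A sketch of such a comparison using scikit-learn; GaussianNB stands in for Bayesian learning, PCA is paired with a logistic-regression classifier, and the LBG codebook step is omitted for brevity, so this only approximates the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def compare(X, y):
    """X: (n_samples, n_features) speech features; y: emotion labels."""
    models = {
        "ann": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
        "bayes": GaussianNB(),
        "pca+logreg": make_pipeline(PCA(n_components=8),
                                    LogisticRegression(max_iter=1000)),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean()
            for name, m in models.items()}
```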

Difficulty in Facial Emotion Recognition in Children with ADHD (주의력결핍 과잉행동장애의 이환 여부에 따른 얼굴표정 정서 인식의 차이)

  • An, Na Young;Lee, Ju Young;Cho, Sun Mi;Chung, Young Ki;Shin, Yun Mi
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.24 no.2 / pp.83-89 / 2013
  • Objectives: It is known that children with attention-deficit hyperactivity disorder (ADHD) experience significant difficulty in recognizing facial emotion, which involves processing of emotional facial expressions rather than speech, compared to children without ADHD. The objective of this study was to investigate the differences in facial emotion recognition between children with ADHD and normal control children. Methods: The children for our study were recruited from the Suwon Project, a cohort comprising a non-random convenience sample of 117 nine-year-old ethnic Koreans. The parents of the participants completed study questionnaires such as the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. The Facial Expression Recognition Test of the Emotion Recognition Test was used to evaluate facial emotion recognition, and the ADHD Rating Scale was used to assess ADHD. Results: Children with ADHD (N=10) were found to have impaired recognition in Emotional Differentiation and Contextual Understanding compared with normal controls (N=24). We found no statistically significant difference between the children with ADHD and normal children in the recognition of positive facial emotions (happy and surprise) or negative facial emotions (anger, sadness, disgust, and fear). Conclusion: The results of our study suggest that facial emotion recognition may be closely associated with ADHD, after controlling for covariates, although more research is needed.

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the development of human interface technology, interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts the characteristics of the speech signal for emotion recognition using perceptual linear prediction (PLP) analysis. The PLP analysis technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its efficiency in speaker recognition tasks. This paper therefore proposes an algorithm that can easily evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, whereas the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough to be used in real time.
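
A heavily simplified PLP-style front end, assuming librosa and SciPy; a true PLP pipeline uses a Bark filterbank and equal-loudness weighting, approximated here by a mel filterbank, so treat this as an outline only.

```python
import librosa
import numpy as np
from scipy.linalg import solve_toeplitz

def plp_like(y, sr, order=12):
    S = np.abs(librosa.stft(y, n_fft=512)) ** 2                  # power spectrogram
    mel = librosa.feature.melspectrogram(S=S, sr=sr, n_mels=21)  # Bark stand-in
    compressed = np.cbrt(mel)                                    # cube-root loudness
    feats = []
    for frame in compressed.T:
        spec = np.concatenate([frame, frame[-2:0:-1]])           # symmetrize spectrum
        r = np.fft.ifft(spec).real[: order + 1]                  # autocorrelation
        r[0] += 1e-8                                             # regularize
        a = solve_toeplitz(r[:-1], -r[1:])                       # LPC coefficients
        feats.append(a)
    return np.array(feats)                                       # (n_frames, order)
```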

Emotion Recognition based on Multiple Modalities

  • Kim, Dong-Ju;Lee, Hyeon-Gu;Hong, Kwang-Seok
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.228-236 / 2011
  • Emotion recognition plays an important role in human-computer interaction research, allowing more natural, human-like communication between humans and computers. Most previous work on emotion recognition focused on extracting emotions from face, speech, or EEG information separately. This paper therefore presents a novel approach that combines face, speech, and EEG to recognize human emotion. The individual matching scores obtained from face, speech, and EEG are combined using a weighted-summation operation, and the fused score is used to classify the emotion. In the experimental results, the proposed approach gives an improvement of more than 18.64% over the most successful unimodal approach and also outperforms approaches that integrate only two modalities. These results confirm that the proposed approach achieves a significant performance improvement and is very effective.
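
The weighted-summation fusion described here is straightforward to sketch with NumPy; the weights are illustrative, not the paper's tuned values.

```python
import numpy as np

def fuse(face, speech, eeg, w=(0.5, 0.3, 0.2)):
    """Each input is a per-class matching-score vector; w are modality weights."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)  # min-max scale
    fused = w[0] * norm(face) + w[1] * norm(speech) + w[2] * norm(eeg)
    return int(np.argmax(fused))  # index of the predicted emotion class
```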

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.4 / pp.256-261 / 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain. As technology has developed, the neural basis of the functional areas of emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in communication, and their recognition matters because gestures are a useful communication medium between humans and computers; gesture recognition is typically studied with computer-vision methods. Most existing research on emotion recognition uses either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, selecting driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using either alone. Feature selection for both EEG signals and gestures uses Interactive Feature Selection (IFS), a method based on reinforcement learning.
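
The published IFS algorithm is not reproduced here; as a rough stand-in, the following greedy loop treats cross-validated accuracy gain as the reward when adding features, assuming scikit-learn.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def select_features(X, y, k=5):
    """Greedily add the feature whose accuracy gain (the 'reward') is largest."""
    chosen, best = [], 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    while len(chosen) < k:
        rewards = {
            j: cross_val_score(clf, X[:, chosen + [j]], y, cv=3).mean() - best
            for j in range(X.shape[1]) if j not in chosen
        }
        j_star, gain = max(rewards.items(), key=lambda kv: kv[1])
        if gain <= 0:           # stop when no candidate improves the reward
            break
        chosen.append(j_star)
        best += gain
    return chosen
```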