• Title/Summary/Keyword: discrete emotion

An Emotion Recognition Method using Facial Expression and Speech Signal (얼굴표정과 음성을 이용한 감정인식)

  • 고현주;이대종;전명근
    • Journal of KIISE: Software and Applications / v.31 no.6 / pp.799-807 / 2004
  • In this paper, we deal with an emotion recognition method that uses facial images and speech signals. Six basic human emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Emotion recognition from the facial expression is performed by a multi-resolution analysis based on the discrete wavelet transform, after which the feature vectors are extracted by the linear discriminant analysis method. The speech-signal method, on the other hand, runs the recognition algorithm independently on each wavelet subband and obtains the final recognition from a multi-decision-making scheme.
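The face branch of this pipeline is easy to prototype. Below is a minimal sketch, not the authors' implementation, assuming PyWavelets for the multi-resolution decomposition and scikit-learn's LDA for the discriminant feature extraction; the images, wavelet choice, and decomposition level are all placeholders.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(image, wavelet="db4", level=2):
    """Keep the low-frequency (LL) approximation after `level` decompositions."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()

faces = np.random.rand(60, 64, 64)       # placeholder grayscale face images
labels = np.repeat(np.arange(6), 10)     # the six basic emotions

X = np.stack([wavelet_features(f) for f in faces])
lda = LinearDiscriminantAnalysis(n_components=5)   # at most C-1 = 5 axes
X_lda = lda.fit_transform(X, labels)               # discriminant feature vectors
print(X_lda.shape)                                 # (60, 5)
```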

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference / summer / pp.92-96 / 2004
  • Research on emotion is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more studies on emotion synthesis or emotion recognition have been carried out for different purposes, each approach using its own methods and its own parameters measured on the speech signal. In this paper, we propose using a short-time parameter, MFCC (Mel-Frequency Cepstrum Coefficients), and a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing, phonetic rate, LPC, ...) and their derivatives are also tested and combined with the MFCC coefficients in order to find the best combination. Other models, GMM and HMM (discrete and continuous hidden Markov models), are studied as well, in the hope that continuous distributions and the temporal behaviour of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach even clearly outperforms human evaluation on the same database, where listeners judged sentences without replaying or comparing them [8], and the result is positively comparable with other approaches.
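The MFCC + VQ scheme itself is a few lines of code. The sketch below is an illustration, not the paper's system, assuming librosa for MFCC extraction and a k-means codebook per emotion as the VQ model; file paths, the codebook size, and the label set are placeholders.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

def train_codebooks(files_by_emotion, codebook_size=32):
    """Fit one VQ codebook (k-means centroids) per emotion from pooled frames."""
    books = {}
    for emotion, paths in files_by_emotion.items():
        frames = np.vstack([mfcc_frames(p) for p in paths])
        books[emotion] = KMeans(n_clusters=codebook_size, n_init=4).fit(frames)
    return books

def classify(path, books):
    """Pick the emotion whose codebook quantizes the utterance most cheaply."""
    frames = mfcc_frames(path)
    def distortion(km):
        return km.transform(frames).min(axis=1).mean()  # mean nearest-centroid distance
    return min(books, key=lambda e: distortion(books[e]))
```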

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words (한국어 감정표현단어의 추출과 범주화)

  • Sohn, Sun-Ju;Park, Mi-Sook;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool in understanding human feelings. The focus was on the careful extraction of the most widely used feeling words and on their categorization into groups of emotions according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and various sorting processes, the study condensed the original 64,666 emotion words into a final list of 504 words. In the next step, 80 social work students evaluated each word and classified it into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 showed three emotions. Among the 426 single-emotion words, 'sadness' was predominant, followed by 'anger' and 'happiness'. The 72 two-emotion words were mostly combinations of 'anger' and 'disgust', followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of a highly adaptable list of Korean feeling words that can be meticulously combined with other emotion signals, such as facial expression, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.
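A lexicon of this shape, where one word may carry one, two, or three of the study's eleven categories, maps naturally onto a word-to-set dictionary. The sketch below is purely illustrative; the sample entries are hypothetical, not items from the 504-word list.

```python
CATEGORIES = {"happiness", "sadness", "fear", "anger", "disgust", "surprise",
              "interest", "boredom", "pain", "neutral", "other"}

lexicon = {
    "기쁘다": {"happiness"},            # hypothetical single-emotion entry
    "분하다": {"anger", "disgust"},     # hypothetical two-emotion entry
}

def emotions_of(word):
    """Return the (possibly multiple) emotion categories of a feeling word."""
    cats = lexicon.get(word, set())
    assert cats <= CATEGORIES
    return cats
```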

DISCRIMINANT ANALYSIS OF LOGICAL RELATIONS

  • Osawa, Mitsuru
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.157-162 / 2000
  • Discriminant analysis is a method for relating whether objects have a specific characteristic to their 'continuous' attribute values and, for given objects, estimating whether they have that characteristic from discriminant scores computed from those attribute values. The author developed a new 'computational' method of discriminant analysis that requires no specific hypotheses or assumptions, with which 'feasible' solutions can be found under the conditions required by actual problems. In this paper, the author applies this new method to the discrimination of logical relations. If this trial succeeds, the new method of discriminant analysis can be applied to problems that relate a specific characteristic of objects to their 'discrete' attribute values. The result was successful, and the applicability of discriminant analysis could thus be expanded as a method for constructing models for "estimating impressions".
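The author's own 'computational' method is not given in the abstract, so the sketch below substitutes scikit-learn's standard linear discriminant analysis simply to illustrate the problem shape described: estimating a yes/no characteristic from discrete (here binary) attribute values via discriminant scores. The data and decision rule are toy placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(40, 6)).astype(float)  # six binary attributes
y = (X[:, 0] + X[:, 3] >= 1).astype(int)            # toy 'characteristic'

lda = LinearDiscriminantAnalysis().fit(X, y)
scores = lda.decision_function(X)        # discriminant score per object
print((scores > 0).astype(int)[:10])     # estimated characteristic
print(y[:10])                            # actual characteristic
```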

Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea / v.25 no.2E / pp.74-78 / 2006
  • This paper proposes an effective method for classifying the emotion of music from its acoustic signal. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients, which are the output of a partial MP3 (MPEG-1 Layer 3) decoder. The tempo feature extraction method is based on long-term modulation spectrum analysis. To combine these two feature sets, which have different time resolutions, effectively in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used. The first layer employs the MDCT-driven timbre features; adding the MDCT-driven tempo features in the second layer improves the classification precision dramatically.
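The two-layer combination can be sketched as follows, assuming scikit-learn's AdaBoost and random placeholders in place of the MDCT-driven timbre and modulation-spectrum tempo extraction; it shows only how a second layer folds in the tempo feature set.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
n = 200
timbre = rng.normal(size=(n, 20))   # placeholder MDCT-driven timbre features
tempo = rng.normal(size=(n, 8))     # placeholder modulation-spectrum tempo features
y = rng.integers(0, 3, size=n)      # placeholder emotion classes

layer1 = AdaBoostClassifier(n_estimators=100).fit(timbre, y)   # timbre only
X2 = np.hstack([timbre, tempo])     # second layer adds the tempo feature set
layer2 = AdaBoostClassifier(n_estimators=100).fit(X2, y)
print(layer1.score(timbre, y), layer2.score(X2, y))
```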

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일;정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.21-26 / 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, normal, sadness, and surprise, accomplished with discrete-duration continuous hidden Markov models (DDCHMM). The emotional feature parameters are first extracted from the input speech signals; in this study, we used prosodic parameters such as pitch, energy, and their derivatives, which were then used to train the HMMs for recognition. Speaker-adapted emotion models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. The simulation results showed that vocal emotion recognition rates gradually increased as the number of adaptation samples increased.
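A simplified version of the per-emotion HMM scheme, assuming the hmmlearn package: one Gaussian HMM per emotion is trained on prosodic feature sequences (e.g. pitch, energy, and their deltas) and classification picks the model with the highest log-likelihood. The paper's discrete-duration modelling and MAP speaker adaptation are omitted here.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(seqs_by_emotion, n_states=4):
    """Fit one HMM per emotion on its concatenated feature sequences."""
    models = {}
    for emotion, seqs in seqs_by_emotion.items():
        X = np.vstack(seqs)                  # stacked frames
        lengths = [len(s) for s in seqs]     # per-utterance frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        models[emotion] = m.fit(X, lengths)
    return models

def classify(seq, models):
    """Return the emotion whose model scores the sequence highest."""
    return max(models, key=lambda e: models[e].score(seq))

# Placeholder demo with random 4-dimensional prosodic frames.
rng = np.random.default_rng(0)
demo = {e: [rng.normal(size=(50, 4)) for _ in range(3)]
        for e in ("anger", "happiness", "normal", "sadness", "surprise")}
print(classify(rng.normal(size=(50, 4)), train_models(demo)))
```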

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong;Ahn, Ui-Sook;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.197-203 / 2005
  • In this paper, we propose an emotion recognition method using facial images for the effective design of human interfaces. The facial database consists of the six basic human emotions (happiness, sadness, anger, surprise, fear, and dislike) that are known to be common regardless of nation and culture. Emotion recognition is performed on the facial images after applying the discrete wavelet transform, and the feature vectors are then extracted by PCA and LDA. Experimental results show that happiness, sadness, and anger are recognized with better performance than surprise, fear, and dislike. In particular, Japanese subjects show lower performance for the dislike emotion, and in general the recognition rates for Korean subjects are higher than for Japanese subjects.
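The wavelet-then-PCA-then-LDA cascade reads naturally as a scikit-learn pipeline. A minimal sketch with placeholder data and parameter choices, not the authors' configuration:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def ll_subband(image, wavelet="db4", level=2):
    """Low-frequency subband of a 2-D discrete wavelet decomposition."""
    return pywt.wavedec2(image, wavelet, level=level)[0].ravel()

faces = np.random.rand(120, 64, 64)      # placeholder face images
labels = np.repeat(np.arange(6), 20)     # the six basic emotions

X = np.stack([ll_subband(f) for f in faces])
clf = make_pipeline(PCA(n_components=40),          # dimensionality reduction
                    LinearDiscriminantAnalysis())  # discriminant projection + classifier
clf.fit(X, labels)
print(clf.score(X, labels))
```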

Face Emotion Recognition by Fusion Model based on Static and Dynamic Image (정지영상과 동영상의 융합모델에 의한 얼굴 감정인식)

  • Lee Dae-Jong;Lee Kyong-Ah;Go Hyoun-Joo;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.573-580 / 2005
  • In this paper, we propose an emotion recognition method using static and dynamic facial images for the effective design of human interfaces. The proposed method is built from HMMs (hidden Markov models), PCA (principal component analysis), and the wavelet transform. The facial database consists of the six basic human emotions (happiness, sadness, anger, surprise, fear, and dislike) that are known to be common regardless of nation and culture. Emotion recognition in the static images is performed with the discrete wavelet transform, with feature vectors extracted by PCA. Emotion recognition in the dynamic images uses the wavelet transform and PCA, and the resulting features are modeled by the HMMs. Finally, better performance is obtained by merging the recognition results for the static and dynamic images.
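The merging step itself can be as simple as putting the two recognizers' per-emotion scores on a common scale and taking a weighted combination. The paper does not spell out its fusion rule in this abstract, so the sketch below is one plausible late-fusion scheme; the scores and the weight are placeholders.

```python
import numpy as np

def fuse(static_scores, dynamic_loglik, w=0.5):
    """Weighted combination of normalized per-emotion scores from two models."""
    s = np.exp(static_scores - np.max(static_scores))
    d = np.exp(dynamic_loglik - np.max(dynamic_loglik))
    s, d = s / s.sum(), d / d.sum()      # softmax-style normalization
    return w * s + (1 - w) * d           # fused posterior over the six emotions

static_scores = np.array([2.0, 0.5, 1.2, 0.1, 0.3, 0.9])        # placeholder
dynamic_loglik = np.array([-120., -150., -130., -160., -155., -140.])
print(np.argmax(fuse(static_scores, dynamic_loglik)))           # winning emotion index
```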

Development of Psychophysiological Indices for Discrete Emotions (정서의 심리적.생리적 측정 및 지표개발: 기본정서 구분 모델)

  • 이경화;이임갑;손진훈
    • Science of Emotion and Sensibility / v.2 no.2 / pp.43-52 / 1999
  • Emotion is a subjective experience accompanied by physiological responses. No study has yet been reported that discriminates the basic emotions by differences in EEG and autonomic nervous system responses. This study aimed to 1) select emotional stimuli that distinctly elicit the six basic emotions and, using them, 2) develop a composite psychophysiological index model capable of discriminating the basic emotions. Six pairs of slides that reliably elicit each of the six basic emotions (happiness, sadness, anger, disgust, fear, surprise) were selected from the International Affective Picture System. EEG, ECG, respiration, and electrodermal activity evoked by slide presentation were recorded and analyzed. The main results are as follows. First, relative EEG power, heart rate, respiration rate, and skin conductance response differed significantly between the resting and emotional states. Second, EEG analysis showed significant differences between some emotions in the change of relative power of theta (F4, O1), slow alpha (F3, F4), fast alpha (O2), and fast beta (F4, O2) waves. Third, autonomic analysis showed significant differences between some emotions in heart rate, respiration rate, and skin conductance response. Based on these results, a composite psychophysiological index model for discriminating the basic emotions was constructed. Four basic emotions (fear, disgust, sadness, anger) could be distinguished by EEG and autonomic response patterns, whereas happiness and surprise could not be conclusively distinguished by the psychophysiological indices used in this study. Follow-up research is needed to find indices that can distinguish all six basic emotions.
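One measurement the model rests on, relative EEG band power in the theta, slow/fast alpha, and fast beta ranges, can be computed from a single channel with a Welch periodogram. A minimal sketch assuming SciPy; the band edges and sampling rate are common conventions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "slow_alpha": (8, 10),
         "fast_alpha": (10, 13), "fast_beta": (20, 30)}

def relative_band_power(eeg, fs=256):
    """Band power as a fraction of total 4-30 Hz power (one EEG channel)."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    total = pxx[(f >= 4) & (f < 30)].sum()
    return {name: pxx[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

print(relative_band_power(np.random.randn(256 * 60)))  # 60 s placeholder EEG
```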

Patterns of Autonomic Responses to Affective Visual Stimulation: Skin Conductance Response, Heart Rate and Respiration Rate Vary Across Discrete Elicited-Emotions (정서시각자극에 의해 유발된 자율신경계 반응패턴: 유발정서에 따른 피부전도반응, 심박률 및 호흡률 변화)

  • Estate M. Sokhadze
    • Science of Emotion and Sensibility / v.1 no.1 / pp.79-91 / 1998
  • The purpose of this study was to determine whether autonomic nervous system responses specific to each subjective emotional state elicited by IAPS (International Affective Picture System) pictures exist. Heart rate, respiration rate, and skin conductance responses were measured while IAPS pictures eliciting negative emotions (anger, sadness, surprise) and positive emotions (happiness, excitement) were each presented for 60 seconds. During the initial 30 seconds of visual stimulation, statistically significant heart rate deceleration and respiration rate decrease appeared, together with distinct skin conductance responses. Heart rate deceleration was larger for excitement than for disgust, and the amplitude of the skin conductance response was larger for excitement than for disgust. The rise time of the skin conductance response, meanwhile, tended to be shorter for disgust than for sadness, happiness, or surprise. These autonomic responses (heart rate, respiration rate, skin conductance response) showed clear differences between emotional states, and although there were individual differences, the autonomic responses in a given emotional state showed a highly typical overall pattern. The results suggest that emotion-specific autonomic responses may exist; follow-up research that comprehensively measures and analyzes a range of autonomic emotion indices is required to construct a template for determining psychological emotion from physiological signal analysis.
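Two of the autonomic features compared here, skin conductance response (SCR) amplitude and rise time, can be extracted from a trial epoch as sketched below. The onset and peak detection is deliberately naive placeholder logic, not the study's scoring procedure.

```python
import numpy as np

def scr_features(scl, fs=32):
    """Amplitude and rise time of the largest deflection after stimulus onset."""
    baseline = scl[:fs].mean()              # first second as baseline level
    peak_idx = int(np.argmax(scl))
    amplitude = scl[peak_idx] - baseline
    # Rise time: from the last pre-peak baseline crossing up to the peak.
    below = np.nonzero(scl[:peak_idx] <= baseline)[0]
    onset_idx = int(below[-1]) if below.size else 0
    rise_time = (peak_idx - onset_idx) / fs
    return amplitude, rise_time

# Placeholder epoch: flat baseline, a 3 s ramp, then a plateau.
sig = np.concatenate([np.zeros(64), np.linspace(0, 2, 96), np.full(96, 2.0)])
print(scr_features(sig))   # -> (2.0, ~3.0 s)
```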
