• Title/Summary/Keywords: Emotion Classification

Search results: 292 items

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지
    • /
    • Vol. 31 No. 2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotions from physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Emotion classification was performed by discriminant function analysis (DFA, SPSS 15.0) on the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from those during the baseline, and the emotion classification accuracy was 84.7%. Conclusion: Our study has shown that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, facial temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result by comparing it with the accuracy of other classification algorithms. Application: These findings can help emotion recognition studies recognize various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. They can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
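
A minimal sketch of the baseline-subtraction and discriminant-analysis step described in this abstract, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the SPSS DFA; the arrays, their shapes, and the random data below are hypothetical placeholders, not the study's data.

```python
# Sketch: classify emotions from baseline-corrected physiological features
# with linear discriminant analysis (analogue of the DFA used in the paper).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X_emotion / X_baseline: (n_trials, 27) feature matrices extracted from
# EDA, SKT, PPG and ECG; y: emotion labels (0=boredom, 1=pain, 2=surprise).
rng = np.random.default_rng(0)
X_emotion = rng.normal(size=(120, 27))   # placeholder data
X_baseline = rng.normal(size=(120, 27))  # placeholder data
y = rng.integers(0, 3, size=120)

# The paper classifies the differences between emotional-state and baseline values.
X = X_emotion - X_baseline

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```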

음성신호기반의 감정분석을 위한 특징벡터 선택 (Discriminative Feature Vector Selection for Emotion Classification Based on Speech)

  • 최하나;변성우;이석필
    • 전기학회논문지
    • /
    • Vol. 64 No. 9
    • /
    • pp.1363-1368
    • /
    • 2015
  • Recently, computers have become smaller thanks to advances in computing technology, and many wearable devices have appeared. Accordingly, a computer's ability to recognize human emotion has become important, and research on analyzing emotional states is increasing. The human voice carries much information about emotion. This paper proposes a discriminative feature vector selection method for emotion classification based on speech. Feature vectors such as pitch, MFCC, LPC, and LPCC are extracted from voice signals divided into four emotion classes (happy, normal, sad, angry), and the separability of the extracted feature vectors is compared using the Bhattacharyya distance. The more effective feature vectors are then recommended for emotion classification.
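
A minimal sketch of the separability comparison described above: candidate speech features (e.g., pitch, MFCC, LPC, LPCC statistics) are ranked by the Bhattacharyya distance between two emotion classes, assuming each feature is modeled as a 1-D Gaussian. The feature matrix and labels are hypothetical placeholders.

```python
# Sketch: rank speech features by class separability (Bhattacharyya distance).
import numpy as np

def bhattacharyya_gauss(x1, x2):
    """Bhattacharyya distance between two 1-D Gaussian-modeled samples."""
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var() + 1e-12, x2.var() + 1e-12
    return 0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2)) + \
           0.25 * (m1 - m2) ** 2 / (v1 + v2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # 20 candidate features per utterance (placeholder)
y = rng.integers(0, 2, size=200)    # two emotion classes, e.g. happy vs. sad

# Larger distance -> better separability -> more useful feature.
dist = [bhattacharyya_gauss(X[y == 0, j], X[y == 1, j]) for j in range(X.shape[1])]
ranking = np.argsort(dist)[::-1]
print("features ranked by separability:", ranking[:5])
```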

얼굴 특징점 추적을 통한 사용자 감성 인식 (Emotion Recognition based on Tracking Facial Keypoints)

  • 이용환;김흥준
    • 반도체디스플레이기술학회지
    • /
    • Vol. 18 No. 1
    • /
    • pp.97-101
    • /
    • 2019
  • Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes an emotion recognition method based on facial keypoints, which understands and classifies human emotion using an Active Appearance Model and a proposed classification model of the facial features. The appearance model captures expression variations, which are evaluated by the proposed classification model as the facial expression changes. The proposed method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we measured the success rate on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs emotion recognition effectively compared to existing schemes.
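
A minimal sketch in the spirit of this abstract: landmark coordinates are assumed to be already tracked (the paper fits an Active Appearance Model, which is not reproduced here), keypoint displacements from a neutral face serve as features, and a plain SVM stands in for the paper's classification model. All arrays below are hypothetical placeholders.

```python
# Sketch: classify emotion from displacements of tracked facial keypoints.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

EMOTIONS = ["normal", "happy", "sad", "angry"]

def keypoint_features(landmarks, neutral):
    """Displacement of each keypoint from the neutral face, flattened per sample."""
    return (landmarks - neutral).reshape(len(landmarks), -1)

rng = np.random.default_rng(0)
neutral = rng.normal(size=(68, 2))                          # hypothetical 68-point layout
landmarks = neutral + rng.normal(scale=0.1, size=(400, 68, 2))
y = rng.integers(0, 4, size=400)

X = keypoint_features(landmarks, neutral)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print("predicted:", EMOTIONS[int(clf.predict(X[:1])[0])])
```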

Use of Word Clustering to Improve Emotion Recognition from Short Text

  • Yuan, Shuai;Huang, Huan;Wu, Linjing
    • Journal of Computing Science and Engineering
    • /
    • Vol. 10 No. 4
    • /
    • pp.103-110
    • /
    • 2016
  • Emotion recognition is an important component of affective computing and is significant for implementing natural and friendly human-computer interaction. An effective approach to recognizing emotion from text is based on machine learning, which treats emotion recognition as a classification problem. However, the texts involved are usually very short, leaving a very large, sparse feature space that decreases the performance of emotion classification. This paper addresses the feature sparseness problem and substantially improves emotion recognition performance on short texts by representing short texts with word-cluster features, offering a novel word clustering algorithm, and using a new feature weighting scheme. Emotion classification experiments were performed with different features and weighting schemes on a publicly available dataset. The experimental results suggest that the word-cluster features and the proposed weighting scheme can partly resolve the problems of feature sparseness and emotion recognition performance.
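
A minimal sketch of the word-cluster representation idea described above: pre-trained word vectors are clustered with k-means (a stand-in for the paper's clustering algorithm), and each short text becomes a bag of cluster counts. The vocabulary, vectors, and texts are hypothetical placeholders.

```python
# Sketch: represent short texts with word-cluster features.
import numpy as np
from sklearn.cluster import KMeans

vocab = ["joy", "delight", "grief", "tears", "rage", "fury"]
rng = np.random.default_rng(0)
word_vecs = rng.normal(size=(len(vocab), 50))   # stand-in for real word embeddings

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(word_vecs)
cluster_of = dict(zip(vocab, labels))

def cluster_features(text):
    """Bag-of-clusters vector: counts of word clusters appearing in the text."""
    feats = np.zeros(k)
    for w in text.lower().split():
        if w in cluster_of:
            feats[cluster_of[w]] += 1   # a weighting scheme could replace raw counts
    return feats

print(cluster_features("tears and grief"))
```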

프로파일기반의 FLD와 단계적 분류를 이용한 감성 인식 기법 (Emotion Recognition Method Using FLD and Staged Classification Based on Profile Data)

  • 김재협;오나래;전갑송;문영식
    • 전자공학회논문지CI
    • /
    • Vol. 48 No. 6
    • /
    • pp.35-46
    • /
    • 2011
  • This paper proposes an emotion recognition method using staged classification based on Fisher's Linear Discriminant (FLD). For multi-class problems involving two or more emotions, the proposed method builds a staged classification model by sequentially combining binary classifiers, improving classification performance for multiple emotion classes in a highly complex feature space. At each stage of training, an FLD space is constructed from two groups of emotion classes, and a binary classification model is trained in that space using AdaBoost. This staged training is repeated until all emotion classes have been separated. The proposed method was evaluated on the MIT physiological signal profile data. Experimental results show a classification accuracy of about 72% for eight emotions and about 93% for a specific subset of three emotions.
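
A minimal sketch in the spirit of this staged scheme: at each stage the remaining classes are split into two groups, features are projected onto a Fisher (LDA) axis, and an AdaBoost binary classifier decides which group a sample falls into. The class grouping, data, and hyperparameters below are hypothetical.

```python
# Sketch: staged binary classification with an FLD projection plus AdaBoost.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))        # placeholder feature matrix
y = rng.integers(0, 4, size=300)      # four emotion classes, for illustration

# Each stage separates one group of classes from another until all are resolved.
stages = [({0, 1}, {2, 3}), ({0}, {1}), ({2}, {3})]
models = []
for left, right in stages:
    mask = np.isin(y, list(left | right))
    target = np.isin(y[mask], list(left)).astype(int)
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                        AdaBoostClassifier(n_estimators=50))
    clf.fit(X[mask], target)
    models.append((left, right, clf))
print("trained %d stages" % len(models))
```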

감정 분류를 위한 한국어 감정 자질 추출 기법과 감정 자질의 유용성 평가 (A Korean Emotion Features Extraction Method and Their Availability Evaluation for Sentiment Classification)

  • 황재원;고영중
    • 인지과학
    • /
    • Vol. 19 No. 4
    • /
    • pp.499-517
    • /
    • 2008
  • This paper proposes and evaluates an effective method for extracting the emotion features that underlie Korean sentiment classification, and demonstrates their usefulness. Korean emotion features can be extracted by starting from representative emotion-bearing words and expanding them, and the extracted features play an important role in classifying the sentiment of a document. To extract the emotion features central to document sentiment classification, the features were expanded using synonym information from an English thesaurus and then translated into Korean with an English-Korean dictionary. To evaluate the extracted Korean emotion features, input documents represented by these features were classified with a Support Vector Machine, a binary classifier. Experimental results show that using the extracted emotion features improved performance by about 14.1% over the content-word-based features commonly used in information retrieval.
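
A minimal sketch of the feature-expansion step described above, assuming WordNet as the English thesaurus (the paper does not name a specific one) and a toy English-Korean dictionary; the seed words and dictionary entries are hypothetical placeholders.

```python
# Sketch: expand seed emotion words with thesaurus synonyms, then map to Korean.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

seeds = ["happy", "sad", "angry"]

def expand_with_synonyms(word):
    """Collect synonym lemmas of a seed word from all of its synsets."""
    return {lemma.name().replace("_", " ")
            for syn in wn.synsets(word) for lemma in syn.lemmas()}

emotion_features = {w: sorted(expand_with_synonyms(w)) for w in seeds}

# In the paper, each expanded English feature is then translated with an
# English-Korean dictionary; a toy stand-in:
en_ko = {"happy": "행복한", "glad": "기쁜", "sad": "슬픈", "angry": "화난"}
korean_features = {en_ko[w] for ws in emotion_features.values()
                   for w in ws if w in en_ko}
print(korean_features)
```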


An Emotion Classification Based on Fuzzy Inference and Color Psychology

  • Son, Chang-Sik;Chung, Hwan-Mook
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 4 No. 1
    • /
    • pp.18-22
    • /
    • 2004
  • It is difficult to understand a person's emotion, since it is subjective and vague. Therefore, we propose a method that effectively classifies human emotions into two types: single emotions and composite emotions. To verify the validity of the proposed method, we conducted two experiments, one based on general fuzzy inference and one based on the α-cut, and compared the results. In the first experiment emotions were classified by fuzzy inference, and in the second they were classified by the α-cut. Our experimental results showed that the classification of emotion based on the α-cut was more definite than that based on fuzzy inference alone.
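
A minimal sketch of the α-cut step described above: membership degrees produced by fuzzy inference (over color-psychology inputs in the paper) are thresholded, and the result is read as a single or composite emotion. The membership values and threshold below are hypothetical.

```python
# Sketch: distinguish single vs. composite emotions with an alpha-cut.
def alpha_cut(memberships, alpha=0.6):
    """Return emotions whose membership degree meets or exceeds alpha."""
    return {e: m for e, m in memberships.items() if m >= alpha}

memberships = {"calm": 0.72, "joy": 0.65, "sadness": 0.20}   # fuzzy inference output
cut = alpha_cut(memberships, alpha=0.6)

if len(cut) == 1:
    print("single emotion:", next(iter(cut)))
elif len(cut) > 1:
    print("composite emotion:", sorted(cut))
else:
    print("no emotion above the alpha-cut")
```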

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23 No. 9
    • /
    • pp.47-54
    • /
    • 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose a Conv1D and BiGRU model with attention for emotion classification on our Bengali music dataset. Preprocessing of the wav files uses MFCCs. To reduce the dimensionality of the feature space, contextual features are extracted by two Conv1D layers, and dropout is used to mitigate overfitting. Two bidirectional GRU networks update past and future emotion representations of the output from the Conv1D layers, and the two BiGRU layers are connected to an attention mechanism that gives more weight to informative MFCC feature vectors; this attention mechanism increases the accuracy of the proposed classification model. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) using a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, achieving 95% accuracy on our Bengali music dataset.
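
A minimal sketch of a Conv1D + BiGRU + attention classifier like the one this abstract describes, built with Keras; the input shape, filter sizes, GRU units, and dropout rates are hypothetical, since the paper's exact hyperparameters are not given here.

```python
# Sketch: Conv1D feature extraction, two BiGRU layers, self-attention, softmax head.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_frames, n_mfcc, n_classes = 216, 40, 4   # MFCC frames x coefficients; Angry/Happy/Relax/Sad

inputs = layers.Input(shape=(n_frames, n_mfcc))
x = layers.Conv1D(64, 5, activation="relu", padding="same")(inputs)
x = layers.Dropout(0.3)(x)                              # curb overfitting
x = layers.Conv1D(128, 5, activation="relu", padding="same")(x)
x = layers.Dropout(0.3)(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Attention()([x, x])                          # self-attention over frames
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(n_classes, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```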