• Title/Summary/Keyword: Korean Emotion Feature

Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.18 no.1 / pp.97-101 / 2019
  • Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes a novel emotion recognition method that extracts facial keypoints with an Active Appearance Model (AAM) and classifies the emotion with a proposed classification model of the facial features. The appearance model captures variations in expression, which the proposed classification model evaluates as the facial expression changes. The method classifies four basic emotions (normal, happy, sad, and angry). To evaluate its performance, we measured the success rate on common datasets, achieving a best accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs well on emotion recognition compared to existing schemes.
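
The classification model itself is not detailed in the abstract. As a rough, generic illustration of the underlying idea (classifying emotion from the displacement of tracked facial keypoints relative to a neutral face), here is a minimal nearest-centroid sketch in Python; the emotion set matches the paper, but the feature layout, helper names, and classifier choice are assumptions, not the authors' model, and it assumes at least one training example per emotion.

```python
import numpy as np

EMOTIONS = ["normal", "happy", "sad", "angry"]

def displacement_features(keypoints, neutral_keypoints):
    """Flatten per-keypoint (dx, dy) displacements from the neutral face.

    keypoints, neutral_keypoints: (N, 2) arrays of tracked facial landmarks.
    """
    return (np.asarray(keypoints, float) - np.asarray(neutral_keypoints, float)).ravel()

def train_centroids(features, labels):
    """Average the displacement vectors of every example of each emotion."""
    feats, labels = np.asarray(features), np.asarray(labels)
    return {e: feats[labels == e].mean(axis=0) for e in EMOTIONS}

def classify(feature, centroids):
    """Pick the emotion whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda e: np.linalg.norm(feature - centroids[e]))
```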

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.34-39 / 2009
  • Human emotion is a subjective response: it is impulsive and unconsciously expresses intentions and needs, and it therefore carries rich contextual information about users of ubiquitous computing environments or intelligent robot systems. Indicators from which a user's emotion can be read include facial images, voice signals, biological signal spectra, and so on. In this paper, we generate separate facial and voice emotion recognition results from facial images and speech to increase the convenience and efficiency of emotion recognition. We also extract the best-fitting features from the image and sound to raise the recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the recognition results, we propose a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario.
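
Neither the fusion step nor the Bayesian network structure is specified in the abstract, so the sketch below only illustrates the two named ideas generically: feature-level fusion by concatenating normalized facial and voice feature vectors, and service reasoning by marginalizing a hand-specified conditional probability table over the recognized-emotion posterior. All variable names, contexts, services, and probabilities here are hypothetical.

```python
import numpy as np

def fuse_features(facial_feat, voice_feat):
    """Feature-level fusion: concatenate the two modality vectors after
    per-modality normalization so neither modality dominates."""
    f = np.asarray(facial_feat, dtype=float)
    v = np.asarray(voice_feat, dtype=float)
    f /= (np.linalg.norm(f) or 1.0)
    v /= (np.linalg.norm(v) or 1.0)
    return np.concatenate([f, v])

# Hypothetical CPT: P(service | emotion, context), specified by hand.
P_SERVICE = {
    ("sad", "alone"):   {"play_music": 0.6, "call_helper": 0.3, "none": 0.1},
    ("angry", "alone"): {"play_music": 0.2, "call_helper": 0.5, "none": 0.3},
    ("happy", "alone"): {"play_music": 0.3, "call_helper": 0.0, "none": 0.7},
}

def infer_service(emotion_posterior, context):
    """Marginalize over the recognized-emotion posterior:
    P(service | context) = sum_e P(service | e, context) * P(e)."""
    scores = {}
    for emotion, p_e in emotion_posterior.items():
        for service, p_s in P_SERVICE[(emotion, context)].items():
            scores[service] = scores.get(service, 0.0) + p_s * p_e
    return max(scores, key=scores.get)

print(infer_service({"sad": 0.7, "angry": 0.2, "happy": 0.1}, "alone"))
```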

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi;Myung Jin Lim;Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Online communication has recently increased with the spread of contactless services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is actively underway. Among the modalities, speech is attracting attention as a means of understanding emotion through both acoustic and linguistic information, but emotions are usually recognized from a single speech feature value. Because multiple emotions coexist in a conversation in a complex manner, a method for recognizing several emotions at once is needed. In this paper, we therefore propose a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and accounts for the passage of time in order to recognize complex, inherent emotions.
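
The paper's model architecture is not reproduced here. As a generic picture of what multi-emotion regression means (predicting a continuous intensity per emotion instead of a single label), here is a minimal closed-form multi-output ridge regression sketch in numpy; the emotion set, feature dimensionality, and regularization strength are placeholders.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fit_multi_emotion_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression with one output column per emotion.

    X: (n_samples, n_features) speech feature vectors.
    Y: (n_samples, len(EMOTIONS)) emotion intensity targets in [0, 1].
    Returns weights W with shape (n_features, len(EMOTIONS)).
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    d = X.shape[1]
    # W = (X^T X + alpha I)^{-1} X^T Y, solved without an explicit inverse.
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def predict_intensities(x, W):
    """Return a dict of predicted intensity per emotion for one utterance."""
    return dict(zip(EMOTIONS, np.asarray(x, dtype=float) @ W))

# Toy usage: random features stand in for extracted speech feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
Y = rng.uniform(size=(50, 4))
W = fit_multi_emotion_ridge(X, Y)
print(predict_intensities(X[0], W))
```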

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyeon;Sim Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system for speech signals and compare the results. First, emotional speech databases are required, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these speech features. The algorithms we evaluate are artificial neural networks, Bayesian learning, Principal Component Analysis (PCA), and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition is not yet a mature technique: emotion feature selection and the choice of a suitable classification method remain open questions, and we hope this paper serves as a reference in that discussion.
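
Of the four methods listed, the LBG (Linde-Buzo-Gray) codebook algorithm is probably the least familiar today, so a minimal numpy sketch of its split-and-refine loop follows as a reminder of how it works; this is the generic textbook version, and the codebook size, perturbation factor, and stopping rule are assumptions rather than the authors' configuration.

```python
import numpy as np

def lbg_codebook(data, n_codewords, eps=1e-3, max_iters=50):
    """Train a vector-quantization codebook by iterative splitting (LBG).

    data: (n_samples, n_features) feature vectors (e.g., speech features).
    n_codewords: desired codebook size; a power of two in this sketch.
    """
    data = np.asarray(data, dtype=float)
    codebook = data.mean(axis=0, keepdims=True)  # start from the global mean
    while len(codebook) < n_codewords:
        # Split every codeword into a perturbed +/- pair, doubling the size.
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(max_iters):
            # Assign each vector to its nearest codeword (Euclidean).
            dists = np.linalg.norm(data[:, None, :] - codebook[None], axis=2)
            nearest = dists.argmin(axis=1)
            # Re-estimate each codeword as the centroid of its cell.
            new = np.array([data[nearest == k].mean(axis=0)
                            if np.any(nearest == k) else codebook[k]
                            for k in range(len(codebook))])
            if np.allclose(new, codebook):
                break
            codebook = new
    return codebook

codebook = lbg_codebook(np.random.default_rng(0).normal(size=(256, 2)), 8)
```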

Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy (FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식)

  • Lee, Woo-Seok;Roh, Yong-Wan;Hong, Hwang-Seok
    • Proceedings of the KIEE Conference / 2008.04a / pp.99-100 / 2008
  • This paper proposes a Gaussian Mixture Model (GMM) based speech emotion recognition method using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. We use a speech database covering four emotions (anger, sadness, happiness, and neutrality) and perform recognition experiments for each predefined emotion and gender. The experimental results show that the proposed recognition using FFT- and MFB-based spectral entropy outperforms existing GMM-based emotion recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. We attained a maximum recognition rate of 75.1% using MFB spectral entropy and delta MFB spectral entropy.
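
To make the feature definitions concrete, here is a short numpy sketch of frame-wise FFT spectral entropy and a standard regression-based delta; the frame length, hop, window, and delta width are assumptions. The MFB variant would apply the same entropy formula to mel filter-bank energies rather than raw FFT bins.

```python
import numpy as np

def fft_spectral_entropy(signal, frame_len=512, hop=256):
    """Frame-wise spectral entropy: treat each frame's normalized power
    spectrum as a probability mass function and compute its Shannon entropy."""
    entropies = []
    for i in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[i:i + frame_len] * np.hanning(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        pmf = power / (power.sum() + 1e-12)
        entropies.append(-np.sum(pmf * np.log2(pmf + 1e-12)))
    return np.array(entropies)

def delta(feature, width=2):
    """Regression-based delta over +/- `width` neighboring frames."""
    padded = np.pad(feature, width, mode="edge")
    num = sum(w * (padded[width + w:len(feature) + width + w]
                   - padded[width - w:len(feature) + width - w])
              for w in range(1, width + 1))
    return num / (2 * sum(w * w for w in range(1, width + 1)))
```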

A Study Regarding Head Image's Through Fashion Collection (패션컬렉션에 나타난 Head Image 연구)

  • 김애경;이경희
    • Journal of the Korean Society of Clothing and Textiles / v.27 no.8 / pp.904-912 / 2003
  • This study analyzes, through fashion collections, the formative features, fashion sensibility, and emotional meaning structure of the "Head Image," which shapes an individual's image, and examines the correlations among them, in order to offer fundamental data for image making. First, because personality and charm operate as important factors in the fashion sensibility of the Head Image, careful attention to the Head Image is highly effective in creating a charming and personal image. Second, emotional responses to the Head Image were rated higher than its fashion sensibility, showing that the Head Image exerts a stronger influence on emotion than on fashion sensibility. Third, the correlation between fashion sensibility and emotion shows that a personal Head Image attracts the public gaze by arousing negative emotion, while an attractive Head Image gives a pleasant feeling by arousing positive emotion. Fourth, among the Head Image types, the Avant-garde, Punk, and Kitsch images were rated the most personal, the Romantic and Ethnic images the most attractive, the Natural image the most feminine, and the Elegant image the most mature. Fifth, comparing the appraisals of experts and amateurs on the fashion sensibility and emotion of the Head Image, experts are accustomed to peculiar and strong Head Images, so amateurs respond to them more sensitively and evaluate them more highly.

Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model (운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식)

  • Kwak, Hyun-Suk;Kim, Soo-Hyun;Kwak, Yoon-Keun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2002.11b / pp.762-766 / 2002
  • This paper describes an emotion recognition algorithm using the Hidden Markov Model (HMM). The relation between mechanical systems and humans has so far been one-sided, which is why people are reluctant to grow familiar with today's multi-service robots. If the capability of emotion recognition is granted to a robot system, the conception of its mechanical part will change considerably. Pitch and energy extracted from human speech are important factors for classifying emotions (neutral, happy, sad, angry, etc.); they are called prosodic features. HMM is a powerful and effective method for constructing a statistical model from a characteristic vector composed of a mixture of prosodic features.
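
As a concrete picture of the prosodic features named in the abstract (per-frame pitch and energy), here is a minimal numpy sketch using short-time energy and autocorrelation-based pitch estimation; the sample rate, frame size, and pitch search range are assumptions, and a real front end would also gate out unvoiced frames.

```python
import numpy as np

def prosodic_features(signal, sr=16000, frame_len=400, hop=160):
    """Per-frame (pitch_hz, energy) from a mono speech signal.

    Pitch is estimated as the lag of the autocorrelation peak inside a
    plausible speech range (50-400 Hz); unvoiced frames come out noisy,
    which a real system would gate with an energy or voicing threshold.
    """
    lo, hi = sr // 400, sr // 50          # lag range for 400 Hz .. 50 Hz
    feats = []
    for i in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[i:i + frame_len] - signal[i:i + frame_len].mean()
        energy = float(np.sum(frame ** 2))
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((sr / lag, energy))
    return np.array(feats)
```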

Sex Differences in the Memories for Emotional Stimuli (정서적 자극에 대한 기억에 있어서의 남녀 차이에 관한 연구)

  • 박수애;안진경
    • Science of Emotion and Sensibility / v.6 no.3 / pp.29-43 / 2003
  • This study examined sex differences in memory for emotional stimuli. After participants received a memory-task instruction to remember the given stimuli, emotion-inducing and neutral photographs were presented. To minimize the possibility of regulating the expression of the mood induced by the emotional stimuli, and to determine whether the antecedent-focused emotion regulation process damages men's memory of emotional stimuli, memory was measured directly after the presentation of each photograph using a free-recall method. Sex differences in memory for emotional and neutral stimuli were then measured and compared. Women memorized more stimuli than men and, as expected, remembered emotional stimuli better than neutral ones. The analysis of central and peripheral features indicated that women remembered the central features of emotional stimuli better than those of neutral ones, whereas men showed no such difference. These results suggest that men's poorer memory for emotional stimuli is caused by the antecedent-focused emotion regulation process.

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed with multi-resolution analysis based on the discrete wavelet transform, and feature vectors are obtained through Independent Component Analysis (ICA). Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband and obtains the final result from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
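
The per-subband recognizers and ICA features are beyond a short sketch, but the decision-combination ideas in the abstract (per-subband results merged into a speech decision, then facial and speech decisions fused) can be shown generically; the averaging and product-rule combiners and all scores below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def merge_subband_decisions(subband_posteriors):
    """Combine per-wavelet-subband class posteriors by averaging (a simple
    multi-decision scheme); rows are subbands, columns are emotions."""
    return np.asarray(subband_posteriors).mean(axis=0)

def fuse_modalities(facial_posterior, speech_posterior):
    """Decision-level fusion of the two modalities with the product rule."""
    p = np.asarray(facial_posterior) * np.asarray(speech_posterior)
    return p / p.sum()

speech = merge_subband_decisions([
    [0.4, 0.1, 0.2, 0.1, 0.1, 0.1],   # subband 1 classifier output
    [0.5, 0.1, 0.1, 0.1, 0.1, 0.1],   # subband 2 classifier output
])
facial = np.array([0.3, 0.1, 0.3, 0.1, 0.1, 0.1])
fused = fuse_modalities(facial, speech)
print(EMOTIONS[int(np.argmax(fused))])   # -> "happiness"
```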

Emotion Recognition using Robust Speech Recognition System (강인한 음성 인식 시스템을 사용한 감정 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.586-591 / 2008
  • This paper studies an emotion recognition system combined with a robust speech recognition system in order to improve emotion recognition performance. For this purpose, the effect of emotional variation on speech recognition and robust feature parameters for the recognizer were studied using a speech database containing various emotions. Final emotion recognition is performed on the input utterance with the emotional model selected according to the speech recognition result. In the experiments, the robust speech recognizer is an HMM-based, speaker-independent word recognizer using RASTA mel-cepstral coefficients and their derivatives, with cepstral mean subtraction (CMS) as signal bias removal. Experimental results showed that the emotion recognizer combined with the speech recognition system performed better than the emotion recognizer alone.
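
Cepstral mean subtraction, the signal-bias-removal step the abstract names, is simple enough to show exactly: subtracting each coefficient's per-utterance mean cancels a stationary convolutional (channel) bias, since such a bias is additive in the cepstral domain. The RASTA filtering and HMM stages are omitted, and the feature dimensions in the toy usage are assumptions.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Signal bias removal: subtract each cepstral coefficient's mean over
    the utterance, cancelling a stationary channel/convolutional bias.
    cepstra: (n_frames, n_coeffs) mel-cepstral feature matrix."""
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Toy usage: a constant offset (simulated channel bias) is removed exactly.
rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 13)) + 3.0      # biased features
clean = cepstral_mean_subtraction(feats)
print(np.allclose(clean.mean(axis=0), 0.0))   # -> True
```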