• Title/Summary/Keyword: Emotion-Recognition

Emotion Recognition Using Tone and Tempo Based on Voice for IoT (IoT를 위한 음성신호 기반의 톤, 템포 특징벡터를 이용한 감정인식)

  • Byun, Sung-Woo;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.1 / pp.116-121 / 2016
  • In the Internet of Things (IoT) area, research on recognizing human emotion has been increasing recently. Generally, multi-modal features such as facial images, bio-signals, and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo features based on voice. For this, we build voice databases from broadcast media content. Emotion recognition tests are carried out on tone and tempo features extracted from the voice databases. The results show a noticeable improvement in accuracy compared to conventional methods that use only pitch.
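
As a rough illustration of the tone-and-tempo idea described in the abstract above, the following Python sketch extracts a pitch ("tone") summary and a crude tempo estimate from a speech clip with librosa and stacks them into one feature vector. The file name, frequency range, and choice of statistics are illustrative assumptions, not the paper's actual features.

```python
# Hedged sketch: pitch ("tone") statistics plus a crude tempo estimate,
# assembled into a single feature vector. Not the paper's implementation.
import numpy as np
import librosa

def tone_tempo_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)

    # Tone: fundamental-frequency track via pYIN, summarized over voiced frames.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    f0 = f0[~np.isnan(f0)]
    tone = (np.array([f0.mean(), f0.std(), f0.min(), f0.max()])
            if f0.size else np.zeros(4))

    # Tempo: beat-tracker tempo as a rough proxy for speaking rate.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    return np.concatenate([tone, [tempo]])

# Example (hypothetical file): vec = tone_tempo_features("utterance.wav")
```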

Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2004.05a / pp.132-134 / 2004
  • Emotion recognition between humans draws on a composite of features such as the face, voice, and gestures. Among them, the face reveals emotional expression most clearly. Humans express and recognize emotion using the complex and varied features of the face. This paper proposes a hybrid feature extraction method for recognizing emotions from facial expressions. The hybrid extraction imitates the human emotion recognition process by combining geometry-based feature extraction with a color-distributed histogram; by extracting many features of the facial expression, it can perform emotion recognition robustly.
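
A minimal sketch of the kind of hybrid feature the abstract describes, assuming landmark points and a mouth bounding box are already available from some external detector (both are placeholders, not the authors' pipeline): geometric distances between landmarks concatenated with a color-distributed histogram of a facial region.

```python
# Hedged sketch: geometric distances + colour histogram of a face region.
# Landmark and ROI inputs are assumed to come from an external detector.
import numpy as np
import cv2

def hybrid_features(image_bgr, landmarks, mouth_box):
    # Geometric part: pairwise distances between facial landmark points
    # (landmarks: (N, 2) array of x, y coordinates).
    pts = np.asarray(landmarks, dtype=np.float32)
    iu = np.triu_indices(len(pts), k=1)
    geo = np.linalg.norm(pts[iu[0]] - pts[iu[1]], axis=1)

    # Colour part: hue-saturation histogram of the mouth region of interest.
    x, y, w, h = mouth_box
    roi = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [16, 8], [0, 180, 0, 256]).flatten()
    hist /= hist.sum() + 1e-8

    return np.concatenate([geo, hist])
```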

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1364-1369 / 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of a lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim is to compare various approaches to feature extraction and to propose a basis for extracting useful features that improve SER performance.
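
In the spirit of the comparison the abstract calls for, the sketch below scores a few candidate feature sets with one fixed classifier and cross-validation. The specific extractors (MFCC statistics, simple prosody statistics) and the SVM choice are assumptions for illustration, not the paper's protocol.

```python
# Hedged sketch: cross-validated comparison of candidate SER feature sets.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_feats(y, sr):
    # Mean and standard deviation of 13 MFCCs over the utterance.
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def prosody_feats(y, sr):
    # Coarse prosody: pitch track (YIN) and frame energy statistics.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    rms = librosa.feature.rms(y=y)[0]
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

def compare_feature_sets(clips, labels, sr=16000):
    """clips: list of 1-D waveforms; labels: emotion labels (placeholders)."""
    candidates = [
        ("mfcc", mfcc_feats),
        ("prosody", prosody_feats),
        ("mfcc+prosody",
         lambda y, s: np.concatenate([mfcc_feats(y, s), prosody_feats(y, s)])),
    ]
    results = {}
    for name, fn in candidates:
        X = np.stack([fn(y, sr) for y in clips])
        results[name] = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
    return results
```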

Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot (감정 상호작용 로봇을 위한 신뢰도 평가를 이용한 화자독립 감정인식)

  • Kim, Eun-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.755-759 / 2009
  • The ability to recognize human emotion is one of the hallmarks of human-robot interaction. In particular, speaker-independent emotion recognition is a challenging issue for the commercial use of speech emotion recognition systems. In general, speaker-independent systems show lower accuracy than speaker-dependent systems, because emotional feature values depend on the speaker and his or her gender. Hence, this paper describes speaker-independent emotion recognition that rejects uncertain inputs using a confidence measure, making the recognition system more consistent and accurate. A comparison of the proposed methods with a conventional method clearly confirmed their improvement and effectiveness.
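
A minimal sketch of confidence-based rejection in the general sense described above, assuming any scikit-learn classifier with probability outputs; the threshold value and the rejection label are illustrative choices, not the paper's confidence measure.

```python
# Hedged sketch: reject predictions whose top-class probability is too low.
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_with_rejection(clf, X, threshold=0.6, reject_label="unknown"):
    """clf must expose predict_proba; predictions below threshold are rejected."""
    proba = clf.predict_proba(X)
    best = proba.argmax(axis=1)
    conf = proba[np.arange(len(X)), best]
    labels = clf.classes_[best].astype(object)
    labels[conf < threshold] = reject_label   # low-confidence inputs are rejected
    return labels, conf

# Usage (X_train, y_train, X_test are placeholders):
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# y_pred, conf = predict_with_rejection(clf, X_test, threshold=0.6)
```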

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo;Jeong, In-Cheol;Ham, Ho-Sang
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1281-1285 / 2004
  • There are numerous emotions in the human world. Humans express and recognize their emotions through various channels, for example the eyes, nose, and mouth. In particular, when recognizing emotion from facial expressions, humans can do so very flexibly and robustly because they use these various channels together. The hybrid feature extraction algorithm is based on this human process: it combines geometrical feature extraction with a color-distributed histogram, and the input emotion is then classified through independent, parallel learning of neural networks. Also, for natural classification of emotion, an advancing two-dimensional emotion space is introduced and used in this paper, which enables flexible and smooth classification of emotion.
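
Loosely following the "independently parallel learning of the neural network" described above, this sketch trains two small MLPs, one on geometric features and one on color-histogram features, and averages their class probabilities. The network sizes and the averaging rule are assumptions, not the authors' configuration.

```python
# Hedged sketch: two feature-specific networks trained in parallel, fused late.
import numpy as np
from sklearn.neural_network import MLPClassifier

class ParallelHybridClassifier:
    def __init__(self):
        self.net_geo = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        self.net_col = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)

    def fit(self, X_geo, X_col, y):
        # Each network learns independently on its own feature representation.
        self.net_geo.fit(X_geo, y)
        self.net_col.fit(X_col, y)
        return self

    def predict(self, X_geo, X_col):
        # Late fusion: average the two probability estimates per class.
        proba = (self.net_geo.predict_proba(X_geo)
                 + self.net_col.predict_proba(X_col)) / 2.0
        return self.net_geo.classes_[proba.argmax(axis=1)]
```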

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1175-1186 / 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using multiple different feature representations. Next, these classifiers generate a recognition result (namely, a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, the CK+ FER database was used for extensive comparative experiments. The feasibility of our WSVC algorithm is demonstrated by comparison with recently developed FER algorithms.
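
A minimal sketch of weighted soft voting over a face sequence, assuming per-frame class probabilities and per-frame quality scores have already been computed; combining the two quality cues by a simple product is an assumption, not the paper's exact weighting.

```python
# Hedged sketch: quality-weighted soft voting over per-frame classifier outputs.
import numpy as np

def weighted_soft_voting(frame_probas, intensities, frontal_degrees):
    """
    frame_probas    : (T, C) soft outputs, one probability row per frame
    intensities     : (T,)  peak-expression-intensity scores in [0, 1]
    frontal_degrees : (T,)  frontal-pose scores in [0, 1]
    Returns the index of the winning emotion class for the whole sequence.
    """
    probas = np.asarray(frame_probas, dtype=float)
    w = np.asarray(intensities) * np.asarray(frontal_degrees)   # assumed product form
    w = w / (w.sum() + 1e-8)
    fused = (w[:, None] * probas).sum(axis=0)                    # weighted combination
    return int(fused.argmax())
```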

Emotion Recognition using Pitch Parameters of Speech (음성의 피치 파라메터를 사용한 감정 인식)

  • Lee, Guehyun;Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.3 / pp.272-278 / 2015
  • This paper studied various parameter extraction methods using the pitch information of speech for the development of an emotion recognition system. For this purpose, pitch parameters were extracted from a Korean speech database containing various emotions using statistical information and numerical analysis techniques. A GMM-based emotion recognition system was used to compare the performance of the pitch parameters, and a sequential feature selection method was used to select the parameters showing the best emotion recognition performance. Experiments on recognizing four emotions showed a 63.5% recognition rate using a combination of 15 out of 56 pitch parameters, and experiments on detecting the presence of emotion showed an 80.3% recognition rate using a combination of 14 parameters.
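
As an illustration of the GMM-plus-sequential-selection pipeline sketched in the abstract, the code below fits one Gaussian mixture per emotion and greedily adds the pitch parameters that most improve validation accuracy. The mixture size, covariance type, and selection cap are assumptions, not the paper's settings.

```python
# Hedged sketch: per-emotion GMMs with greedy sequential forward feature selection.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmms(X, y, n_components=4):
    # One mixture model per emotion class over the selected feature columns.
    y = np.asarray(y)
    return {c: GaussianMixture(n_components, covariance_type="diag",
                               random_state=0).fit(X[y == c])
            for c in np.unique(y)}

def gmm_predict(models, X):
    # Assign each sample to the emotion whose GMM gives the highest likelihood.
    classes = list(models)
    scores = np.column_stack([models[c].score_samples(X) for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]

def sequential_forward_selection(X_tr, y_tr, X_va, y_va, max_feats=15):
    selected, remaining = [], list(range(X_tr.shape[1]))
    while remaining and len(selected) < max_feats:
        accs = []
        for f in remaining:
            cols = selected + [f]
            models = fit_gmms(X_tr[:, cols], y_tr)
            accs.append((gmm_predict(models, X_va[:, cols]) == np.asarray(y_va)).mean())
        best = remaining[int(np.argmax(accs))]   # keep the most helpful parameter
        selected.append(best)
        remaining.remove(best)
    return selected
```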

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. These inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
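
A minimal sketch of the LK tracking step only, assuming the initial feature points come from an ASM fit (or any landmark detector). OpenCV's pyramidal Lucas-Kanade tracker is used here, and occlusion handling is reduced to dropping points whose status flag indicates a lost track, which is a simplification of the paper's method.

```python
# Hedged sketch: track facial feature points between frames with LK optical flow.
import cv2
import numpy as np

def track_points(prev_gray, next_gray, points):
    """points: (N, 2) float32 array of feature locations in prev_gray."""
    p0 = points.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.ravel() == 1          # drop points the tracker lost (e.g. occlusion)
    return p1.reshape(-1, 2)[ok], ok
```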

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 122 college students participated in the experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Statistical analysis for emotion classification was done by DFA (discriminant function analysis, SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from the baseline, and the accuracy of emotion classification was 84.7%. Conclusion: The study showed that emotions can be classified from various physiological signals. However, future work should obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and should examine the stability and reliability of this result by comparing it with the accuracy of emotion classification using other algorithms. Application: This work can help emotion recognition studies recognize various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion detection. It is also useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis of an emotion recognition system for human-computer interaction.
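
A small sketch of the classification step only, assuming the 27 features are already extracted for both the baseline and emotional segments; scikit-learn's linear discriminant analysis stands in here for the SPSS discriminant function analysis used in the study.

```python
# Hedged sketch: classify emotions from baseline-difference physiological features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_baseline_differences(X_emotion, X_baseline, y, cv=5):
    """X_emotion, X_baseline: (n_samples, n_features) matrices; y: emotion labels."""
    X_diff = np.asarray(X_emotion) - np.asarray(X_baseline)   # emotional state minus baseline
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, X_diff, y, cv=cv).mean()      # mean classification accuracy
```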

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International Journal of Advanced Smart Convergence / v.9 no.1 / pp.193-201 / 2020
  • The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expression, and physiological measurements. In order to compare different approaches, we designed a feature-based recognition system as a benchmark, which carries out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion in our experiment improved the recognition accuracy of the unimodal approach, while the performance of trimodal fusion varied strongly across individuals. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single modality performed best in our experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits a parametrized decision process. Using the PDF scheme, we achieved a 16% improvement in accuracy for subject-dependent recognition and 10% for subject-independent recognition compared with the best unimodal results.
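
A loose, hedged interpretation of the emotion-specific, parametrized decision idea (not the published PDF scheme): one binary classifier per modality and emotion, with per-modality, per-emotion fusion weights treated as free parameters of the decision step. Class names, modality keys, and the logistic-regression choice are all assumptions.

```python
# Hedged sketch: emotion-specific classifiers per modality, fused with
# parametric weights at decision time. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ParametricDecisionFusion:
    def __init__(self, emotions, modalities):
        self.emotions = list(emotions)
        self.modalities = list(modalities)
        self.clfs = {}                                  # (modality, emotion) -> classifier
        # Fusion weights are the free parameters of the decision step; uniform by default.
        self.w = np.ones((len(self.modalities), len(self.emotions)))

    def fit(self, X_by_mod, y):
        """X_by_mod: dict modality -> (n_samples, n_features); y: emotion labels."""
        y = np.asarray(y)
        for name in self.modalities:
            for emo in self.emotions:
                clf = LogisticRegression(max_iter=1000)
                self.clfs[(name, emo)] = clf.fit(X_by_mod[name], (y == emo).astype(int))
        return self

    def predict(self, X_by_mod):
        n = len(next(iter(X_by_mod.values())))
        scores = np.zeros((n, len(self.emotions)))
        for m, name in enumerate(self.modalities):
            for e, emo in enumerate(self.emotions):
                p = self.clfs[(name, emo)].predict_proba(X_by_mod[name])[:, 1]
                scores[:, e] += self.w[m, e] * p        # weighted emotion-specific score
        return [self.emotions[i] for i in scores.argmax(axis=1)]
```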