• Title/Summary/Keyword: Vector emotion


Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference / summer / pp.92-96 / 2004
  • Emotion is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more research on emotion synthesis and emotion recognition has been developed for different purposes, each approach using its own methods and parameters measured on the speech signal. In this paper, we propose using a short-time parameter, the Mel-Frequency Cepstral Coefficients (MFCC), and a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing rate, phonetic rate, LPC, etc.) and their derivatives are also tested and combined with the MFCC coefficients in order to find the best combination. Other models, GMM and HMM (discrete and continuous Hidden Markov Models), are studied as well, in the hope that continuous distributions and the temporal behaviour of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach even clearly outperforms human evaluation on the same database, where listeners judged sentences without being allowed to replay or compare them [8], and the result compares favorably with other approaches.

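The core of the approach above, MFCC features scored against per-emotion vector-quantization codebooks, is easy to sketch. The following is a minimal illustration, not the authors' code: it assumes librosa for MFCC extraction and SciPy's k-means for codebook training, and the file lists and codebook size are placeholders.

```python
# Minimal sketch of speaker-dependent emotion recognition with MFCC + VQ.
# Assumes librosa for feature extraction; paths and labels are illustrative.
import numpy as np
import librosa
from scipy.cluster.vq import kmeans, vq

def mfcc_frames(path, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of short-time MFCC vectors."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebooks(files_by_emotion, codebook_size=32):
    """Fit one VQ codebook (k-means centroids) per emotion class."""
    books = {}
    for emotion, files in files_by_emotion.items():
        frames = np.vstack([mfcc_frames(f) for f in files]).astype(float)
        books[emotion], _ = kmeans(frames, codebook_size)
    return books

def classify(path, books):
    """Pick the emotion whose codebook yields the lowest mean distortion."""
    frames = mfcc_frames(path).astype(float)
    return min(books, key=lambda e: vq(frames, books[e])[1].mean())
```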

Emotion Classification Using EEG Spectrum Analysis and Bayesian Approach (뇌파 스펙트럼 분석과 베이지안 접근법을 이용한 정서 분류)

  • Chung, Seong Youb;Yoon, Hyun Joong
    • Journal of Korean Society of Industrial and Systems Engineering / v.37 no.1 / pp.1-8 / 2014
  • This paper proposes an emotion classifier for EEG signals based on Bayes' theorem and a machine learning model trained with a perceptron convergence algorithm. Emotions are represented on the valence and arousal dimensions. Fast Fourier transform spectrum analysis is used to extract features from the EEG signals. To verify the proposed method, we use an open database for emotion analysis using physiological signals (DEAP) and compare the classifier with C-SVC, a variant of the support vector machine. Emotions are defined as two-level and three-level classes in both the valence and arousal dimensions. For the two-level case, the accuracy of valence and arousal estimation is 67% and 66%, respectively; for the three-level case, 53% and 51%. Compared with the best case of C-SVC, the proposed classifier gave 4% and 8% more accurate estimations of valence and arousal for the two-level classes, and showed similar performance to the best case of C-SVC for the three-level classes.
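
A rough sketch of the pipeline this abstract describes, FFT band-power features fed to a Bayes-rule classifier, is given below. The band edges and the Gaussian likelihood are assumptions; the paper's exact features and its perceptron-trained stage are not reproduced here.

```python
# Sketch: FFT band-power features + a simple Bayes classifier, in the
# spirit of the paper. Band edges and Gaussian likelihoods are assumptions.
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, illustrative

def band_powers(eeg, fs):
    """eeg: (n_channels, n_samples). Returns per-channel power in each band."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    feats = [spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)  # one feature vector per recording

class GaussianBayes:
    """Naive Bayes with per-class Gaussian likelihoods: argmax_c P(x|c)P(c)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.stats = {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9,
                          (y == c).mean()) for c in self.classes}
        return self

    def predict(self, X):
        def log_post(c):
            mu, var, prior = self.stats[c]
            ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            return ll.sum(axis=1) + np.log(prior)
        return self.classes[np.argmax([log_post(c) for c in self.classes], axis=0)]
```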

Vision System for NN-based Emotion Recognition (신경회로망 기반 감성 인식 비젼 시스템)

  • Lee, Sang-Yun;Kim, Sung-Nam;Joo, Young-Hoon;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2001.07d / pp.2036-2038 / 2001
  • In this paper, we propose a neural-network-based method for recognizing human emotion with a vision system. In the proposed method, emotion is divided into four categories: surprise, anger, happiness, and sadness. We use both R, G, B (red, green, blue) color image data and gray image data to obtain highly reliable feature point extraction. To this end, we propose an algorithm that extracts four feature regions (eyebrows, eyes, nose, mouth) from a face image acquired by a color CCD camera and derives feature vectors from them. We then apply the backpropagation algorithm to secondary feature vectors (positions of, and distances among, the feature points). Finally, we demonstrate the practical applicability of the proposed method.

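The classification stage, backpropagation over secondary feature vectors (positions and distances among facial landmarks), can be illustrated with a small two-layer network. The layer sizes below are invented for illustration; the abstract does not give the paper's topology.

```python
# Sketch: a small backpropagation network mapping secondary feature vectors
# (distances among facial landmarks) to four emotions. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, D_OUT = 10, 16, 4          # e.g. 10 pairwise distances -> 4 emotions

W1 = rng.normal(0, 0.1, (D_IN, D_HID)); b1 = np.zeros(D_HID)
W2 = rng.normal(0, 0.1, (D_HID, D_OUT)); b2 = np.zeros(D_OUT)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.1):
    """One backprop update for a single (feature vector, one-hot target) pair."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)            # hidden activations
    y = sigmoid(h @ W2 + b2)            # output activations
    dy = (y - target) * y * (1 - y)     # output delta (squared-error loss)
    dh = (dy @ W2.T) * h * (1 - h)      # hidden delta
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh
```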

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions for a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined based on Russell's model of internal emotional states. From this, we find the principal component vectors having the biggest effect on the character's facial features and expressions, and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expressions. The method saves considerable memory space and creates intermediate expressions with little computation, so the performance of a character generation system can be considerably improved for web, mobile, and game services where real-time control is required.
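
The generation scheme, synthesizing an expression from a few principal-component weights and interpolating those weights for intermediate frames, can be sketched as follows. The number of control points, expressions, and retained components are placeholders, not values from the paper.

```python
# Sketch: expression synthesis from simplified principal components.
# Each expression is a flattened vector of the character's control points;
# the expression count and point count below are illustrative assumptions.
import numpy as np

def fit_pca(X, k):
    """X: (n_expressions, n_coords). Returns mean and top-k component vectors."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # (n_coords,), (k, n_coords)

def synthesize(mean, components, weights):
    """Reconstruct a face shape from k weights instead of all coordinates."""
    return mean + weights @ components

def interpolate(w_a, w_b, t):
    """Blend two expressions by interpolating in the low-dim weight space."""
    return (1 - t) * np.asarray(w_a) + t * np.asarray(w_b)

# nine expressions, each 2*40 control-point coordinates (illustrative)
rng = np.random.default_rng(0)
X = rng.random((9, 80))
mean, comps = fit_pca(X, k=3)
w_happy = (X[0] - mean) @ comps.T            # project an expression to weights
w_sad = (X[1] - mean) @ comps.T
frame = synthesize(mean, comps, interpolate(w_happy, w_sad, 0.5))
```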

Speech Emotion Recognition with SVM, KNN and DSVM

  • Hadhami Aouani;Yassine Ben Ayed
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.40-48 / 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. In this work, our system is a two-stage approach comprising feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-frequency Cepstral Coefficients (MFCC) from emotional speech samples, while the second fuses the MFCC features with three further features: Zero Crossing Rate (ZCR), Teager Energy Operator (TEO), and Harmonic to Noise Rate (HNR). Second, we use two classification techniques, Support Vector Machines (SVM) and k-Nearest Neighbor (k-NN), to compare their performance. Beyond that, we investigate recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. The results of our experiments show good accuracy compared with previous studies.
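
A condensed sketch of the two-stage system, fused features into SVM and k-NN classifiers, follows. It assumes librosa and scikit-learn, omits HNR for brevity, and its hyperparameters are illustrative rather than the paper's settings.

```python
# Sketch of the feature-fusion pipeline: 13 MFCCs fused with ZCR and the
# Teager Energy Operator, fed to SVM and k-NN. HNR is omitted for brevity.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def teager_energy(y):
    """Teager Energy Operator: psi[n] = y[n]^2 - y[n-1]*y[n+1]."""
    return y[1:-1] ** 2 - y[:-2] * y[2:]

def features(path):
    """Fused feature vector for one utterance: 13 MFCC means + ZCR + TEO."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    zcr = librosa.feature.zero_crossing_rate(y).mean()
    teo = teager_energy(y).mean()
    return np.concatenate([mfcc, [zcr, teo]])

def compare(train_files, train_labels, test_files, test_labels):
    """Report accuracy of SVM vs. k-NN on the same fused features."""
    Xtr = np.array([features(f) for f in train_files])
    Xte = np.array([features(f) for f in test_files])
    for clf in (SVC(kernel="rbf", C=10), KNeighborsClassifier(n_neighbors=5)):
        acc = (clf.fit(Xtr, train_labels).predict(Xte)
               == np.asarray(test_labels)).mean()
        print(type(clf).__name__, acc)
```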

The Design of Knowledge-Emotional Reaction Model considering Personality (개인성을 고려한 지식-감정 반응 모델의 설계)

  • Shim, Jeong-Yon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.116-122 / 2010
  • As the importance of HCI (Human-Computer Interaction) grows with rapidly developing computer technology, so does the requirement for human-friendly system design. Above all, personality and emotional factors should be considered when implementing more human-friendly systems. Many studies of knowledge, emotion, and personality have been made, but combined methods connecting these three factors have seldom been investigated. It is known that the memorizing process involves not only knowledge but also emotion, and that the emotional state strongly affects reasoning and decision making. Accordingly, a system considering these three factors should be modeled and designed in order to implement a more human-friendly, efficient, and sophisticated intelligent system. In this paper, a knowledge-emotion reaction model is designed. Five types are defined for representing personality, and an emotion reaction mechanism is proposed that calculates an emotion vector based on the thought threads extracted by type-matching selection. The system is applied to a virtual memory, and its emotional reactions are simulated.
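
The abstract leaves the reaction mechanism abstract, but the basic idea of an emotion vector of (label, coefficient) pairs modulated by personality can be sketched as below; the personality scale, decay, and all coefficients are invented for illustration and do not come from the paper.

```python
# Sketch: an emotion vector of (label, coefficient) pairs updated by a
# personality-dependent reaction. All weights here are invented; the paper
# defines its own five personality types and reaction mechanism.
from dataclasses import dataclass, field

EMOTIONS = ["joy", "sadness", "anger", "fear"]

@dataclass
class Agent:
    personality: float = 0.5            # 0 = unreactive .. 1 = highly reactive
    state: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})

    def react(self, stimulus: dict, decay: float = 0.9):
        """Blend incoming emotion coefficients into the current state."""
        for e in EMOTIONS:
            incoming = self.personality * stimulus.get(e, 0.0)
            self.state[e] = min(1.0, decay * self.state[e] + incoming)
        return self.state

agent = Agent(personality=0.8)
print(agent.react({"joy": 0.6, "anger": 0.1}))
```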

A Study on The Improvement of Emotion Recognition by Gender Discrimination (성별 구분을 통한 음성 감성인식 성능 향상에 대한 연구)

  • Cho, Youn-Ho;Park, Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.107-114 / 2008
  • In this paper, we construct a speech emotion recognition system that classifies four emotions (neutral, happiness, sadness, and anger) from speech, based on male/female gender discrimination. The proposed system first distinguishes whether a queried speech sample is male or female; performance can then be improved by using a separately optimized feature vector for each gender in the emotion classification. As the emotion feature, this paper adopts ZCPA (Zero Crossings with Peak Amplitudes), well known in the speech recognition area for its noise robustness, and the features are optimized using the SFS method. For pattern classification of emotion, k-NN and SVM classifiers are compared experimentally. Computer simulation results show the proposed system to be highly efficient, achieving about 85.3% accuracy over the four emotional states. This promises the use of the proposed system in applications such as call centers, humanoid robots, and ubiquitous computing.
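
The routing idea, discriminate gender first and then apply a gender-specific classifier over its own SFS-selected feature subset, can be sketched as follows. ZCPA extraction is abstracted away, the classifiers are assumed to be pre-trained, and nothing here reproduces the paper's actual models.

```python
# Sketch of the two-stage routing: a gender discriminator picks which
# gender-specific emotion classifier (and which SFS-selected feature
# columns) each utterance is sent to. Classifiers are assumed trained.
import numpy as np

class GenderRoutedEmotionClassifier:
    def __init__(self, gender_clf, emotion_clfs, feature_idx):
        self.gender_clf = gender_clf        # predicts "m" or "f" per row
        self.emotion_clfs = emotion_clfs    # {"m": clf, "f": clf}
        self.feature_idx = feature_idx      # {"m": [...], "f": [...]} from SFS

    def predict(self, X):
        """X: (n_utterances, n_features) ZCPA-derived feature matrix."""
        genders = self.gender_clf.predict(X)
        out = np.empty(len(X), dtype=object)
        for g in ("m", "f"):
            mask = genders == g
            if mask.any():
                cols = self.feature_idx[g]
                out[mask] = self.emotion_clfs[g].predict(X[mask][:, cols])
        return out
```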

Emotion Recognition Using Output Data of Image and Speech (영상과 음성의 출력 데이터를 이용한 감성 인식)

  • Joo, Young-Hoon;Oh, Jae-Heung;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.275-280 / 2003
  • In this paper, we propose a method for recognizing human emotion using the output data of image and speech recognizers. The proposed method is based on the recognition rates of the image and speech channels. When only one kind of data, image or speech, is used, wrong recognition easily leads to an incorrect result. To solve this problem, we propose a new method that reduces the effect of wrong recognition by multiplying the emotion scores of the modality with the higher recognition rate by a higher weight value. To test the proposed method, we suggest a simple recognition scheme using image and speech, and we demonstrate its potential through experiment.
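
The fusion rule the abstract describes, weighting each modality's emotion scores by that modality's recognition rate so the more reliable channel dominates, reduces to a few lines; the scores and rates below are made up for illustration.

```python
# Sketch of recognition-rate-weighted fusion of image and speech outputs.
# The emotion set matches the abstract; all numbers are illustrative.
import numpy as np

EMOTIONS = ["surprise", "anger", "happiness", "sadness"]

def fuse(image_scores, speech_scores, image_rate, speech_rate):
    """Weight each modality's scores by its recognition rate, then combine."""
    combined = (image_rate * np.asarray(image_scores)
                + speech_rate * np.asarray(speech_scores))
    return EMOTIONS[int(np.argmax(combined))]

# e.g. vision is the more reliable channel here (rates are made up)
print(fuse([0.2, 0.1, 0.6, 0.1], [0.3, 0.4, 0.2, 0.1],
           image_rate=0.9, speech_rate=0.7))
```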

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.9-16 / 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify the emotional content of speech signals as well as the type of emotion that the speech expresses. In this study, we investigated an emotion recognition system using an Arabic database, specifically in the Saudi dialect, drawn from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as Mel Frequency Cepstral Coefficients (MFCC) and Zero-Crossing Rate (ZCR), and then classified emotions using several algorithms: machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC feature extraction method with the CNN model obtained the best accuracy, 95%, proving the effectiveness of this classification system in recognizing spoken Arabic emotions.
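
The best-performing configuration, MFCC inputs to a CNN, can be sketched with a small network. The layer sizes, the four classes, and the input shape (13 coefficients by 128 frames) are assumptions for illustration, not the architecture reported in the paper.

```python
# Sketch: a small CNN over MFCC "images", matching the abstract's best
# configuration (MFCC + CNN). All sizes below are assumptions.
import torch
import torch.nn as nn

class MfccCnn(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                         # x: (batch, 1, 13, n_frames)
        return self.net(x)

model = MfccCnn()
logits = model(torch.randn(8, 1, 13, 128))        # batch of 8 MFCC matrices
print(logits.shape)                               # torch.Size([8, 4])
```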

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility / v.15 no.4 / pp.503-516 / 2012
  • According to research on robots and software agents to date, computational emotion models are system-dependent, so it is hard to separate an emotion model from its existing system and recycle it in a new one. We therefore introduce an Engine of computational Emotion model (hereafter, EE) that can be integrated with any robot or agent. The EE is an engine, i.e. software whose form is independent of inputs and outputs: it handles only the generation and processing of emotions, without the input (perception) and output (expression) phases. The EE can be interfaced with any inputs and outputs, and it produces emotions from not only the emotion itself but also the personality and current emotions of the person. In addition, the EE can live inside any robot or agent as a software library, or be used as a separate system that communicates with them. In the EE, the emotions are the primary emotions: joy, surprise, disgust, fear, sadness, and anger. An emotion is a vector consisting of a string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to a list of emotional experiences, and these lists, each entry consisting of a string and a coefficient, are used to generate and process emotional states. The emotional experiences form an emotion vocabulary covering the various emotional experiences of humans. The EE can be used to build interactive products that respond appropriately to human emotions; the significance of this study is the development of a system that induces people to feel that a product sympathizes with them. The EE can therefore help provide an efficient emotional-sympathy service in HRI and HCI products.

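The EE's interface idea, an engine decoupled from perception and expression that consumes (string, coefficient) vectors and maintains a state over the six primary emotions, can be sketched as follows; the vocabulary mapping and decay constant are invented placeholders for the paper's emotional-experience lists.

```python
# Sketch of the EE interface: input and output are plain (string,
# coefficient) vectors; only emotion generation lives in the engine.
# The vocabulary entries and decay value are illustrative assumptions.
PRIMARY = ["joy", "surprise", "disgust", "fear", "sadness", "anger"]

# emotional-experience vocabulary -> primary emotion (illustrative entries)
VOCAB = {"praised": "joy", "startled": "surprise", "insulted": "anger"}

class EmotionEngine:
    def __init__(self, decay=0.95):
        self.state = {e: 0.0 for e in PRIMARY}
        self.decay = decay

    def push(self, experience: str, coefficient: float):
        """Input interface: one (string, coefficient) emotion vector."""
        emotion = VOCAB.get(experience)
        if emotion:
            self.state[emotion] = min(1.0, self.state[emotion] + coefficient)

    def tick(self):
        """Output interface: decay the state, then report it."""
        for e in PRIMARY:
            self.state[e] *= self.decay
        return dict(self.state)

ee = EmotionEngine()
ee.push("praised", 0.7)
print(ee.tick())
```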