• Title/Summary/Keyword: emotion technology

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.10
    • /
    • pp.1364-1369
    • /
    • 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare various approaches to feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
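
The survey above compares feature sets rather than proposing a single one. As a minimal sketch of how such a comparison might be run, the code below evaluates MFCC, prosodic, and combined feature vectors with a common classifier; it assumes librosa and scikit-learn, and `load_emotion_corpus()` is a hypothetical loader standing in for whatever labeled corpus is used.

```python
# Sketch: comparing candidate feature vectors for speech emotion recognition.
# Assumes librosa and scikit-learn; load_emotion_corpus() is a hypothetical
# loader returning a list of (waveform, sample_rate, emotion_label) tuples.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_features(y, sr):
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def prosodic_features(y, sr):
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # frame-wise pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]               # frame-wise energy
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

feature_sets = {
    "mfcc": mfcc_features,
    "prosodic": prosodic_features,
    "combined": lambda y, sr: np.concatenate([mfcc_features(y, sr),
                                              prosodic_features(y, sr)]),
}

dataset = load_emotion_corpus()  # hypothetical labeled corpus (a list)
labels = np.array([lab for _, _, lab in dataset])
for name, extract in feature_sets.items():
    X = np.array([extract(y, sr) for y, sr, _ in dataset])
    scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```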

Speech Emotion Recognition on a Simulated Intelligent Robot (모의 지능로봇에서의 음성 감정인식)

  • Jang Kwang-Dong;Kim Nam;Kwon Oh-Wook
    • MALSORI
    • /
    • no.56
    • /
    • pp.173-183
    • /
    • 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We record speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy while human classifiers give 71% accuracy.
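
The feature vector described above is built from utterance-level statistics of phonetic and prosodic measures. Below is a minimal sketch of a few of those statistics in Python (numpy only); pitch periods and cycle amplitudes are assumed to be already extracted, and the jitter and shimmer formulas follow their common cycle-to-cycle perturbation definitions rather than necessarily the paper's exact ones.

```python
# Sketch: utterance-level statistics for a few of the features named above.
# Assumes pitch periods and per-cycle amplitudes are already extracted.
import numpy as np

def log_energy(frames):
    """Log energy per frame; frames is a (n_frames, frame_len) array."""
    return np.log((frames ** 2).sum(axis=1) + 1e-10)

def jitter(periods):
    """Mean absolute difference of consecutive pitch periods, normalized."""
    return np.abs(np.diff(periods)).mean() / periods.mean()

def shimmer(amplitudes):
    """Mean absolute difference of consecutive cycle amplitudes, normalized."""
    return np.abs(np.diff(amplitudes)).mean() / amplitudes.mean()

def teager_energy(x):
    """Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def utterance_vector(frames, periods, amplitudes, signal):
    """Collect means/deviations into one fixed-length feature vector."""
    e = log_energy(frames)
    te = teager_energy(signal)
    return np.array([e.mean(), e.std(),
                     jitter(periods), shimmer(amplitudes),
                     te.mean(), te.std()])
```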

1/f-LIKE FREQUENCY FLUCTUATION IN FRONTAL ALPHA WAVE AS AN INDICATOR OF EMOTION

  • Yoshida, Tomoyuki
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.04a
    • /
    • pp.99-103
    • /
    • 2000
  • There are two approaches to the study of emotion in physiological psychology. The first is to clarify the brain mechanism of emotion; the second is to objectively evaluate emotions using physiological responses together with reported feeling experiences. The method presented here belongs to the second approach. It is based on the "level-crossing point detection" method, which analyzes frequency fluctuations of the EEG and estimates emotionality from the slopes of the log-power spectra of frequency fluctuations in alpha waves over both the left and right frontal lobes. In this paper we introduce a new method for estimating an individual's emotional state using our non-invasive and easy-to-use measurement apparatus.
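
The key quantity in this method is the slope of the log-power spectrum of the alpha-wave frequency fluctuation: a slope near -1 indicates 1/f-like behavior. A minimal sketch of that slope estimate follows, assuming the fluctuation series has already been obtained (e.g., by the level-crossing point detection the abstract describes) and using scipy for the spectrum.

```python
# Sketch: slope of the log-power spectrum of a frequency-fluctuation series.
# A slope near -1 suggests 1/f-like fluctuation. The alpha-wave frequency
# fluctuation series is assumed to be already extracted; it is just an
# input array here.
import numpy as np
from scipy.signal import welch

def fluctuation_slope(fluctuation, fs):
    """Fit log10(power) vs log10(frequency) and return the slope."""
    freqs, psd = welch(fluctuation, fs=fs, nperseg=min(256, len(fluctuation)))
    mask = freqs > 0                      # drop the DC bin before taking logs
    coeffs = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), deg=1)
    return coeffs[0]                      # slope of the fitted line
```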

Music Emotion Classification Based On Three-Level Structure (3 레벨 구조 기반의 음악 무드분류)

  • Kim, Hyoung-Gook;Jeong, Jin-Guk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2E
    • /
    • pp.56-62
    • /
    • 2007
  • This paper presents automatic music emotion classification on acoustic data. A three-level structure is developed. The low level extracts timbre and rhythm features. The middle level estimates indication functions that represent the emotion probability of a single analysis unit. The high level predicts the emotion result based on the indication function values. Experiments are carried out on 695 homogeneous music pieces labeled with four emotions: pleasant, calm, sad, and excited. Three machine learning methods, GMM, MLP, and SVM, are compared at the high level. The best result of 90.16% is obtained by the MLP method.
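
The three-level structure can be read as: per-segment features (low), per-segment emotion probabilities or "indication functions" (middle), and a classifier over their per-piece aggregates (high). The sketch below illustrates that flow with scikit-learn stand-ins (one GMM per emotion for the indication functions, an MLP at the high level); it assumes segment features are precomputed and is not the paper's implementation.

```python
# Sketch: middle level = per-emotion indication values from GMMs,
# high level = classifier over those values. Stand-in models only.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["pleasant", "calm", "sad", "excited"]

def fit_indication_models(segments, seg_labels):
    """One GMM per emotion; segments is (n_segments, n_features),
    seg_labels holds each segment's parent-piece emotion label."""
    return {e: GaussianMixture(n_components=4).fit(segments[seg_labels == e])
            for e in EMOTIONS}

def indication_values(models, piece_segments):
    """Average per-emotion log-likelihood over one piece's segments."""
    return np.array([models[e].score(piece_segments) for e in EMOTIONS])

def train_high_level(models, pieces, piece_labels):
    """pieces is a list of per-piece segment arrays."""
    X = np.array([indication_values(models, p) for p in pieces])
    return MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(
        X, piece_labels)
```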

A study on the arrangement of emotional words for understanding the human's emotion

  • 권규식;이순요;우석찬
    • Proceedings of the ESK Conference
    • /
    • 1993.04a
    • /
    • pp.64-68
    • /
    • 1993
  • The idea of modern product design has shifted from the concept of functional importance, as the basic function, to that of emotional importance, as a supplementary function. In other words, interest in the emotional side of human performance, based on psychological factors, has increased along with the functional side of technical performance, based on the physical factors of the product. This paper arranges standard emotional words for understanding human emotion. The standard emotional words are composed of words expressing human emotion. Adjectives applicable to human emotion are collected from Korean dictionaries and arranged on a semantic differential (SD) scale. Next, the words with high scores under the SD method are analyzed by the factor analysis (FA) method and characterized as emotional words for understanding human emotion. The standard emotional words arranged in this paper are important because they provide basic information for the development of products and technology as well as for the development of emotion measurement techniques.
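
The procedure pairs semantic differential (SD) ratings with factor analysis (FA). As a minimal sketch, the code below standardizes a hypothetical words-by-scales ratings matrix and extracts factor scores and loadings with scikit-learn; the ratings matrix and the number of factors are placeholders, not the paper's data.

```python
# Sketch: factor analysis of semantic differential (SD) ratings.
# ratings is a hypothetical (n_words, n_scales) matrix of mean SD scores,
# e.g., averaged over subjects; loadings group scales into emotion factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def emotional_word_factors(ratings, n_factors=3):
    z = StandardScaler().fit_transform(ratings)   # standardize each SD scale
    fa = FactorAnalysis(n_components=n_factors).fit(z)
    scores = fa.transform(z)          # factor scores for each word
    loadings = fa.components_.T       # (n_scales, n_factors) loadings
    return scores, loadings
```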

Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.163-166
    • /
    • 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We record speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.
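
This entry shares its pipeline with the simulated-robot paper above. Complementing the feature-statistics sketch given there, the following is a minimal sketch of the final stage, a Gaussian (RBF) kernel SVM over the six emotion classes, using scikit-learn as a stand-in; the feature matrix `X` and labels `y` are assumed to be built as in that earlier sketch.

```python
# Sketch: Gaussian (RBF) kernel SVM as the final pattern classifier over
# the six emotion classes. X (utterance feature vectors) and y (labels)
# are assumed built as in the statistics sketch above.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

CLASSES = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```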

A Study on Digital Clothing Design by Characteristics of Ubiquitous Environment (유비쿼터스 환경 특성에 의한 디지털 의류 디자인에 관한 연구)

  • Kim, Ji-Eon
    • Journal of the Korean Society of Costume
    • /
    • v.57 no.3 s.112
    • /
    • pp.23-36
    • /
    • 2007
  • It is important that ubiquitous technology changes the paradigm of thought, not merely its definition, in the 21st-century digital era. The characteristics of ubiquitous computing are that it is pervasive, disappearing, invisible, and calm throughout the environment. As IT develops, designers, computer scientists, chemists, and performance artists cooperate, each contributing the strengths of their field, to find the best way to make desirable digital clothing for the future. Digital clothing is defined as a new generation of clothes equipped with computers and digital devices. Digital clothing design demands electromagnetic-wave shielding, light weight, and esthetic appearance, because high-technology equipment is attached close to the body. The purpose of this study is to analyze the design features of digital clothing according to ubiquitous characteristics. The methods of this study are documentary research of previous studies and case studies. In the theoretical study, the ubiquitous characteristics are function-intensiveness through convergence, interactivity, embedded mobility, and human- and emotion-oriented attributes. Based on these characteristics, digital clothing design is classified into function-intensive design through convergence, design for interactivity, and multi-sensory, emotion-oriented design, because embedded mobility is a basic element of the ubiquitous environment. Early digital clothing design was function-intensive; esthetic appearance and design for interactivity have increased over time. Recent digital clothing design expresses multi-sensory, emotion-oriented design.

The study on emotion recognition by time-dependent parameters of autonomic nervous response (TDP(time-dependent parameters)를 적용하여 분석한 자율신경계 반응에 의한 감성인식에 대한 연구)

  • Kim, Jong-Hwa;Whang, Min-Cheol;Kim, Young-Joo;Woo, Jin-Cheol
    • Science of Emotion and Sensibility
    • /
    • v.11 no.4
    • /
    • pp.637-644
    • /
    • 2008
  • Recognizing human emotion from physiological measurements has been attempted in developing emotion machines able to understand and react to a user's emotion. This study aims to find time-dependent physiological measurements and their variation characteristics for discriminating emotions according to a dimensional emotion model. Ten university students were asked to watch sixteen prepared images that evoke different emotions. Their subjective emotions and autonomic nervous responses, including ECG (electrocardiogram), PPG (photoplethysmogram), GSR (galvanic skin response), RSP (respiration), and SKT (skin temperature), were measured during the experiment. These responses were analyzed into HR (heart rate), respiration rate, GSR amplitude average, SKT amplitude average, PPG amplitude, and PTT (pulse transit time). TDPs (time-dependent parameters), defined in this study as the delay, activation, half recovery, and full recovery of each physiological signal, were determined and statistically compared across different emotions. Significant tendencies in the TDPs were shown between emotions. Therefore, TDPs may provide useful measurements for emotion recognition.
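
The abstract names four TDPs per signal: delay, activation, half recovery, and full recovery. The sketch below extracts such timing parameters from one baseline-corrected, stimulus-locked response; the operational definitions (threshold crossing for onset, time to peak for activation, first return to 50% of peak and to baseline for the recoveries) are illustrative assumptions, not necessarily the paper's.

```python
# Sketch: time-dependent parameters (TDP) of a stimulus-locked response.
# Operational definitions here are assumptions for illustration; the
# response is assumed to cross the threshold and recover within the window.
import numpy as np

def tdp(signal, fs, threshold=0.1):
    """signal: baseline-corrected response sampled at fs Hz, stimulus at t=0."""
    t = np.arange(len(signal)) / fs
    onset_idx = np.argmax(np.abs(signal) > threshold)  # first crossing
    peak_idx = np.argmax(np.abs(signal))               # maximal deflection
    peak = abs(signal[peak_idx])
    after = np.abs(signal[peak_idx:])
    half_idx = peak_idx + np.argmax(after <= peak / 2)      # 50% recovery
    full_idx = peak_idx + np.argmax(after <= threshold)     # back to baseline
    return {"delay": t[onset_idx], "activation": t[peak_idx],
            "half_recovery": t[half_idx], "full_recovery": t[full_idx]}
```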

Emotion Recognition Method for Driver Services

  • Kim, Ho-Duck;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.4
    • /
    • pp.256-261
    • /
    • 2007
  • Electroencephalography (EEG) has been used to record the activity of the human brain in psychology for many years. As technology has developed, the neural basis of the functional areas of emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control human emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition is important because gesture is a useful communication medium between humans and computers. Gesture recognition research typically uses computer vision methods. In existing research, many emotion recognition methods use either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as the specific target. The experimental results show that using both EEG signals and gestures gives higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
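
The method fuses two modalities and then selects features with IFS. Since the reinforcement-learning details of IFS are not given in the abstract, the sketch below uses plain greedy forward selection as a stand-in after concatenating EEG and gesture feature vectors; both the stand-in and the classifier choice are assumptions, not the paper's algorithm.

```python
# Sketch: fusing EEG and gesture features, then selecting a subset.
# Greedy forward selection is a stand-in for the paper's Interactive
# Feature Selection (IFS), whose RL details are not reproduced here.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fuse(eeg_feats, gesture_feats):
    """Concatenate per-sample EEG and gesture feature vectors."""
    return np.hstack([eeg_feats, gesture_feats])

def forward_select(X, y, max_feats=10):
    """Greedily add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_feats:
        scores = {j: cross_val_score(SVC(), X[:, selected + [j]], y,
                                     cv=3).mean() for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```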

Effects of LED on Emotion-Like Feedback of a Single-Eyed Spherical Robot

  • Onchi, Eiji;Cornet, Natanya;Lee, SeungHee
    • Science of Emotion and Sensibility
    • /
    • v.24 no.3
    • /
    • pp.115-124
    • /
    • 2021
  • Non-verbal communication is important in human interaction. It provides a layer of information that complements the message being transmitted. This type of information is not limited to human speakers. In human-robot communication, increasing the animacy of the robotic agent by using non-verbal cues can aid the expression of abstract concepts such as emotions. Considering the physical limitations of artificial agents, robots can use light and movement to express equivalent emotional feedback. This study analyzes the effects of the LED and motion animation of a spherical robot on the emotion expressed by the robot. A within-subjects experiment was conducted at the University of Tsukuba in which participants were asked to rate 28 video samples of a robot interacting with a person. The robot displayed different motions with and without light animations. The results indicated that adding LED animations changes the emotional impression of the robot along the valence, arousal, and dominance dimensions. Furthermore, people associated various situations with the robot's behavior. These stimuli can be used to modulate the intensity of the expressed emotion and enhance the interaction experience. This paper facilitates the design of more affective robots in the future using simple feedback.