• Title/Summary/Keyword: Emotional speech


Visualizing Emotions with an Artificial Emotion Model Based on Psychology -Focused on Characters in Hamlet- (심리학 기반 인공감정모델을 이용한 감정의 시각화 -햄릿의 등장인물을 중심으로-)

  • Ham, Jun-Seok;Ryeo, Ji-Hye;Ko, Il-Ju
    • Science of Emotion and Sensibility
    • /
    • v.11 no.4
    • /
    • pp.541-552
    • /
    • 2008
  • Speech alone cannot convey emotion accurately, because the kind, size, and amount of an emotion are hard to estimate from it. Hamlet, the protagonist of Shakespeare's 'Hamlet', experiences emotions that cannot be expressed through speech alone because he passes through varied dramatic situations. We therefore propose an artificial emotion model that, instead of expressing emotion through speech, expresses and visualizes the current emotional state with color and location, and we used it to visualize the emotions of characters in 'Hamlet'. The artificial emotion model is designed in four steps that reflect the characteristics of emotion. First, it analyzes an incoming emotional stimulus as a cause-and-effect relationship and determines its kind and amount. Second, an Emotion Graph Unit expresses the generation, maintenance, and decay of a single analyzed stimulus from the first step according to its characteristics. Third, using Emotion Graph Units, an Emotion Graph expresses repeated stimuli of the same emotion; one Emotion Graph is kept per emotion, so the generation and decay of each emotion are managed individually. Last, an Emotion Field expresses the current combined value of the Emotion Graphs according to the correlations among emotions, and the current emotion is visualized as a color and a location in the Emotion Field. We applied the artificial emotion model to the play 'Hamlet' to test it, visualizing the emotional changes of Hamlet and his mother, Gertrude.

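Read as a whole, the four-step design above (stimulus analysis, Emotion Graph Unit, Emotion Graph, Emotion Field) could be sketched roughly as follows. All decay rates, field coordinates, and the color mapping here are illustrative assumptions, not the paper's actual parameters.

```python
# Illustrative sketch of a four-step artificial emotion model.
# Decay rate, emotion coordinates, and colors are assumed values.

EMOTION_COORDS = {            # hypothetical positions in the Emotion Field
    "joy":     (0.8, 0.5),    # (valence, arousal)
    "anger":   (-0.7, 0.8),
    "sadness": (-0.6, -0.5),
}
EMOTION_COLORS = {"joy": "yellow", "anger": "red", "sadness": "blue"}

class EmotionGraph:
    """Tracks generation and decay of one emotion over time (steps 2-3)."""
    def __init__(self, decay=0.9):
        self.intensity = 0.0
        self.decay = decay

    def stimulate(self, amount):   # a new stimulus of this emotion arrives
        self.intensity = min(1.0, self.intensity + amount)

    def step(self):                # natural decay at each time step
        self.intensity *= self.decay

class EmotionField:
    """Combines all Emotion Graphs into one point and one color (step 4)."""
    def __init__(self):
        self.graphs = {e: EmotionGraph() for e in EMOTION_COORDS}

    def stimulate(self, emotion, amount):  # step 1: a classified stimulus
        self.graphs[emotion].stimulate(amount)

    def step(self):
        for g in self.graphs.values():
            g.step()

    def current(self):
        total = sum(g.intensity for g in self.graphs.values())
        if total == 0:
            return (0.0, 0.0), "gray"
        x = sum(EMOTION_COORDS[e][0] * g.intensity
                for e, g in self.graphs.items()) / total
        y = sum(EMOTION_COORDS[e][1] * g.intensity
                for e, g in self.graphs.items()) / total
        dominant = max(self.graphs, key=lambda e: self.graphs[e].intensity)
        return (x, y), EMOTION_COLORS[dominant]

field = EmotionField()
field.stimulate("anger", 0.9)    # e.g. a dramatic situation in the play
field.stimulate("sadness", 0.4)
field.step()
pos, color = field.current()
```

Per-emotion graphs with a shared field make the generation and decay of each emotion independently controllable while still yielding a single visualized point, which matches the structure the abstract describes.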

Comparison of feature parameters for emotion recognition using speech signal (음성 신호를 사용한 감정인식의 특징 파라메터 비교)

  • 김원구
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.371-377
    • /
    • 2003
  • This paper compares feature parameters for emotion recognition using the speech signal. A corpus of emotional speech, recorded and classified by emotion through subjective evaluation, was used to construct statistical feature vectors such as the average, standard deviation, and maximum of pitch and energy, together with phonetic features such as MFCC parameters. To evaluate the feature parameters, a speaker- and context-independent emotion recognition system was built for the experiments. Pitch and energy parameters and their derivatives served as prosodic information, and MFCC parameters and their derivatives served as phonetic information. Experiments with a vector-quantization-based emotion recognition system showed that the system using MFCC parameters and their derivatives outperformed the one using pitch and energy parameters.
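The utterance-level statistical feature vector described above (mean, standard deviation, and maximum of pitch and energy) could be computed along these lines; the frame values below are hypothetical, and excluding zero-pitch frames as unvoiced is a common convention rather than something the abstract specifies.

```python
from statistics import mean, stdev

def prosodic_features(pitch, energy):
    """Utterance-level statistics of frame-wise pitch (Hz) and energy:
    mean, standard deviation, and maximum of each, giving a 6-D vector.
    Zero-pitch (unvoiced) frames are excluded from the pitch statistics."""
    voiced = [p for p in pitch if p > 0]
    return [
        mean(voiced), stdev(voiced), max(voiced),
        mean(energy), stdev(energy), max(energy),
    ]

# Hypothetical frame values for one short utterance
pitch  = [0, 210, 220, 230, 0, 205, 215]      # 0 marks an unvoiced frame
energy = [0.1, 0.6, 0.8, 0.9, 0.2, 0.7, 0.75]
feats = prosodic_features(pitch, energy)
```

In practice the paper also uses the derivatives of these contours and MFCCs per frame; the sketch only covers the statistical prosodic part.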

On the Importance of Tonal Features for Speech Emotion Recognition (음성 감정인식에서의 톤 정보의 중요성 연구)

  • Lee, Jung-In;Kang, Hong-Goo
    • Journal of Broadcast Engineering
    • /
    • v.18 no.5
    • /
    • pp.713-721
    • /
    • 2013
  • This paper examines the efficiency of chroma-based tonal features for speech emotion recognition. Just as the tonality of major or minor keys affects the perception of musical mood, the tonality of speech affects the perception of the emotional state of spoken utterances. To justify this assertion, subjective hearing tests were carried out using signals synthesized from chroma features; they show that tonality contributes especially to the perception of negative emotions such as anger and sadness. In automatic emotion recognition tests, the modified chroma-based tonal features produce a noticeable improvement in accuracy when they supplement conventional log-frequency power coefficient (LFPC)-based spectral features.
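The core of any chroma feature is folding spectral energy into 12 pitch classes, discarding octave information. A minimal sketch of that folding follows; the 440 Hz reference and the toy spectrum are assumptions, and this is the generic chroma computation, not the paper's modified variant.

```python
import math

def chroma_bin(freq_hz, ref=440.0):
    """Map a frequency to one of 12 pitch classes (bin 0 = the reference
    tone, here A). Chroma keeps the tone class and discards the octave."""
    semitones = 12 * math.log2(freq_hz / ref)
    return round(semitones) % 12

def chroma_vector(spectrum):
    """Fold (freq_hz, magnitude) pairs into a normalized 12-bin vector."""
    bins = [0.0] * 12
    for f, mag in spectrum:
        bins[chroma_bin(f)] += mag
    total = sum(bins) or 1.0
    return [b / total for b in bins]

# A4 (440 Hz) and its octave A5 (880 Hz) fall into the same chroma bin
vec = chroma_vector([(440.0, 1.0), (880.0, 1.0), (523.25, 0.5)])  # 523.25 ≈ C5
```

Because octave-separated harmonics collapse into one bin, the vector captures the tonal (key-like) quality of a frame, which is what the listening tests above probe.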

Construction of Customer Appeal Classification Model Based on Speech Recognition

  • Sheng Cao;Yaling Zhang;Shengping Yan;Xiaoxuan Qi;Yuling Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.258-266
    • /
    • 2023
  • To address poor customer satisfaction and poor accuracy of customer classification, this paper proposes a customer classification model based on speech recognition. First, the paper analyzes the temporal characteristics of customer demand data, identifies the factors influencing customer demand behavior, and defines the feature-extraction process for customer voice signals. Then, emotional association rules for customer demands are designed, and a classification model of customer demands is constructed through cluster analysis. Next, the Euclidean distance method is used to preprocess customer behavior data, and the fuzzy clustering characteristics of customer demands are obtained with the fuzzy clustering method. Finally, a customer demand classification model based on speech recognition is completed on the basis of the naive Bayes algorithm. Experimental results show that the proposed method raises the accuracy of customer demand classification to more than 80% and customer satisfaction to more than 90%, solving the problems of poor customer satisfaction and low classification accuracy in existing methods and demonstrating practical application value.
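The final stage above is a naive Bayes classifier over extracted voice features. A minimal Gaussian naive Bayes sketch is shown below; the two-dimensional features, class labels, and training values are all hypothetical, and the paper's actual feature set and preprocessing (Euclidean distance, fuzzy clustering) are not reproduced here.

```python
import math
from collections import defaultdict
from statistics import mean, stdev

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class priors plus per-feature
    normal densities, with class-conditional feature independence."""
    def fit(self, X, y):
        groups = defaultdict(list)
        for x, label in zip(X, y):
            groups[label].append(x)
        n = len(y)
        self.stats = {}
        for label, rows in groups.items():
            cols = list(zip(*rows))
            self.stats[label] = (
                math.log(len(rows) / n),                   # log prior
                [(mean(c), stdev(c) or 1e-6) for c in cols],  # per-feature mu, sd
            )
        return self

    def predict(self, x):
        def log_pdf(v, mu, sd):
            return (-0.5 * math.log(2 * math.pi * sd * sd)
                    - (v - mu) ** 2 / (2 * sd * sd))
        def score(label):
            prior, params = self.stats[label]
            return prior + sum(log_pdf(v, mu, sd)
                               for v, (mu, sd) in zip(x, params))
        return max(self.stats, key=score)

# Hypothetical per-call features (e.g. pitch mean in Hz, speaking rate)
X = [[180, 4.1], [175, 4.0], [240, 6.5], [250, 6.8]]
y = ["routine", "routine", "urgent", "urgent"]
model = GaussianNB().fit(X, y)
label = model.predict([245, 6.6])
```

The independence assumption keeps training to a handful of per-class statistics, which suits the small, rule-filtered feature sets the pipeline above produces.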

What you said vs. how you said it. ('어떻게 말하느냐?' vs. '무엇을 말하느냐?')

  • Choi, Moon-Gee;Nam, Ki-Chun
    • Proceedings of the KSPS conference
    • /
    • 2006.11a
    • /
    • pp.11-13
    • /
    • 2006
  • The present paper focuses on the interaction between lexical-semantic information and affective prosody; specifically, we explore whether affective prosody influences the evaluation of the affective meaning of a word. To this end, we asked participants to listen to words recorded with affective prosody and to evaluate their emotional content. The results showed that, first, emotional evaluation was slower when the word meaning was negative than when it was positive. Second, evaluation was faster when the prosody was negative than when it was neutral or positive. Finally, response times were faster when the affective meaning of the word and the prosody were congruent than when they were incongruent.


Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.1076-1094
    • /
    • 2022
  • Technology for emotion recognition is an essential part of human personality analysis. Existing approaches to defining personality characteristics relied on surveys, yet communication often cannot take place without considering emotions, so emotion recognition technology is essential for communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically through facial, speech, and biometric responses, so emotions can be recognized from images, voice signals, or physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence scheme was subdivided into four levels to classify emotions in more detail; based on the current techniques classified as High/Low, the model was further subdivided into multiple levels. Then, signal characteristics were extracted with a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are typically used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
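The building block of the 1-D CNN above is a one-dimensional convolution over the sensor sequence followed by a nonlinearity. A bare-bones sketch of that single layer is below; the kernel and signal values are made up, and a real network would stack many learned filters, pooling, and a classifier head.

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation, as CNN layers
    compute it): slide the kernel along the sequence and take dot
    products, producing a shorter feature map."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def relu(xs):
    """Rectified linear unit, applied element-wise."""
    return [max(0.0, x) for x in xs]

# A difference kernel responds to rapid changes in a physiological signal,
# e.g. the onset of a skin-conductance response (values hypothetical)
signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
feature_map = relu(conv1d(signal, kernel=[-1.0, 1.0]))
```

The same sliding-window computation that detects edges in 2-D images here detects temporal events in 1-D sensor data, which is why the 2-D CNN idea transfers directly.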

Change in acoustic characteristics of voice quality and speech fluency with aging (노화에 따른 음질과 구어 유창성의 음향학적 특성 변화)

  • Hee-June Park;Jin Park
    • Phonetics and Speech Sciences
    • /
    • v.15 no.4
    • /
    • pp.45-51
    • /
    • 2023
  • Voice issues such as voice weakness that arise with age can have social and emotional impacts, potentially leading to feelings of isolation and depression. This study aimed to investigate the changes in acoustic characteristics resulting from aging, focusing on voice quality and spoken fluency. To this end, tasks involving sustained vowel phonation and paragraph reading were recorded for 20 elderly and 20 young participants. Voice-quality-related variables, including F0, jitter, shimmer, and Cepstral Peak Prominence (CPP) values, were analyzed along with speech-fluency-related variables, such as average syllable duration (ASD), articulation rate (AR), and speech rate (SR). The results showed that in voice quality-related measurements, F0 was higher for the elderly and voice quality was diminished, as indicated by increased jitter, shimmer, and lower CPP levels. Speech fluency analysis also demonstrated that the elderly spoke more slowly, as indicated by all ASD, AR, and SR measurements. Correlation analysis between voice quality and speech fluency showed a significant relationship between shimmer and CPP values and between ASD and SR values. This suggests that changes in spoken fluency can be identified early by measuring the variations in voice quality. This study further highlights the reciprocal relationship between voice quality and spoken fluency, emphasizing that deterioration in one can affect the other.
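Jitter and shimmer, two of the voice-quality measures above, quantify cycle-to-cycle variation in glottal period and amplitude. The sketch below uses the common "local" definition (mean absolute consecutive difference divided by the mean); the cycle measurements are hypothetical, and this is the textbook formula rather than the exact extraction settings the study used.

```python
def local_perturbation(values):
    """Mean absolute cycle-to-cycle difference divided by the overall
    mean: the common 'local' definition of jitter (applied to periods)
    and shimmer (applied to peak amplitudes). Often reported as a %."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Hypothetical consecutive glottal cycle measurements
periods = [0.0100, 0.0102, 0.0099, 0.0101]   # seconds (~100 Hz voice)
amplitudes = [0.80, 0.78, 0.81, 0.79]
jitter = local_perturbation(periods)
shimmer = local_perturbation(amplitudes)
```

Higher values indicate a less stable vocal-fold cycle, which is the direction of change the study reports for the elderly group.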

School Adaptation Program for School-Age Children with Emotional and Behavioral Problems (정서행동문제를 가진 학령기 아동을 위한 학교적응 프로그램 개발 및 평가)

  • Cho, Haeryun;Kim, Shin-Jeong;Kwon, Myung Soon;Oh, Jina;Han, Woojae
    • Child Health Nursing Research
    • /
    • v.21 no.2
    • /
    • pp.141-150
    • /
    • 2015
  • Purpose: The purpose of this study was to develop and evaluate a school adaptation program (SAP) for school-age children with emotional and behavioral problems attending public elementary schools. Methods: SAP, developed by the authors, addresses school adaptation and academic efficacy and consists of 10 sessions based on five categories (i.e., school life, classroom activity, relationship with friends, relationship with teacher, and school environment). Sixteen children with emotional and behavioral problems answered questionnaires before and after participation in the program. Results: There was a significant difference between pre- and post-test scores on school adaptation (t=-2.78, p=.015) and academic efficacy (t=-4.62, p<.001) after the 10 sessions of SAP. Conclusion: The results indicate that SAP could serve as a practical program for school nurses and teachers. Further studies based on SAP in various school settings are recommended.

Emotional Speech Synthesis using the Emotion Editor Program (감정 편집기를 이용한 감정 음성 합성)

  • Chun Heejin;Lee Yanghee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.79-82
    • /
    • 2000
  • To synthesize emotionally expressive speech, this study analyzed how the pitch and duration of emotional speech data vary by syllable type and by syllable position within a word (eojeol), and how the spectral envelope is affected by changes in emotion. The results showed that the variations of pitch and duration by syllable type and position, as well as the spectral envelope, all influence emotional change. In addition, to synthesize and evaluate emotional speech by applying these acoustic analysis results, an Emotion Editor was implemented that generates emotional speech by adjusting the phonetic and prosodic parameters of neutral speech (pitch, energy, duration, and spectral envelope).

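An editor of the kind described above works by rescaling the prosodic parameters of neutral speech toward a target emotion. The sketch below shows that parameter-scaling step only; the per-emotion scale factors and the syllable values are illustrative assumptions, not the paper's measured results, and spectral-envelope modification is omitted.

```python
# Illustrative per-emotion prosody scale factors (assumed, not measured)
EMOTION_SCALES = {
    #           pitch, energy, duration
    "neutral": (1.00, 1.00, 1.00),
    "anger":   (1.25, 1.40, 0.90),   # higher, louder, faster (assumed)
    "sadness": (0.90, 0.80, 1.20),   # lower, softer, slower (assumed)
}

def apply_emotion(syllables, emotion):
    """Scale per-syllable prosodic parameters of neutral speech toward a
    target emotion. Each syllable is a dict with 'pitch' (Hz), 'energy'
    (relative), and 'duration' (seconds)."""
    p, e, d = EMOTION_SCALES[emotion]
    return [
        {"pitch": s["pitch"] * p,
         "energy": s["energy"] * e,
         "duration": s["duration"] * d}
        for s in syllables
    ]

neutral = [{"pitch": 120.0, "energy": 0.7, "duration": 0.18}]
angry = apply_emotion(neutral, "anger")
```

The abstract's finding that scaling depends on syllable type and position suggests the real editor varies these factors per syllable rather than applying one global set as this sketch does.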

A study about the aspect of translation on 'Hu(怖)' in novel 『Kokoro』 - Focusing on novels translated in Korean and English - (소설 『こころ』에 나타난 감정표현 '포(怖)'에 관한 번역 양상 - 한국어 번역 작품과 영어 번역 작품을 중심으로 -)

  • Yang, Jung-soon
    • Cross-Cultural Studies
    • /
    • v.53
    • /
    • pp.131-161
    • /
    • 2018
  • Emotional expressions reveal the internal state of mind or consciousness. They include vocabulary that describes emotion, sentence constructions that express emotion such as exclamations and rhetorical questions, interjections, appellations, causatives, passives, adverbs of attitude, and style of writing. This study focuses on vocabulary that describes emotion and analyzes how expressions of 'Hu(怖)' in "Kokoro" are translated. The translations were analyzed in three respects: part of speech, handling of subjects, and classification of meanings. The analysis showed that expressions of 'Hu(怖)' were sometimes translated with the vocabulary suggested in the dictionary, but not always. Japanese vocabulary describing the emotion of 'Hu(怖)' was mostly translated into the corresponding part of speech in Korean; some adverbs required the addition of verbs in translation, and different vocabulary was sometimes added or substituted to intensify the emotion. The part-of-speech correspondence in English differed from that in Korean: Japanese sentences expressing 'Hu(怖)' with verbs were often translated with participles of passive verbs such as 'fear', 'dread', 'worry', and 'terrify', and idioms were translated with attention to the function of the sentence rather than its form. Expressions conveyed by adverbs often did not accompany verbs of 'Hu(怖)' but were instead rendered with participles of passive verbs and adjectives such as 'dread', 'worry', and 'terrify'. The main agents of emotion appeared in the first person and the third person in simple sentences.
When the main agent of emotion was the first person, the fundamental Japanese word order was preserved in Korean translation, though adverbs of time and degree tended to be added; in English, the first-person agent was placed in subject position, although things or causes of events sometimes occupied the subject position instead to show the degree of 'Hu(怖)' the agent experienced. When the main agent of emotion was the third person, expressions of conjecture and supposition, or of a visual or auditory basis, were added in translation. In simple sentences without an explicit agent of emotion, the subject could be omitted in Korean, even though it is an essential component, because it was recoverable from context; in English these omitted subjects were identified and translated, and they were not necessarily the human agents of the emotion but could be things or causes of events that specified the expression of emotion.