Design of Model to Recognize Emotional States in a Speech

  • Kim Yi-Gon (Division of Electronic Communication and Electrical Engineering, Chonnam National University) ;
  • Bae Young-Chul (Division of Electronic Communication and Electrical Engineering, Chonnam National University)
  • Published : 2006.03.01

Abstract

Verbal communication is the most commonly used means of communication. A spoken word carries a great deal of information about speakers and their emotional states. In this paper we designed a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyse the emotional state of a speaker from speech. To analyse the signal output we used three characteristics of sound as vector inputs: the frequency, intensity, and period of tones. We also made use of eight basic emotional parameters: surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, which were portrayed by five selected students. In order to facilitate the differentiation of the spectral features, we used wavelet transform analysis. We applied ANFIS (Adaptive Neuro-Fuzzy Inference System) to design an emotion recognition model from speech. In our findings, the inference error was about 10%. The results of our experiment reveal that the applied model is about 85% effective and reliable.
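The pipeline the abstract describes runs in two stages: first decompose each speech frame with a wavelet transform to separate its spectral bands, then feed the resulting feature vector (alongside frequency, intensity, and period measures) to an ANFIS classifier. The wavelet step can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' implementation: the paper does not name the wavelet family or decomposition depth, so a single-level Haar transform repeated three times is assumed here, and `haar_dwt`, `band_energies`, and the toy signal are hypothetical names for illustration.

```python
import math

# Illustrative sketch of wavelet-based feature extraction for a speech
# frame. Haar wavelet and 3 decomposition levels are assumptions; the
# paper only states that wavelet analysis was used to separate
# spectral features before the ANFIS stage.

def haar_dwt(signal):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    if len(signal) % 2:
        signal = list(signal) + [signal[-1]]  # pad to even length
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def band_energies(signal, levels=3):
    """Decompose `levels` times and return the energy in each detail
    band plus the final approximation: a simple feature vector that
    could serve as input to a classifier such as ANFIS."""
    energies = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in current))
    return energies

if __name__ == "__main__":
    # Toy "speech frame": a low- and a high-frequency component mixed.
    frame = [math.sin(2 * math.pi * 5 * t / 64)
             + 0.5 * math.sin(2 * math.pi * 20 * t / 64)
             for t in range(64)]
    print(band_energies(frame))
```

Because each level halves the signal length, the feature vector stays short regardless of frame size, which keeps the input dimension of the downstream fuzzy inference system manageable.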
