Analysis of Voice Quality Features and Their Contribution to Emotion Recognition

  • Received : 2013.06.25
  • Accepted : 2013.08.08
  • Published : 2013.09.30

Abstract

This study investigates the relationship between voice quality measurements and emotional states, in addition to conventional prosodic and cepstral features. Open quotient, harmonics-to-noise ratio, spectral tilt, spectral sharpness, and band energy were analyzed as voice quality features, and prosodic features based on fundamental frequency and energy were also examined. ANOVA tests were used to assess the significance of each feature, and Sequential Forward Selection was applied to verify recognition performance. Classification experiments show that the proposed features increase overall accuracy; in particular, confusion between the happy and angry classes decreases. The results also show that adding voice quality features to conventional cepstral features further improves performance.
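The feature-selection step described above can be sketched as follows. This is a minimal, generic Sequential Forward Selection loop, not the paper's actual experimental code: the feature matrix, labels, and scoring function are hypothetical stand-ins (in the study, the score would come from an emotion classifier's accuracy on voice quality, prosodic, and cepstral features).

```python
import numpy as np

def sequential_forward_selection(X, y, score_fn, n_select):
    """Greedily add, one at a time, the feature that most improves score_fn.

    X        : (n_samples, n_features) feature matrix
    y        : (n_samples,) labels (e.g. emotion classes)
    score_fn : callable(X_subset, y) -> score to maximize
               (e.g. cross-validated classification accuracy)
    n_select : number of features to keep
    """
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        best_score, best_f = -np.inf, None
        for f in remaining:
            # Score the current subset plus candidate feature f.
            score = score_fn(X[:, selected + [f]], y)
            if score > best_score:
                best_score, best_f = score, f
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

With a classifier-accuracy score function, the order in which features enter `selected` indicates their incremental contribution, which is how SFS complements the per-feature ANOVA significance tests.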
