
A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo (Chungbuk National University, School of Electrical and Computer Engineering, Research Institute for Computer and Information Communication) ;
  • Kim, Yong-Tae (Hankyong National University, Department of Information and Control Engineering) ;
  • Chun, Myung-Geun (Chungbuk National University, School of Electrical and Computer Engineering, Research Institute for Computer and Information Communication)
  • Published: 2005.03.01

Abstract

In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, and the feature vectors are obtained through ICA (Independent Component Analysis). For the speech signal, on the other hand, the recognition algorithm is applied independently to each wavelet subband, and the final recognition result is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
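The pipeline described in the abstract, wavelet subband decomposition followed by per-subband classification and decision-level fusion, can be sketched roughly as follows. This is a minimal illustration and not the authors' implementation: the one-level Haar transform, the toy 4×4 input, and the majority-vote fusion rule are all assumptions made for the sketch.

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2-D Haar wavelet decomposition.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands.
    A real system would cascade this for multi-resolution analysis
    and feed the subbands to an ICA-based feature extractor."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-pair averages
    d = (img[0::2] - img[1::2]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def fuse_decisions(subband_votes):
    """Decision-level fusion: majority vote over the emotion labels
    produced independently for each subband (or each modality)."""
    labels, counts = np.unique(subband_votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy 4x4 "image" just to show the subband shapes.
img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2): each subband is half the size per axis

# Hypothetical per-subband classifier outputs, fused by majority vote.
print(fuse_decisions(["anger", "anger", "fear"]))  # anger
```

The key design point mirrored here is that each subband gets its own recognition decision, and only the decisions (not the raw features) are merged at the end, which is what makes it easy to also fold the facial-image decision into the same vote.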

References

  1. J. Lien, T. Kanade, C. Li, 'Detection, tracking, and classification of action units in facial expression', Journal of Robotics and Autonomous Systems, Vol. 31, No. 3, pp. 131-146, 2000 https://doi.org/10.1016/S0921-8890(99)00103-7
  2. M. Turk, A. Pentland, 'Eigenfaces for recognition', Journal of Cognitive Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991 https://doi.org/10.1162/jocn.1991.3.1.71
  3. P. Penev, J. Atick, 'Local feature analysis: a general statistical theory for object representation', Network: Computation in Neural Systems, Vol. 7, pp. 477-500, 1996 https://doi.org/10.1088/0954-898X/7/3/002
  4. P. Belhumeur, J. Hespanha, D. Kriegman, 'Eigenfaces vs. fisherfaces: recognition using class specific linear projection', IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 711-720, 1997 https://doi.org/10.1109/34.598228
  5. G. Brown, Y. Satoshi, H. Luebben, T. J. Sejnowski, 'Separation of optically recorded action potential trains in Tritonia by ICA', Proc. 5th Annual Joint Symposium on Neural Computation, 1998
  6. M. S. Bartlett, J. R. Movellan, 'Face Recognition by Independent Component Analysis', IEEE Transactions on Neural Networks, Vol. 13, No. 6, pp. 1450-1464, 2002
  7. V. Kostov and S. Fukuda, 'Emotion in User Interface, Voice Interaction System', IEEE Int'l Conf. on Systems, Man, and Cybernetics, Vol. 2, pp. 798-803, 2000 https://doi.org/10.1109/ICSMC.2000.885947
  8. T. Moriyama and S. Ozawa, 'Emotion Recognition and Synthesis System on Speech', IEEE Int'l Conference on Multimedia Computing and Systems, pp. 840-844, 1999 https://doi.org/10.1109/MMCS.1999.779310
  9. L. C. De Silva and P. C. Ng, 'Bimodal Emotion Recognition', Proceedings of the 4th International Conference on Automatic Face and Gesture Recognition, pp. 332-335, 2000 https://doi.org/10.1109/AFGR.2000.840655
  10. P. Ekman and W. V. Friesen, Emotion in the Human Face, Cambridge University Press, second edition, 1982
  11. C. S. Burrus, R. A. Gopinath, H. Guo, Introduction to Wavelets and Wavelet Transforms: A Primer, Prentice-Hall International, 1998
  12. G. Donato, M. S. Bartlett, J. C. Hager, 'Classifying Facial Actions', IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, 1999
  13. J.-S. R. Jang, C.-T. Sun, Neuro-Fuzzy and Soft Computing, Prentice-Hall International, 1997

Cited by

  1. Multimodal Biometric Recognition System using Real Fuzzy Vault vol.23, pp.4, 2013, https://doi.org/10.5391/JKIIS.2013.23.4.310