Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services

  • Lee, Kichun (Department of Industrial Engineering, Hanyang University) ;
  • Choi, So Yun (Graduate School of Business IT, Kookmin University) ;
  • Kim, Jae Kyeong (Graduate School of Business Administration, Kyung Hee University) ;
  • Ahn, Hyunchul (School of MIS, Kookmin University)
  • Received : 2014.01.29
  • Accepted : 2014.02.18
  • Published : 2014.03.28

Abstract

Both researchers and practitioners are showing increased interest in interactive exhibition services. These services are designed to respond directly to visitor reactions in real time, so as to fully engage visitors' interest and enhance their satisfaction. To implement an effective interactive exhibition service, it is essential to adopt intelligent technologies that can accurately estimate a visitor's emotional state from his or her responses to exhibited stimuli. Many studies have attempted to estimate the human emotional state, most of them by gauging either facial expressions or audio responses. Recent research suggests, however, that a multimodal approach that considers multiple kinds of responses simultaneously may yield better estimates. Given this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements, measured by the Microsoft Kinect sensor. To handle the large amount of sensory data effectively, we adopt multiple regression analysis (MRA) based on stratified sampling, a Monte Carlo method, as our estimation technique. To validate the usefulness of the proposed model, we collected 602,599 observations comprising 274 independent and dependent variables from 15 subjects. When we applied the model to this data set, it estimated the levels of valence and arousal within a 10~15% error range. Since the proposed model is simple and stable, we expect it to be applied not only in intelligent exhibition services, but also in other areas such as e-learning and personalized advertising.

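The abstract's estimation procedure can be illustrated in a short sketch: stratify the observations on a covariate, draw a proportional subsample from each stratum, fit an ordinary multiple regression on the subsample, and evaluate the fit on the full data. This is an illustrative reconstruction on synthetic data, not the paper's implementation; the feature matrix, stratification variable, and sampling fraction are all assumptions (the actual study used 602,599 Kinect observations with 274 variables).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the Kinect feature matrix (assumed sizes;
# the paper's data set is far larger: 602,599 rows, 274 variables).
n_obs, n_feat = 60_000, 10
X = rng.normal(size=(n_obs, n_feat))
true_w = rng.normal(size=n_feat)
valence = X @ true_w + rng.normal(scale=0.5, size=n_obs)  # noisy target

# Stratify on quartiles of one covariate, then draw a proportional
# sample from each stratum so the subsample preserves the covariate
# distribution of the full data (stratified sampling).
strata = np.digitize(X[:, 0], np.quantile(X[:, 0], [0.25, 0.5, 0.75]))
sample_frac = 0.05
idx = np.concatenate([
    rng.choice(np.flatnonzero(strata == s),
               size=max(1, int(sample_frac * np.sum(strata == s))),
               replace=False)
    for s in np.unique(strata)
])

# Multiple regression (with intercept) fitted on the stratified subsample.
Xs = np.column_stack([np.ones(len(idx)), X[idx]])
coef, *_ = np.linalg.lstsq(Xs, valence[idx], rcond=None)

# Evaluate the subsample-fitted model against the full data set.
pred = np.column_stack([np.ones(n_obs), X]) @ coef
mae = np.mean(np.abs(pred - valence))
print(f"fitted on {len(idx)} of {n_obs} rows, MAE = {mae:.3f}")
```

The design point being illustrated is that fitting on a small stratified subsample (here 5% of rows) can recover regression coefficients that generalize to the full data, which is what makes the approach tractable for large sensory streams.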
