A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers

  • Kim, Seunghwan (Dept. of Design, Graduate School, Pusan National University)
  • Kim, Cheolki (Dept. of Design, College of Art, Pusan National University)
  • Received : 2017.09.28
  • Accepted : 2018.01.26
  • Published : 2018.03.31

Abstract

Today's video content is expected to interact with its viewers in order to provide a more personalized experience than before. To offer such a friendly experience from the video content system's perspective, understanding and analyzing the viewer's situation must be considered first. To this end, it is effective to analyze the viewer's situation by inferring the viewer's state from his or her behaviors while watching the video content, and by classifying those behaviors into the emotions and states the viewer exhibits during flow. The term 'Flow-emotion-state' presented in this study denotes the state of a viewer, inferred from the emotions that subsequently arise toward the target video content, in a situation where the viewer is already engaged in viewing behavior. A viewer's Flow-emotion-state can be utilized to identify the characteristics of the viewer's Flow-situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modalities to the video content.
