Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features

  • Yun, Sang-Seok (Center for Intelligent Robotics, Korea Institute of Science and Technology) ;
  • Kim, Munsang (Center for Intelligent Robotics, Korea Institute of Science and Technology) ;
  • Choi, Mun-Taek (Mechanical Engineering, Sungkyunkwan University) ;
  • Song, Jae-Bok (Mechanical Engineering, Korea University)
  • Received : 2013.03.08
  • Accepted : 2013.06.19
  • Published : 2013.08.01

Abstract

According to cognitive science research, the interaction intent of humans can be estimated by analyzing the behaviors that express it. Building on this finding, this paper proposes a novel methodology for reliable analysis of human intention. To identify intent, eight behavioral features are extracted from four characteristics of human-human interaction, and a set of core components of human nonverbal behavior is outlined. These nonverbal behaviors are captured by multimodal recognition modules, one per modality: localizing the speaker's sound source in the auditory part; recognizing the frontal face and facial expression in the vision part; and estimating human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is applied to improve recognition performance, and an integrated human model quantitatively classifies intent from the multi-dimensional cues by applying weight factors. Interactive robots can thus make informed engagement decisions to interact effectively with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
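The engagement decision described above reduces to scoring weighted, temporally smoothed per-modality cues. The following Python sketch illustrates that idea; the feature names, weight values, and the moving-average form of the temporal smoothing are illustrative assumptions for exposition, not the authors' actual model.

```python
# Illustrative weight factors over the nonverbal cues named in the abstract.
# Values are hypothetical; the paper derives its own weights from an
# integrated human model.
WEIGHTS = {
    "sound_source": 0.20,   # audition: speaker localized toward the robot
    "frontal_face": 0.25,   # vision: frontal face detected
    "expression":   0.10,   # vision: facial expression score
    "trajectory":   0.20,   # spatial: person approaching the robot
    "body_lean":    0.10,   # spatial: body pose/leaning toward the robot
    "hand_gesture": 0.15,   # spatial: beckoning or waving gesture
}

def smooth(history, window=5):
    """Temporal confidence reasoning, approximated here as a moving
    average over the last `window` frames to suppress single-frame noise."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def intent_score(cue_history):
    """Combine per-modality confidence histories (values in [0, 1])
    into a single interaction-intent score in [0, 1]."""
    return sum(w * smooth(cue_history[name]) for name, w in WEIGHTS.items())

# Example: a person facing the robot and waving yields a rising score,
# so the robot could choose to engage once it exceeds a threshold (e.g. 0.6).
cues = {
    "sound_source": [0.1, 0.2, 0.7, 0.8, 0.9],
    "frontal_face": [0.9, 0.9, 1.0, 1.0, 1.0],
    "expression":   [0.5, 0.5, 0.6, 0.6, 0.6],
    "trajectory":   [0.3, 0.5, 0.7, 0.8, 0.8],
    "body_lean":    [0.4, 0.4, 0.5, 0.6, 0.6],
    "hand_gesture": [0.0, 0.0, 0.8, 0.9, 0.9],
}
print(f"intent score: {intent_score(cues):.2f}")  # ~0.66, above threshold
```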

References

  1. R. B. Murray, M. M. Huelskoetter, and D. O'Driscoll, Nursing Process in Later Maturity, Prentice Hall, 1980.
  2. S.-J. Blakemore and J. Decety, "From the perception of action to the understanding of intention," Nature Rev., Neurosci., vol. 2, pp. 561-567, 2001. https://doi.org/10.1038/35080587
  3. S. Weis, "Theory and measurement of social intelligence as a cognitive performance construct," Ph.D. Dissertation, Otto-von-Guericke Univ., 2008.
  4. A. Mehrabian, Nonverbal Communication. New York: Aldine-Atherton, 1972.
  5. M. Argyle, "Non-verbal communication in human social interaction," In R. A. Hinde (Ed.), Non-verbal communication. London: University Press, 1972.
  6. C. Breazeal, J. Gray, and M. Berlin, "An embodied cognition approach to mindreading skills for socially intelligent robots," Int. J. Robot. Res., vol. 28, no. 5, pp. 656-680, 2009. https://doi.org/10.1177/0278364909102796
  7. A. L. Thomaz and M. Cakmak, "Social learning mechanisms for robots," Proc of Int. Symp. Robot. Res.(ISRR), pp. 1-14, 2009.
  8. G. Castellano, I. Leite, A. Paiva, and P.W. McOwan, "Affective teaching: learning more effectively from empathic robots," Awareness Magazine: Self-Awareness in Auto. Syst., Inter. Robot., doi:10.2417/3201112.003948, Jan. 2012.
  9. Y.-M. Kim and D.-S. Kwon, "A fuzzy intimacy space model to develop human-robot affective relationship," Proc. of World Automation Congress, pp. 1-6, 2010.
  10. N. Bellotto and H. Hu, "Multisensor-Based Human Detection and Tracking for Mobile Service Robots," IEEE Trans. Syst., Man, Cybern. B, vol. 39, no. 1, pp. 167-181, 2009. https://doi.org/10.1109/TSMCB.2008.2004050
  11. B.-G. Lee, J. Choi, S. Yoon, M.-T. Choi, M. Kim, and D. Kim, "Audio-visual fusion for sound source localization and improved attention," Transactions of the KSME A, vol. 35, no. 7, pp. 737-743, 2011. https://doi.org/10.3795/KSME-A.2011.35.7.737
  12. M. T. Palmer and G. A. Barnett, Progress in Communication Sciences, vol. 14. G. A. Barnett (Ed.), Norwood, NJ:Ablex, 1998.
  13. Y.-M. Kim and D.-S. Kwon, "A fuzzy intimacy space model to develop human-robot affective relationship," Proc. of World Automation Congress, pp. 1-6, 2010.
  14. J. Yoon and D. Kim, "Frontal face classifier using adaboost with mct features," Proc of Int. Conf. Control, Automation, Robotics and Vision, pp. 2084-2087, 2010.
  15. J. Shin and D. Kim, "Expression recognition system on mobile terminals," Workshop on Image Processing and Image Understanding (in Korean), Feb. 2009.
  16. OpenNI Framework, http://www.openni.org
  17. G. H. Lim and I. H. Suh, "Robust robot knowledge instantiation for intelligent service robots," Intel. Serv. Robot., vol. 3, pp. 115-123, 2010. https://doi.org/10.1007/s11370-010-0063-6