Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition

  • Received : 2012.07.13
  • Accepted : 2012.12.04
  • Published : 2013.02.28

Abstract

Body gesture recognition has been a research field of great interest for Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use Hidden Markov Models (HMMs) to model gestures, which exhibit spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. In addition, conventional algorithms must perform gesture segmentation first and then pass each extracted gesture to the HMM for recognition. This separated pipeline introduces a time delay between two consecutive gestures and makes the system unsuitable for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of the motion of each link of a human body model and encodes it into predefined primitive codes, together with a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple-simultaneous gesture recognition. Experimental results show that the proposed gesture recognition system using primitive body model encoding and SAI-PSM excludes meaningless gestures well from continuous body model data while performing multiple-simultaneous gesture recognition without loss of recognition rate compared to previous HMM-based work.
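
As a rough illustration of the two ideas described in the abstract, the Python sketch below (not taken from the paper) quantizes each body-model link's displacement into a small set of primitive codes and feeds every frame's codes to several simple gesture state machines running in parallel; each machine reacts only to the codes belonging to its own gesture and ignores everything else, so unrelated motion does not trigger a false recognition. All primitive codes, gesture definitions, thresholds, and names are illustrative assumptions, not the authors' actual encoding or SAI-PSM.

import math

PRIMITIVES = ["STILL", "UP", "DOWN", "LEFT", "RIGHT"]  # assumed primitive code set

def encode_link_motion(prev_pos, cur_pos, still_thresh=0.02):
    """Quantize one body-model link's displacement into a primitive code."""
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    if math.hypot(dx, dy) < still_thresh:      # small motions quantize to STILL
        return "STILL"
    if abs(dy) >= abs(dx):
        return "UP" if dy > 0 else "DOWN"
    return "RIGHT" if dx > 0 else "LEFT"

class GestureStateMachine:
    """One gesture = an ordered sequence of (link, primitive) inputs.

    The machine only reacts to codes from its own link (selective input),
    ignores STILL codes so pauses do not reset progress, and resets only on
    a conflicting code, so meaningless motion between gestures is excluded.
    """
    def __init__(self, name, sequence):
        self.name = name
        self.sequence = sequence   # e.g. [("left_hand", "LEFT"), ("left_hand", "RIGHT")]
        self.state = 0

    def feed(self, link, code):
        expected_link, expected_code = self.sequence[self.state]
        if link != expected_link or code == "STILL":
            return False
        if code == expected_code:
            self.state += 1
            if self.state == len(self.sequence):
                self.state = 0
                return True        # gesture recognized
        else:
            self.state = 0         # conflicting code: restart this gesture
        return False

# Parallel machines: every frame's per-link codes go to all machines at once,
# so simultaneous gestures on different links can be recognized together.
machines = [
    GestureStateMachine("raise_right_hand", [("right_hand", "UP")]),
    GestureStateMachine("wave_left_hand", [("left_hand", "LEFT"), ("left_hand", "RIGHT")]),
]

def process_frame(per_link_codes):
    """per_link_codes: dict mapping link name -> primitive code for this frame."""
    recognized = []
    for link, code in per_link_codes.items():
        for m in machines:
            if m.feed(link, code):
                recognized.append(m.name)
    return recognized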

Cited by

  1. Development of a Practical Telepresence Robot System Conveying a Speaker's Positive and Negative Intentions, vol.10, no.3, 2013, https://doi.org/10.7746/jkros.2015.10.3.171
  2. Hand-Raised Posture Detection Algorithm from a Single Camera Image for a Mobile Robot, vol.10, no.4, 2013, https://doi.org/10.7746/jkros.2015.10.4.223