Interaction with Agents in the Virtual Space Combined by Recognition of Face Direction and Hand Gestures


  • Kang-Hyun Jo (Dept. of Control and Instrumentation Engineering, University of Ulsan) ;
  • Sung-Eun Kim (Virtual Reality Research Center, ETRI) ;
  • In-Ho Lee (Computer Software Research Laboratory, ETRI)
  • Published : 2002.05.01

Abstract

In this paper, we describe a system for interacting with agents in a virtual space. The system consists of an analysis subsystem that recognizes human gestures and an interaction subsystem that uses the recognized information to drive agents in the virtual space. The analysis subsystem captures image sequences of an operator's continuous behavior with CCD cameras and extracts the head and hand regions. For the interaction subsystem, we construct a virtual space containing an avatar that embodies the operator, an autonomous agent (a puppy), and non-autonomous objects such as a table, a door, a window, and a movable object. A recognized gesture is transmitted to the avatar in the virtual space, which then moves to its next state according to a state transition diagram. The state transition diagram is a graph in which each state is represented as a node and the nodes are connected by links. In the virtual space, the avatar can open and close the window and the door, grab or move an object such as a ball, give commands to the puppy, and respond to the puppy's behavior.
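The state transition diagram described in the abstract can be sketched as a lookup over (state, gesture) pairs: each avatar state is a node, and a recognized gesture selects the outgoing link to the next state. The state and gesture names below are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch of a state transition diagram: states are nodes,
# and each recognized gesture selects a link to the next state.
class StateTransitionDiagram:
    def __init__(self):
        # Maps (current_state, gesture) -> next_state.
        self.links = {}

    def add_link(self, state, gesture, next_state):
        self.links[(state, gesture)] = next_state

    def transit(self, state, gesture):
        # Remain in the current state if no link matches the gesture.
        return self.links.get((state, gesture), state)

# Hypothetical states and gestures for illustration only.
diagram = StateTransitionDiagram()
diagram.add_link("idle", "point_door", "walk_to_door")
diagram.add_link("walk_to_door", "open_gesture", "open_door")
diagram.add_link("idle", "point_ball", "grab_ball")

state = "idle"
state = diagram.transit(state, "point_door")    # -> "walk_to_door"
state = diagram.transit(state, "open_gesture")  # -> "open_door"
```

Unmatched gestures leave the avatar in its current state, so the diagram only needs to enumerate the transitions it supports.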

In this paper, we implement a system in which a computer recognizes human behavior so that the operator can interact with agents in a virtual space. The system consists of a recognition subsystem that recognizes the operator's actions and an interaction subsystem that, using the recognized information, mediates interaction among the agents in a pre-built virtual space. The recognition subsystem receives the operator's continuous behavior from CCD cameras and extracts head and hand features from each frame. It then interprets the extracted features over time and recognizes the gesture. For the interaction subsystem, we construct a virtual space containing an avatar that embodies the operator, an autonomously acting puppy, and non-autonomous objects: a table, a door, a window, and movable objects such as a ball. A recognized gesture is delivered to the avatar in the virtual space through the interaction subsystem. The avatar's state transitions are based on a state transition diagram, a graph in which each action is defined as a node and the nodes are connected by dependent links. The avatar can open and close the door and the window, and grab or move objects. It can also give commands to the puppy and respond to the puppy's behavior.
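The abstract does not detail how the head and hand regions are extracted from each frame. One common approach (an assumption here, not necessarily the authors' method) is skin-color thresholding followed by connected-component labeling, with each blob's centroid serving as a candidate head or hand position:

```python
import numpy as np

def skin_mask(frame_rgb):
    # Rough skin-color threshold in normalized r-g chromaticity space.
    # The thresholds are illustrative; the paper's actual feature
    # extraction may differ.
    rgb = frame_rgb.astype(np.float32) + 1e-6
    s = rgb.sum(axis=2)
    r, g = rgb[..., 0] / s, rgb[..., 1] / s
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.40)

def region_centroids(mask):
    # Label 4-connected components with a simple flood fill and return
    # the centroid of each region (candidate head/hand blobs).
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                stack, pts = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    pts.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pts)
                centroids.append((sum(ys) / len(pts), sum(xs) / len(pts)))
    return centroids
```

Tracking the largest blobs across consecutive frames would then give the temporal trajectories that the recognition subsystem interprets as gestures.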

References

  1. K.H. Jo, Y. Kuno and Y. Shirai, 'Manipulative hand gesture recognition using task knowledge for human computer interaction,' Proc. 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 468-473, 1998 https://doi.org/10.1109/AFGR.1998.670992
  2. Francis K.H. Quek, 'Unencumbered Gestural Interaction,' IEEE MultiMedia, Vol. 3, No. 4, pp. 36-47, Winter 1996 https://doi.org/10.1109/93.556459
  3. Ming-Hsuan Yang and Narendra Ahuja, 'Extraction and Classification of Visual Motion Patterns for Hand Gesture Recognition,' In Proceeding of the IEEE CVPR, pp. 892-897, Santa Barbara, 1998 https://doi.org/10.1109/CVPR.1998.698710
  4. Maylor K. Leung and Yee-Hong Yang, 'First Sight: A Human Body Outline Labeling System,' IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 17, No. 4, April 1995 https://doi.org/10.1109/34.385981
  5. David J. Beymer, 'Face Recognition Under Varying Pose,' A.I. Memo No. 1461, Artificial Intelligence Lab., MIT, 1993
  6. Greg Welch and Gary Bishop, 'An Introduction to the Kalman Filter,' TR 95-041, University of North Carolina at Chapel Hill, February 8, 2001
  7. C.H. Sul, S.K. Jung and K.Y. Wohn, 'Synthesis of Human Motion using Kalman Filter,' Proceedings of CapTech'98, First International Workshop on Modeling and Motion Capture Technologies for Virtual Environments, 26-28 November 1998, Geneva, Switzerland
  8. Matheen Sidiqui, 'Calibration and its Application to Stereo,' ENG SC467, Senior Honors Thesis, Fall 1999
  9. Zhengyou Zhang, 'A Flexible New Technique for Camera Calibration,' Technical Report, MSR-TR-98-71, Microsoft Research, One Microsoft Way, Redmond, WA 98052-6399. USA
  10. Emanuele Trucco and Alessandro Verri, 'Introductory Techniques for 3-D Computer Vision,' Prentice Hall, New Jersey, 1998
  11. Ismail Haritaoglu, David Harwood, and Larry S. Davis, 'W4: Real-Time Surveillance of People and Their Activities,' IEEE Transaction on Pattern Analysis and Machine Intelligence, Vol. 22, No. 8, August 2000 https://doi.org/10.1109/34.868683
  12. J.K. Aggarwal and Q. Cai, 'Human Motion Analysis: A Review,' Computer Vision and Image Understanding, Vol. 73, No. 3, pp. 428-440, March 1999 https://doi.org/10.1006/cviu.1998.0744
  13. Christopher Wren, Ali Azarbayejani, Trevor Darrell and Alex Pentland, 'Pfinder: Real-Time Tracking of the Human Body,' IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 780-785, July 1997 https://doi.org/10.1109/34.598236
  14. Christopher R. Wren and Alex P. Pentland, 'Dynamic Models of Human Motion,' IEEE International Conference on Automatic Face and Gesture Recognition, Japan, April 14-16, 1998 https://doi.org/10.1109/AFGR.1998.670920
  15. Ali Azarbayejani and Alex Pentland, 'Real-time self-calibrating stereo person tracking using 3-D shape estimation from blob features,' ICPR'96, Vienna, Austria, August 1996 https://doi.org/10.1109/ICPR.1996.547022
  16. Pattie Maes, Bruce Blumberg, Trevor Darrell and Alex Pentland, 'The Alive System: Wireless, Full-body Interaction with Autonomous Agents,' ACM Multimedia Systems, 5:105-112, 1997 https://doi.org/10.1007/s005300050046
  17. F. Parke, 'Parameterized models for facial animation,' IEEE Computer Graphics and Applications, 2(9): 61-68, November, 1982 https://doi.org/10.1109/MCG.1982.1674492
  18. D. Terzopoulos and K. Waters, 'Physically-based facial modeling, analysis, and animation,' Visualization and Computer Animation, 1:73-80, 1990 https://doi.org/10.1002/vis.4340010208
  19. Y. Lee, D. Terzopoulos and K. Waters, 'Realistic Modeling for Facial Animation,' In Computer Graphics Proceedings, Annual Conference Series, 1995, ACM SIGGRAPH, pp. 55-62 https://doi.org/10.1145/218380.218407
  20. Lawrence R. Rabiner, 'A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,' Proc. of the IEEE, Vol. 77, No. 2, pp. 257-286, Feb. 1989 https://doi.org/10.1109/5.18626
  21. S.E. Kim, K.H. Jo, H.S. Jeon, W.H. Choi and K.S. Park, 'Face Direction and Hand Gesture Analysis for Human Behavior Recognition,' Journal of Control, Automation and Systems Engineering, Vol. 7, No. 4, pp. 309-318, April 2001