기존 3차원 인터랙션 동작인식 기술 현황 파악을 위한 메타분석

Analysis of 3D Motion Recognition using Meta-analysis for Interaction

  • Kim, Yong-Woo (Department of Computer Science, Sangmyung University) ;
  • Whang, Min-Cheol (Department of Digital Media Technology, Sangmyung University) ;
  • Kim, Jong-Hwa (Department of Emotion Engineering, Sangmyung University) ;
  • Woo, Jin-Cheol (Human-Computer Interaction Laboratory, University of Arkansas) ;
  • Kim, Chi-Jung (Department of Computer Science, Sangmyung University) ;
  • Kim, Ji-Hye (Department of Computer Science, Sangmyung University)
  • Received: 2010.04.01
  • Accepted: 2010.11.04
  • Published: 2010.12.31

Abstract

Most research in the field of three-dimensional interaction has reported different recognition accuracies depending on the sensing approach, mode, and method, and implementations of such interaction have lacked consistency across application fields. This study therefore uses meta-analysis to identify research trends in three-dimensional interaction. A keyword search of databases yielded 153 domestic and 188 international papers on three-dimensional interaction, and an analytical coding table narrowed these to 18 domestic and 28 international papers for analysis. Frequency analysis was carried out on the motion method, elements, number of motions, and accuracy, and the reported accuracies were then verified through the effect sizes of the meta-analysis. As a result, the effect size of sensor-based recognition was higher than that of vision-based recognition, but the effect size was as small as 0.02, whereas for hand motions the effect size of vision-based recognition was higher than that of sensor-based recognition. Accordingly, it is more efficient to implement three-dimensional interaction with sensor-based recognition in general and with vision-based recognition for hand motions. The significance of this study lies in its comprehensive analysis of three-dimensional motion recognition for interaction and in the directions it suggests for applying three-dimensional interaction.
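
As a reading aid, the sketch below shows one common way such a comparison can be expressed as an effect size, namely a standardized mean difference (Cohen's d) between the accuracies reported by sensor-based and vision-based studies. The accuracy values and the choice of estimator are illustrative assumptions only; the abstract does not state which effect-size formula the authors applied.

    # Illustrative only: placeholder accuracies and Cohen's d as the effect-size
    # estimator; the paper's actual data and formula may differ.
    from statistics import mean, stdev

    sensor_acc = [0.92, 0.88, 0.95, 0.90]  # hypothetical accuracies from sensor-based studies
    vision_acc = [0.89, 0.93, 0.86, 0.91]  # hypothetical accuracies from vision-based studies

    def cohens_d(a, b):
        """Standardized mean difference using a pooled standard deviation."""
        n_a, n_b = len(a), len(b)
        pooled_var = ((n_a - 1) * stdev(a) ** 2 + (n_b - 1) * stdev(b) ** 2) / (n_a + n_b - 2)
        return (mean(a) - mean(b)) / pooled_var ** 0.5

    print(f"Effect size (sensor-based vs. vision-based): {cohens_d(sensor_acc, vision_acc):.2f}")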

Keywords

References

  1. 김진우, Human Computer Interaction, 안그라픽스, 2005.
  2. Preece, J., et al., Interaction Design: Beyond Human-Computer Interaction, 2nd ed., John Wiley & Sons, 2006.
  3. Bowman, D. A., et al., 3D User Interfaces: Theory and Practice, Pearson Education, 2005.
  4. 홍동표, 우운택, "제스처기반 사용자 인터페이스에 대한 연구 동향", Telecommunications Review, 제18권 3호, 2008년 6월.
  5. 조경은, 조형제, "확률적 정규 문법 추론법에 의한 사람 몸동작 인식", 정보과학회논문지: 컴퓨팅의 실제, 7(pp. 248-259), 2001.
  6. 남연희, 채인석, "메타분석을 활용한 중증장애인 자립생활에 영향을 미치는 요인에 관한 연구", Journal of Public Welfare Administration, 18(pp. 179-198), 2008.
  7. Kitchenham, B., "Procedures for Performing Systematic Reviews", Joint Technical Report, 2004.
  8. Pai, M., et al., "Systematic reviews and meta-analyses: An illustrated, step-by-step guide", The National Medical Journal of India, 2004.
  9. 강진숙, "국내 인터넷 연구의 주제와 방법에 대한 메타분석", 한국언론학보, 52(pp. 173-198), 2008.
  10. 황상재, 박석철, "국내 인터넷 연구의 메타분석", 한국방송학보, 68-92, 2004.
  11. 오성삼, "메타분석의 이론과 실제", 2002.
  12. 권태희, 임좌상, "UML 표기법의 유용성 평가에 대한 연구: Systematic Review", 한국정보과학회, pp. 121-126, 2008.
  13. 이동욱 et al., "마우스 포인터 제어를 위한 실시간 손 인식 알고리즘", 한국방송공학회, pp. 211-214, 2008.
  14. 장영대, 박지헌, "스테레오 카메라를 이용한 동작인식 인터페이스 및 비접촉식 마우스에 대한 개발", 한국정보기술학회논문지, 7(pp. 242-252), 2009.
  15. 이동석 et al. "스테레오 카메라를 이용한 이동객체의 실시간 추적과 거리 측정 시스템", 방송공학회논문지, 14(pp. 366-377), 2009. https://doi.org/10.5909/JBE.2009.14.3.366
  16. 김진우 et al. "윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스", 정보과학회논문지: 소프트웨어 및 응용, 36(pp. 471-478), 2009.
  17. 김상기 et al. "3차원 가속도 데이타를 이용한 HMM 기반의 동작인식", 정보과학회논문지: 컴퓨팅의 실제, 15(pp. 216-220), 2009.
  18. 임새미 et al. "3차원 가속 센서 및 RFID 센서를 이용한 ADL 자동 분류", 전자공학회논문지-CI, 45(pp. 135-141), 2008.
  19. 석흥일 et al. "3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작인식", 정보과학회논문지: 소프트웨어 및 응용, 35(pp. 780-788), 2008.
  20. 배수정 et al., "다중 카메라 기반 3차원 인간 행동 인식 연구", 한국정보과학회, pp. 395-399, 2008.
  21. 김순기, 김대진, "인간 로봇 상호작용을 위한 Disparity 정보를 이용한 동작인식", 한국정보과학회, pp. 142-146, 2008.
  22. 김혜정, 이경미, "RBF 신경망을 이용한 3D 동작 추정", 한국정보과학회, pp. 485-488, 2006.
  23. 조성정 et al., "관성 센서를 이용한 공간상의 제스처 입력 시스템", 한국정보과학회, pp. 709-711, 2004.
  24. 박현진 et al. "가상현실에서 행위와 인지에 기반한 인공생명과의 상호작용시스템", 정보과학회논문지: 소프트웨어 및 응용, 28(pp. 493-500), 2001.
  25. 노명철 et al., "휴먼 행동 분석을 위한 3차원 제스처 데이터베이스의 설계 및 구축", 한국정보과학회, pp. 895-897, 2005.
  26. 박세영 et al., "유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커", 한국정보과학회, pp. 444-448, 2008.
  27. 송효섭 et al. "손의 형상과 움직임 방향 정보를 이용한 수화 인식", 정보과학회논문지(B), 26(pp. 804-810), 1999.
  28. 이래경, 김성신, "손동작인식을 통한 Human-Computer Interaction 구현", 한국지능시스템학회 논문지, 11(pp. 28-32), 2001.
  29. 엄재성 et al. "실시간 인체 3차원 모델링 시스템", 한국정보기술학회논문지, 6(pp. 26-34), 2008.
  30. 조성은, "장애아동 부모교육 프로그램의 효과에 관한 메타분석", 특수교육저널: 이론과 실천, 5(pp. 415-429), 2004.
  31. Bowden, R., et al. "Non-linear statistical models for the 3D reconstruction of human pose and motion from monocular image sequences", Image and Vision Computing, 18(pp. 729-737), 2000. https://doi.org/10.1016/S0262-8856(99)00076-1
  32. Chua, C-S., et al. "Model-based 3D hand posture estimation from a single 2D image", Image and Vision Computing, 20(pp. 191-202), 2002. https://doi.org/10.1016/S0262-8856(01)00094-4
  33. Chen, F-S., et al. "Hand gesture recognition using a real-time tracking method and hidden Markov models", Image and Vision Computing, 21(pp. 745-758), 2003. https://doi.org/10.1016/S0262-8856(03)00070-2
  34. McCane, B. and Caelli, T., "Diagnostic tools for evaluating and updating hidden Markov models", Pattern Recognition, 37(pp. 1325-1337), 2004. https://doi.org/10.1016/j.patcog.2003.12.017
  35. Licsar, A. and Sziranyi, T., "User-adaptive hand gesture recognition system with interactive training", Image and Vision Computing, 23(pp. 1102-1114), 2005. https://doi.org/10.1016/j.imavis.2005.07.016
  36. Shamaie, A. and Sutherland, A., "Hand tracking in bimanual movements", Image and Vision Computing, 23(pp. 1131-1149), 2005. https://doi.org/10.1016/j.imavis.2005.07.010
  37. Kehl, R. and Gool, L. V., "Markerless tracking of complex human motions from multiple views", Computer Vision and Image Understanding, 104(pp. 190-209), 2006. https://doi.org/10.1016/j.cviu.2006.07.010
  38. Ong, S. C. W., et al. "Understanding gestures with systematic variations in movement dynamics", Pattern Recognition, 39(pp. 1633-1648), 2006. https://doi.org/10.1016/j.patcog.2006.02.010
  39. Patwardhan, K. S. and Dutta Roy, S., "Hand gesture modelling and recognition involving changing shapes and trajectories, using a Predictive EigenTracker", Pattern Recognition Letters, 28(pp. 329-334), 2007. https://doi.org/10.1016/j.patrec.2006.04.002
  40. Shan, C., et al. "Real-time hand tracking using a mean shift embedded particle filter", Pattern Recognition, 40(pp. 1958-1970), 2007. https://doi.org/10.1016/j.patcog.2006.12.012
  41. Wang, Q., et al. "Viewpoint invariant sign language recognition", Computer Vision and Image Understanding, 108(pp. 87-97), 2007. https://doi.org/10.1016/j.cviu.2006.11.009
  42. Yin, X. and Xie, M., "Finger identification and hand posture recognition for human-robot interaction", Image and Vision Computing, 25(pp. 1291-1300), 2007. https://doi.org/10.1016/j.imavis.2006.08.003
  43. Caillette, F., et al. "Real-time 3-D human body tracking using learnt models of behaviour", Computer Vision and Image Understanding, 109(pp. 112-125), 2008. https://doi.org/10.1016/j.cviu.2007.05.005
  44. Derpanis, K. G., et al. "Definition and recovery of kinematic features for recognition of American sign language movements", Image and Vision Computing, 26(pp. 1650-1662), 2008. https://doi.org/10.1016/j.imavis.2008.04.007
  45. Ge, S. S., et al. "Hand gesture recognition and tracking based on distributed locally linear embedding", Image and Vision Computing, 26(pp. 1607-1620), 2008. https://doi.org/10.1016/j.imavis.2008.03.004
  46. Hsieh, J-W. and Hsu, Y-T., "Boosted string representation and its application to video surveillance", Pattern Recognition, 41(pp. 3078-3091), 2008. https://doi.org/10.1016/j.patcog.2008.03.026
  47. Malassiotis, S. and Strintzis, M. G., "Real-time hand posture recognition using range data", Image and Vision Computing, 26(pp. 1027-1037), 2008. https://doi.org/10.1016/j.imavis.2007.11.007
  48. Aran, O., et al. "A belief-based sequential fusion approach for fusing manual signs and non-manual signals", Pattern Recognition, 42(pp. 812-822), 2009. https://doi.org/10.1016/j.patcog.2008.09.010
  49. Bandera, J. P., et al. "Fast gesture recognition based on a two-level representation", Pattern Recognition Letters, 30(pp. 1181-1189), 2009. https://doi.org/10.1016/j.patrec.2009.05.017
  50. Bernier, O., et al. "Fast nonparametric belief propagation for real-time stereo articulated body tracking", Computer Vision and Image Understanding, 113(pp. 29-47), 2009. https://doi.org/10.1016/j.cviu.2008.07.001
  51. Ding, L. and Martinez, A. M., "Modelling and recognition of the linguistic components in American Sign Language", Image and Vision Computing, 27(pp. 1826-1844), 2009. https://doi.org/10.1016/j.imavis.2009.02.005
  52. Just, A. and Marcel, S. A., "A comparative study of two state-of-the-art sequence processing techniques for hand gesture recognition", Computer Vision and Image Understanding, 113(pp. 532-543), 2009. https://doi.org/10.1016/j.cviu.2008.12.001
  53. Keskin, C. and Akarun, L., "STARS: Sign tracking and recognition system using input-output HMMs", Pattern Recognition Letters, 30(pp. 1086-1095), 2009. https://doi.org/10.1016/j.patrec.2009.03.016
  54. Liu, C. and Yuen, P. C., "Human action recognition using boosted EigenActions", Image and Vision Computing, 28(pp. 825-835), 2010.
  55. Qian, H., et al. "Recognition of human activities using SVM multi-class classifier", Pattern Recognition Letters, 31(pp. 100-111), 2009. https://doi.org/10.1016/j.patrec.2009.09.019
  56. Vincze, M., et al. "Integrated vision system for the semantic interpretation of activities where a person handles objects", Computer Vision and Image Understanding, 113(pp. 682-692), 2009. https://doi.org/10.1016/j.cviu.2008.10.008
  57. Mikic, I., et al. "Human body model acquisition and tracking using voxel data", International Journal of Computer Vision, 2003.
  58. Ren, L., et al. "Learning silhouette features for control of human motion", ACM Trans. Graph, 2005.

Cited by

  1. The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV, vol. 31, no. 4, 2012, https://doi.org/10.5143/JESK.2012.31.4.525