Character Motion Control by Using Limited Sensors and Animation Data

  • Bae, Tae Sung (Dept. of Media Technology and Media Contents, The Catholic University of Korea) ;
  • Lee, Eun Ji (Dept. of Media Technology and Media Contents, The Catholic University of Korea) ;
  • Kim, Ha Eun (Dept. of Media Technology and Media Contents, The Catholic University of Korea) ;
  • Park, Minji (TpotStudio) ;
  • Choi, Myung Geol (Dept. of Media Technology and Media Contents, The Catholic University of Korea)
  • Received : 2019.06.08
  • Accepted : 2019.06.22
  • Published : 2019.07.14

Abstract

A 3D virtual character playing a role in digital storytelling has a unique style in both its appearance and its motion. Because this style reflects the character's personality, preserving it and keeping it consistent is essential. However, when the character's body is directly controlled by a user wearing motion sensors, that unique style can be lost. We present a character motion control method that preserves the character's motion style by using only a small amount of animation data created specifically for that character. Instead of machine learning approaches, which require large training datasets, we propose a search-based method that directly retrieves, from the animation data, the character pose most similar to the user's current pose. To demonstrate the usability of our method, we conducted experiments with a character model and animation data created by a professional designer for a virtual reality game. To show that our method preserves the character's original motion style, we compared our results with those obtained using general human motion capture data. In addition, to demonstrate the scalability of our method, we present experimental results with varying numbers of motion sensors.
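The abstract describes a nearest-pose lookup driven by a small set of worn sensors. The following Python sketch illustrates one way such a search could work; the function name, the plain Euclidean distance over tracked joints, and the array layout are illustrative assumptions of ours, not the paper's actual formulation, which the abstract does not specify.

```python
import numpy as np

def retrieve_character_pose(sensor_pose, animation_poses, tracked_joints):
    """Return the animation frame whose tracked joints best match the sensors.

    sensor_pose     : (S, 3) array, one 3D reading per worn sensor
    animation_poses : (N, J, 3) array, J joint positions for each of N frames
    tracked_joints  : length-S sequence of joint indices paired with the sensors
    """
    # Keep only the joints that have a corresponding sensor (hypothetical
    # mapping; the real sensor-to-joint correspondence is system-specific).
    candidates = animation_poses[:, tracked_joints, :]        # (N, S, 3)
    # Sum of squared per-joint distances between each frame and the sensors.
    dists = ((candidates - sensor_pose[None]) ** 2).sum(axis=(1, 2))
    # The closest stored pose stands in for the user's current pose.
    return animation_poses[int(np.argmin(dists))]
```

Calling such a function once per frame would snap the character to the nearest stored pose; a usable system would presumably also need temporal smoothing between retrieved poses, which this sketch omits.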
