Automatic Synchronization of Separately-Captured Facial Expression and Motion Data


  • 정태완 (Department of Digital Contents, Sejong University)
  • 박상일 (Department of Digital Contents, Sejong University)
  • Received : 2012.02.17
  • Accepted : 2012.02.28
  • Published : 2012.03.01

Abstract

In this paper, we present a new method for automatically synchronizing separately captured facial expression data with its corresponding body motion data. In a typical optical motion capture set-up, detailed facial expressions cannot be captured simultaneously with body motion, because facial capture requires a higher resolution than body motion capture. The two are therefore captured in separate sessions and must be synchronized in post-processing before they can be used to generate convincing character animation. We present a non-linear time-warping method that performs this synchronization automatically, based on the patterns of the actor's neck movement extracted from the two data sets. We demonstrate the viability of the method with actual examples.
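The abstract describes the approach only at a high level: a shared signal (the actor's neck movement, present in both captures) drives a non-linear time warp between the two sessions. As a concrete illustration, the following is a minimal sketch that uses classic dynamic time warping (DTW) as the non-linear warp. It is not the paper's exact formulation; all names here (`neck_speed_signal`, `dtw_path`, the marker arrays) are hypothetical, and the paper's actual neck-movement pattern and warping constraints may differ.

```python
import numpy as np

def neck_speed_signal(markers):
    """Per-frame speed of a neck marker trajectory.

    markers: (T, 3) array of 3D positions of a marker visible in both
    capture sessions (hypothetical input; the paper extracts a richer
    neck-movement pattern).
    """
    return np.linalg.norm(np.diff(markers, axis=0), axis=1)

def dtw_path(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping of two 1D signals.

    Returns a monotonic, continuous list of (i, j) frame pairs aligning
    `a` to `b`, i.e. a non-linear time warp.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # a advances, b repeats
                                 cost[i, j - 1],      # b advances, a repeats
                                 cost[i - 1, j - 1])  # both advance
    # Backtrack from the end to recover the optimal warp path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin((cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Hypothetical usage: face_neck and body_neck are (T, 3) marker tracks
# from the two separate capture sessions.
# warp = dtw_path(neck_speed_signal(face_neck), neck_speed_signal(body_neck))
```

Given the warp path, each frame of the facial take can be mapped onto the body-take timeline (with resampling or smoothing as needed) so that the two sessions play back in sync.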


