Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face


  • Sang Il Park (Department of Digital Contents, Sejong University)
  • Received : 2012.05.04
  • Accepted : 2012.08.30
  • Published : 2012.09.01

Abstract

In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because full-body capture and facial-expression capture require different capture resolutions, the two have rarely been performed simultaneously. For natural animation, however, simultaneous capture of body and face is essential. To this end, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality; this dimensionality reduction enables us to estimate the full data from a part of the data. We demonstrate the viability of the method by applying it to dynamic scenes.
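The abstract names the two core steps, building an expression database and using a PCA subspace to estimate the full marker set from a sparse one, but gives no implementation detail. The following is a minimal Python/NumPy sketch of that pipeline under those stated assumptions; all function and variable names (build_pca_model, reconstruct_full, obs_idx, and so on) are illustrative, not the paper's API.

    import numpy as np

    def build_pca_model(X, var_ratio=0.99):
        # X: (n_frames, 3*n_markers) database; each row stacks the x, y, z
        # coordinates of every facial marker for one captured expression.
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        var = s ** 2
        # Keep enough components to explain the requested variance ratio.
        k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_ratio)) + 1
        return mean, Vt[:k]                      # k principal components

    def reconstruct_full(mean, basis, obs_idx, obs_vals):
        # obs_idx: indices (into the stacked coordinate vector) of the few
        # coordinates actually measured; obs_vals: their measured values.
        # Solve for the subspace coefficients that best explain the
        # observations, then expand them back to the full marker set.
        A = basis[:, obs_idx].T                  # (n_obs, k)
        b = obs_vals - mean[obs_idx]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return mean + coeffs @ basis             # full (3*n_markers,) estimate

With at least as many observed coordinates as retained components, the least-squares system is overdetermined; in practice one would likely also regularize the coefficients (for example with a ridge term weighted by the PCA eigenvalues) so that reconstructions stay within the span of plausible expressions.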

This paper addresses the reconstruction of the incomplete facial-expression data that arises when the face and body are captured simultaneously with an optical motion-capture system. Body capture and expression capture generally differ in the resolution they require, which has made simultaneous capture of motion and expression difficult. To capture both at once, we depart from the conventional approach of densely attaching small markers to the face: we capture the expression with only a small number of markers and then reconstruct the detailed facial expression from them. The key idea is to build a database of facial-expression movements in advance and use it to reconstruct an expression represented by only a few markers. Principal component analysis is used for this purpose, and we verify that expressions are reconstructed well by applying the proposed technique to real dynamic scenes.
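In the dynamic scenes mentioned above, the sparse face markers move rigidly with the head as well as non-rigidly with the expression, so each frame presumably has to be registered to the database's reference pose before the subspace coefficients are solved. A standard closed-form choice for that rigid registration is the absolute-orientation solution (Horn's unit-quaternion method, or the equivalent SVD/Kabsch form sketched below); its use at this step is an assumption, not a detail stated in the abstracts.

    import numpy as np

    def rigid_align(src, dst):
        # Least-squares rigid transform (R, t) with R @ src[i] + t ~= dst[i].
        # SVD (Kabsch) form; yields the same optimum as Horn's closed-form
        # unit-quaternion solution for absolute orientation.
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cd - R @ cs

Per frame, one would align the observed world-space markers to the corresponding marker positions in the database's reference pose, feed the aligned positions to the subspace reconstruction, and map the reconstructed full marker set back with the inverse transform.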
