• Title/Summary/Keyword: Motion Capture Animation


Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation based on the facial motion data. By distributing the facial expressions over an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the facial expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space. By navigating this space projected onto a 2-dimensional plane and selecting a series of expressions, the user can create animations or control the expressions of a 3-dimensional avatar in real time. To distribute the roughly 2,400 expression frames over an intuitive space, the state of each expression must be represented; for this, a distance matrix containing the distances between pairs of facial feature points is used. The LLE algorithm then projects this data onto the 2-dimensional plane for visualization. Animators controlled facial expressions and created animations through the system's user interface, and the paper evaluates the results of this experiment.
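
As a concrete illustration of the projection step described above, the following sketch embeds per-frame pairwise feature-point distances into a 2-dimensional expression space with scikit-learn's LocallyLinearEmbedding. It is a minimal sketch, not the paper's implementation: the frame array, feature-point count, and neighbor count are assumed placeholder values.

```python
# Minimal sketch: project motion-capture facial frames into a 2D expression
# space with LLE, using per-frame pairwise feature-point distances as the
# state descriptor (as described in the abstract). Frame data is synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
n_frames, n_points = 2400, 20                       # assumed: ~2,400 frames, 20 feature points
frames = rng.normal(size=(n_frames, n_points, 3))   # placeholder for captured data

# Describe each frame by the distances between all pairs of feature points.
descriptors = np.array([pdist(frame) for frame in frames])    # (2400, 190)

# Embed the frames into a 2D "facial expression space".
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
expression_space = lle.fit_transform(descriptors)             # (2400, 2)

# Each row of expression_space is a point an animator could pick on a 2D
# plane; selecting a path of points yields a sequence of source frames.
print(expression_space.shape)
```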

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho; Jung, Moon-Ryul
    • The KIPS Transactions: Part A / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that lets the user select a sequence of facial frames from a facial expression space whose level of detail can be chosen hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many expressions to choose from, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions; the cluster centers are displayed on a 2D screen and serve as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), this indicates that the user wants more detail, so the system creates more clusters for the new zoom level, doubling the number of clusters each time the level increases. The user selects new key frames along the navigation path of the previous level. At maximum zoom-in, the user completes the facial expression control specification; the user can also zoom out to a previous level and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate it based on the results.
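
The hierarchical visualization step above can be sketched with a plain fuzzy c-means routine that doubles the cluster count at each zoom level (11, 22, 44, ...) and exposes the cluster centers as candidate key frames. This is a generic illustration under assumed data, not the paper's system; the random 2D points stand in for projected expression frames.

```python
# Minimal sketch: hierarchical visualization via fuzzy c-means, doubling the
# number of clusters at each zoom level (11, 22, 44, ...). The data here is
# random 2D points standing in for the projected expression frames.
import numpy as np

def fuzzy_cmeans(points, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain (textbook) fuzzy c-means; returns centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
frames_2d = rng.normal(size=(2400, 2))           # placeholder expression space

# Zoom level 0 starts with ~11 clusters; each zoom-in doubles the count.
for level in range(3):
    k = 11 * (2 ** level)
    centers, memberships = fuzzy_cmeans(frames_2d, k)
    print(f"zoom level {level}: {k} candidate key frames", centers.shape)
```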

Inductive Inverse Kinematics Algorithm for the Natural Posture Control (자연스러운 자세 제어를 위한 귀납적 역운동학 알고리즘)

  • Lee, Bum-Ro; Chung, Chin-Hyun
    • Journal of KIISE: Computing Practices and Letters / v.8 no.4 / pp.367-375 / 2002
  • Inverse kinematics is a very useful method for controlling the posture of an articulated body. In most inverse kinematics processes, the main concern is not the posture of the articulated body itself but the position and orientation of the end effector. In some applications such as 3D character animation, however, it is more important to generate an overall natural posture for the character than to place the end effector at the exact position. Indeed, when an animator wants to modify the posture of a human-like 3D character with many physical constraints, considerable trial and error is needed to produce a realistic posture. In this paper, the Inductive Inverse Kinematics (IIK) algorithm using a Uniform Posture Map (UPM) is proposed to control the posture of a human-like 3D character. The proposed algorithm quantizes human behaviors without distortion to generate a UPM and then produces a natural posture by searching the UPM. If necessary, the resulting posture can be refined with traditional Cyclic Coordinate Descent (CCD). The proposed method can be applied to key-frame-based 3D character animation, 3D games, and virtual reality.
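
The Cyclic Coordinate Descent compensation step mentioned above is a standard algorithm; the following sketch shows CCD on a planar 3-joint chain. It is only an illustration under assumed link lengths, target, and tolerance, and the Uniform Posture Map search itself is not reproduced.

```python
# Minimal sketch: Cyclic Coordinate Descent (CCD) IK on a planar 3-joint
# chain. CCD is the compensation step mentioned in the abstract; the UPM
# search is not reproduced. Link lengths and target are made-up values.
import math

link_lengths = [1.0, 1.0, 0.5]     # assumed chain
angles = [0.0, 0.0, 0.0]           # joint angles (radians)
target = (1.2, 1.4)

def forward_kinematics(angles):
    """Return the (x, y) position of each joint plus the end effector."""
    x = y = theta = 0.0
    positions = [(0.0, 0.0)]
    for a, l in zip(angles, link_lengths):
        theta += a
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        positions.append((x, y))
    return positions

for _ in range(200):                               # CCD sweeps
    for j in reversed(range(len(angles))):         # from last joint to root
        positions = forward_kinematics(angles)
        jx, jy = positions[j]
        ex, ey = positions[-1]
        # Rotate joint j so the joint->effector ray points at the target.
        a_effector = math.atan2(ey - jy, ex - jx)
        a_target = math.atan2(target[1] - jy, target[0] - jx)
        angles[j] += a_target - a_effector
    ex, ey = forward_kinematics(angles)[-1]
    if math.hypot(target[0] - ex, target[1] - ey) < 1e-4:
        break

print("end effector:", forward_kinematics(angles)[-1])
```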