• Title/Summary/Keyword: 3D 얼굴 애니메이션


A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan;Jung, Chul-Hee;Lee, Jae-Yun;Cho, Sun-Young;Lee, Myeong-Won
    • The KIPS Transactions:PartA / v.17A no.6 / pp.281-288 / 2010
  • In this paper, we describe a system for generating a 3-dimensional human face and morphing animations using 3D scanned facial data and photo images. The system comprises a facial feature input tool, a 3-dimensional texture mapping interface, and a 3-dimensional facial morphing interface. The facial feature input tool supports texture mapping and morphing animation: facial morphing areas between two facial models are defined by inputting facial feature points interactively. Texture mapping is done first by means of three photo images of a face model, one front and two side views. The morphing interface then allows a morphing animation to be generated between corresponding areas of two texture-mapped facial models. The system lets users interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
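The morphing animation described above, blending corresponding areas of two face models over time, can be sketched at its simplest as linear interpolation of corresponding vertex positions. This is a minimal illustration, not the paper's area-based formulation:

```python
import numpy as np

def morph_vertices(src, dst, t):
    """Linearly interpolate corresponding vertices of two face models.
    src, dst: (N, 3) arrays of corresponding vertex positions; t in [0, 1]."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    return (1.0 - t) * src + t * dst

# Generate a short morphing sequence between two toy "face" meshes.
face_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
face_b = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
frames = [morph_vertices(face_a, face_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

The interpolation assumes the two models share a vertex-to-vertex correspondence, which is exactly what the paper's interactive feature-point input establishes.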

Estimation of 3D Rotation Information of Animation Character Face (애니메이션 캐릭터 얼굴의 3차원 회전정보 측정)

  • Jang, Seok-Woo;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.8 / pp.49-56 / 2011
  • Recently, animation content has become widely available along with the growth of the cultural industry. In this paper, we propose a method to analyze the face of an animation character and extract 3D rotational information of the face. The suggested method first generates a dominant color model of the face by learning face images of the animation character. Our system then detects the face and its components with the model, and establishes two coordinate systems: a base coordinate system and a target coordinate system. It estimates the three-dimensional rotational information of the character's face using the geometric relationship between the two coordinate systems. Finally, to visualize the extracted 3D information, a 3D face model reflecting the rotation information is displayed. Experiments show that our method extracts the 3D rotation information of a character's face with reasonable accuracy.
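One way to read the "geometric relationship between the two coordinate systems" is as the relative rotation between two orthonormal frames, from which head angles can be recovered. A minimal sketch, not the paper's exact formulation:

```python
import numpy as np

def relative_rotation(base, target):
    """Rotation taking the base frame to the target frame.
    base, target: 3x3 matrices whose columns are orthonormal axis vectors."""
    return np.asarray(target, dtype=float) @ np.asarray(base, dtype=float).T

def yaw_pitch_roll(R):
    """Extract Z-Y-X Euler angles (radians) from a rotation matrix."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

base = np.eye(3)                 # canonical (frontal) face frame
angle = np.deg2rad(30.0)         # face rotated 30 degrees about the vertical axis
target = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
yaw, pitch, roll = yaw_pitch_roll(relative_rotation(base, target))
```

The Euler-angle convention here is one assumed choice; any consistent convention would serve to report the extracted rotation.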

Synthesizing Faces of Animation Characters Using a 3D Model (3차원 모델을 사용한 애니메이션 캐릭터 얼굴의 합성)

  • Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.8 / pp.31-40 / 2012
  • In this paper, we propose a method of synthesizing the faces of a user and an animation character using a 3D face model. The suggested method first receives two orthogonal 2D face images and extracts the major features of the face with a template snake. It then generates a user-customized 3D face model by adjusting a generalized face model to the extracted facial features and mapping texture obtained from the two input images onto the 3D model. Finally, it generates a user-customized animation character by compositing the generated 3D model into an animation character while reflecting the position, size, facial expression, and rotational information of the character. Experimental results verify the performance of the suggested algorithm. We expect our method to be useful in various applications such as games and animated movies.

3D Face Modeling Using Mesh Simplification (메쉬 간략화를 이용한 3차원 얼굴모델링)

  • 이현철;허기택
    • The Journal of the Korea Contents Association / v.3 no.4 / pp.69-76 / 2003
  • Recently, research on 3D animation in computer graphics has been very active, and one of its important areas is the animation of human beings. The creation and animation of 3D facial models has traditionally depended on animators' manual work, frame by frame, which requires much effort and time as well as various hardware and software. In this paper, we implement a way to generate a 3D human face model easily and quickly using only frontal face images. We then suggest a methodology for simplifying the mesh data of the 3D generic model.
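Mesh simplification as mentioned above can be sketched as greedy edge collapse: repeatedly merge the two endpoints of the shortest edge until the target vertex count is reached. This toy version is not the paper's method, just an illustration of the idea:

```python
import numpy as np

def simplify(vertices, faces, target_vertex_count):
    """Greedy edge-collapse simplification (minimal sketch): repeatedly collapse
    the shortest edge, replacing both endpoints by their midpoint."""
    verts = [np.asarray(v, dtype=float) for v in vertices]
    faces = [tuple(f) for f in faces]
    alive = set(range(len(verts)))
    while len(alive) > target_vertex_count:
        # collect the edges of all surviving triangles
        edges = set()
        for a, b, c in faces:
            for p, q in ((a, b), (b, c), (c, a)):
                edges.add((min(p, q), max(p, q)))
        if not edges:
            break
        # pick the shortest edge and collapse it into its midpoint
        u, v = min(edges, key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
        verts[u] = 0.5 * (verts[u] + verts[v])
        alive.discard(v)
        # remap the removed vertex and drop degenerate triangles
        faces = [g for g in
                 (tuple(u if i == v else i for i in f) for f in faces)
                 if len(set(g)) == 3]
    return verts, faces, alive

# Collapse a two-triangle square patch down to a single triangle.
verts_out, faces_out, alive = simplify(
    [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
    [(0, 1, 2), (0, 2, 3)], target_vertex_count=3)
```

Production simplifiers (e.g. quadric error metrics) pick collapses by geometric error rather than edge length, but the bookkeeping is the same shape.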


Model-Independent Facial Animation Tool (모델 독립적 얼굴 표정 애니메이션 도구)

  • 이지형;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.193-196 / 1999
  • Generating human facial expressions is one of the classic topics in computer graphics. Although much related research has been done, most of it provides facial expression animation only for a specific model, because facial expressions require facial muscle information and other auxiliary data that are tied to a particular 3D face model. In this paper, we propose a tool that sets up muscles on a general 3D face model and edits the related information, enabling expression animation across a variety of 3D face models.


Realistic Facial Expression Animation and 3D Face Synthesis (실감 있는 얼굴 표정 애니메이션 및 3차원 얼굴 합성)

  • 한태우;이주호;양현승
    • Science of Emotion and Sensibility / v.1 no.1 / pp.25-31 / 1998
  • With the development of computer hardware and multimedia technology, the need for advanced interfaces using multimedia input/output devices has emerged, and the demand for realistic facial animation that supports a friendly user interface is increasing. In this paper, we animate facial expressions, which convey a person's internal state well, using a 3D model. To add realism, we deform the 3D face model using real face images and perform texture mapping with face images captured from several directions. To animate facial expressions with the deformed 3D model, we use a modified version of Waters' anatomy-based muscle model, and we synthesize the six representative expressions proposed by Ekman.
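In Waters-style muscle models, a contraction pulls nearby skin vertices toward the muscle's bone attachment, with the pull fading out with distance. The sketch below is a heavily simplified stand-in (the full model also has an angular falloff zone and radial zones):

```python
import numpy as np

def linear_muscle_pull(vertices, head, tail, contraction, influence):
    """Pull vertices toward the muscle head (bone attachment) along the
    muscle direction, with a cosine falloff inside the influence radius.
    A simplified take on a Waters-style linear muscle; not the full model."""
    verts = np.asarray(vertices, dtype=float)
    head = np.asarray(head, dtype=float)
    direction = head - np.asarray(tail, dtype=float)
    direction /= np.linalg.norm(direction)
    out = verts.copy()
    for i, p in enumerate(verts):
        d = np.linalg.norm(p - head)
        if d < influence:
            falloff = np.cos(0.5 * np.pi * d / influence)  # 1 at head, 0 at the radius
            out[i] = p + contraction * falloff * direction
    return out

# One vertex at the attachment point, one outside the influence radius.
moved = linear_muscle_pull([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]],
                           head=[0.0, 0.0, 0.0], tail=[0.0, -1.0, 0.0],
                           contraction=0.2, influence=2.0)
```

Combining several such muscles, each keyed to an expression, is how the six Ekman expressions mentioned above are typically composed.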


Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory / v.32 no.1 / pp.39-48 / 2005
  • In traditional 2D animation, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis applied directly to the given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression, and the effect that preserves the topology of the face model is selected as the best one. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for motion-captured and key-framed facial animations. This paper addresses one part of a broad subject, applying the principles of traditional 2D animation to 3D animation: we show how to incorporate anticipation into 3D facial animation. Animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
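The component grouping above, vertices clustered by similar motion direction via principal component analysis, can be illustrated with a toy two-group split along the dominant motion direction. This is an assumed, minimal stand-in for the paper's actual grouping:

```python
import numpy as np

def group_by_motion(displacements, threshold=0.0):
    """Group vertices by the sign of their projection onto the dominant motion
    direction. displacements: (frames, vertices, 3) per-frame vertex displacements.
    A toy stand-in for the paper's PCA-based component grouping."""
    d = np.asarray(displacements, dtype=float)
    per_vertex = d.mean(axis=0)                       # (vertices, 3) mean motion
    centered = per_vertex - per_vertex.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    principal = vt[0]                                 # first principal direction
    scores = per_vertex @ principal
    return scores > threshold                         # boolean component mask

# Two frames, four vertices: the first two move +x, the last two move -x.
disp = np.zeros((2, 4, 3))
disp[:, :2, 0] = 1.0
disp[:, 2:, 0] = -1.0
mask = group_by_motion(disp)
```

Vertices with opposite motion land in different components, which is the property the anticipation search above relies on.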

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1310-1318 / 2002
  • In this study, we developed a 3D facial modeler and animator that does not use existing methods based on 3D scanners or cameras. Without expensive image-input equipment, 3D models can be created easily from front and side images alone. The system animates 3D facial models through an animation server on the WWW, independent of specific platforms and software, and was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points in the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameter) values. This system can be used to generate an avatar on the WWW.
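An MPEG-4 FAP drives a single feature point along a defined axis, scaled by a face-specific FAPU (Facial Animation Parameter Unit) so the same FAP stream fits differently proportioned faces. The sketch below is illustrative only; the FAP names, axes, and the toy table are assumptions, not the standard's actual tables:

```python
# Illustrative FAP table: real FAP ids, axes, and FAPU definitions come from
# the MPEG-4 FBA specification; these two entries are made-up stand-ins.
FAP_TABLE = {
    "open_jaw":    {"feature_point": "jaw",      "axis": (0.0, -1.0, 0.0), "fapu": "MNS"},
    "raise_l_lip": {"feature_point": "l_corner", "axis": (0.0,  1.0, 0.0), "fapu": "MNS"},
}

def apply_fap(feature_points, fapu_distances, fap_name, fap_value):
    """Displace one feature point by fap_value FAPU units along its axis.
    fapu_distances holds raw face measurements; a FAPU is 1/1024 of one."""
    spec = FAP_TABLE[fap_name]
    scale = fap_value * fapu_distances[spec["fapu"]] / 1024.0
    x, y, z = feature_points[spec["feature_point"]]
    ax, ay, az = spec["axis"]
    feature_points[spec["feature_point"]] = (x + scale * ax, y + scale * ay, z + scale * az)

points = {"jaw": (0.0, -1.0, 0.1), "l_corner": (0.5, 0.0, 0.1)}
fapu = {"MNS": 0.04}                     # mouth-nose separation, in model units
apply_fap(points, fapu, "open_jaw", 512)  # open the jaw by half an MNS
```

The animator described above would stream such FAP values frame by frame and re-render the deformed model.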

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with a non-parametric HT skin color model and template matching, we can detect the facial region efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters which describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed using a Radial Basis Function (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
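The last fitting step above, moving non-feature vertices from the control-point displacements with a Radial Basis Function, can be sketched as Gaussian-kernel RBF interpolation. The kernel and parameters here are assumptions, not the paper's exact choices:

```python
import numpy as np

def rbf_deform(points, controls, displacements, sigma=1.0):
    """Propagate control-point displacements to all points by Gaussian RBF
    interpolation. points: (N, 3); controls: (M, 3); displacements: (M, 3)."""
    points = np.asarray(points, dtype=float)
    controls = np.asarray(controls, dtype=float)
    displacements = np.asarray(displacements, dtype=float)

    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for weights so the interpolant reproduces the control displacements,
    # then evaluate the interpolant at every point.
    weights = np.linalg.solve(kernel(controls, controls), displacements)
    return points + kernel(points, controls) @ weights

# Two control points pulled in opposite directions; the field interpolates them.
controls = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
disp = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])
deformed = rbf_deform(controls, controls, disp, sigma=1.0)
```

Evaluated at the control points themselves, the interpolant reproduces their displacements exactly; nearby non-feature vertices receive a smooth blend.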

3D Face Modeling based on Statistical Model for Animation (애니메이션을 위한 통계적 모델에 기반을 둔 3D 얼굴모델링)

  • Oh, Du-Sik;Kim, Jae-Min;Cho, Seoung-Won;Chung, Sun-Tae
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2008.04a / pp.435-438 / 2008
  • In this paper, we present a method for finding 3D dense correspondences for face modeling based on combining facial feature expressions (Action Units, AUs) for animation. AUs, which represent expressions, emotions, and pronunciation, can be constructed using the statistical method of PCA (Principal Component Analysis); to do so, finding correspondences between the 3D models is essential. In 2D, the major facial feature points can be found with various algorithms, but they alone are not sufficient to represent a 3D face model. To find correspondences between 3D face models, we project each 3D model to 2D using a cylindrical coordinate system, find correspondences by warping between the resulting 2D images, and then map them back to correspondences between the 3D models. This requires less computation than transforming the 3D models themselves and has the advantage of not deforming the original shapes.
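The cylindrical projection used above to turn a 3D face into a 2D image maps each vertex to an (azimuth, height) pair, discarding the radius. A minimal sketch, assuming the face is already aligned with the vertical axis:

```python
import numpy as np

def cylindrical_unwrap(vertices):
    """Project 3D vertices onto a cylinder around the y (vertical) axis:
    u = azimuth angle, v = height. Radius is discarded, as in a texture unwrap."""
    verts = np.asarray(vertices, dtype=float)
    theta = np.arctan2(verts[:, 0], verts[:, 2])  # azimuth around the vertical axis
    return np.stack([theta, verts[:, 1]], axis=1)

# A vertex straight ahead maps to azimuth 0; one to the side maps to pi/2.
uv = cylindrical_unwrap([[0.0, 0.0, 1.0], [1.0, 0.5, 0.0]])
```

Warping is then computed between the two unwrapped 2D images, and because the mapping is invertible per vertex, the matches transfer directly back to 3D correspondences.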
