• Title/Summary/Keyword: 3-D facial model

Search results: 135

Designing and Implementing 3D Virtual Face Aesthetic Surgery System (3D 가상 얼굴 성형 제작 시스템 설계 및 구현)

  • Lee, Cheol-Woong;Kim, Il-Min;Cho, Sae-Hong
    • Journal of Digital Contents Society
    • /
    • v.9 no.4
    • /
    • pp.751-758
    • /
    • 2008
  • The purpose of this study is to implement a 3D face model that resembles the user by means of 3D graphics techniques. The implemented 3D face model is then used to build a 3D Facial Aesthetic Surgery System, which can increase patient satisfaction by comparing the face before and after aesthetic surgery. To design and implement the system, 3D modeling, texture mapping of the skin, and a database for facial data are studied and implemented independently. A detailed adjustment tool is also implemented to reflect the fine details of a face. The implemented 3D Facial Aesthetic Surgery System shows better accuracy, convenience, and satisfaction compared with existing systems.
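
The abstract names texture mapping of the skin as one system component but gives no algorithmic detail. Below is a minimal sketch of one common approach, cylindrical UV projection of a facial photograph onto the mesh vertices; the function name, the use of NumPy, and the projection choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cylindrical_uv(vertices):
    """Assign UV texture coordinates to face-mesh vertices by projecting
    them onto a cylinder around the vertical (y) axis.
    `vertices` is an (N, 3) array of x, y, z positions."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    # Angle around the cylinder maps to the horizontal texture axis.
    theta = np.arctan2(x, z)                      # range [-pi, pi]
    u = (theta + np.pi) / (2.0 * np.pi)           # normalize to [0, 1]
    # Height along the cylinder maps to the vertical texture axis.
    v = (y - y.min()) / (y.max() - y.min() + 1e-8)
    return np.stack([u, v], axis=1)

# Usage: uv = cylindrical_uv(mesh_vertices); the skin photograph is then
# sampled at these (u, v) coordinates when rendering the model.
```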

Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.2
    • /
    • pp.57-61
    • /
    • 2014
  • We present a robust 3D facial reconstruction method that uses a single image generated by a face-specific super-resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model from it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. From the SR image, we extract facial features automatically and reconstruct the 3D facial model, in less than a few seconds, using basis shapes selected adaptively according to facial statistical data. We can thereby provide facial images from various viewpoints that a single camera viewpoint cannot give.
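
The abstract describes reconstructing the 3D model from statistically selected basis shapes but not how their coefficients are obtained. The sketch below shows one standard way to do this, a least-squares fit of basis coefficients to 2D landmarks under an orthographic projection; the array layout and function names are assumptions, and the paper's adaptive basis selection and PME-based SR step are not reproduced.

```python
import numpy as np

def fit_shape_coefficients(landmarks_2d, mean_shape, basis, landmark_idx):
    """Least-squares fit of statistical-basis coefficients so that the
    projected model landmarks match the 2D landmarks from the SR image.
    mean_shape: (N, 3), basis: (K, N, 3), landmarks_2d: (L, 2),
    landmark_idx: indices of the mesh vertices used as landmarks.
    Assumes the model is already aligned and scaled to the image and
    uses a simple orthographic projection (drop z)."""
    target = landmarks_2d - mean_shape[landmark_idx, :2]        # (L, 2)
    A = basis[:, landmark_idx, :2].reshape(len(basis), -1).T    # (2L, K)
    coeffs, *_ = np.linalg.lstsq(A, target.reshape(-1), rcond=None)
    return coeffs

def reconstruct(mean_shape, basis, coeffs):
    """Full 3D face = mean shape + weighted sum of basis shapes."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)
```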

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.2 no.2
    • /
    • pp.120-133
    • /
    • 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we deal with both in this paper. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. The head pose can be recovered regardless of light variations and self-occlusion by updating the template dynamically. In the facial expression synthesis phase, the variations of the major facial feature points of the face images are tracked using optical flow, and the variations are retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
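
The RBF-based local deformation around the major feature points is the step most readers will want to see concretely. Below is a minimal sketch using Gaussian radial basis functions to propagate the tracked feature-point displacements to nearby vertices; the kernel choice, width parameter, and function names are assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_deform(vertices, feature_pts, feature_disp, sigma=10.0):
    """Propagate tracked displacements of major feature points to the
    surrounding vertices with Gaussian radial basis functions.
    vertices: (N, 3), feature_pts: (M, 3), feature_disp: (M, 3)."""
    def gaussian(d2):
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for RBF weights so that the deformation reproduces the
    # tracked displacements exactly at the feature points.
    d2_ff = ((feature_pts[:, None] - feature_pts[None]) ** 2).sum(-1)  # (M, M)
    weights = np.linalg.solve(gaussian(d2_ff), feature_disp)           # (M, 3)

    # Evaluate the interpolated displacement at every mesh vertex.
    d2_vf = ((vertices[:, None] - feature_pts[None]) ** 2).sum(-1)     # (N, M)
    return vertices + gaussian(d2_vf) @ weights
```

Solving the small M-by-M system makes the deformation exact at the tracked feature points while the surrounding (regional) vertices move smoothly, which is the "direct plus indirect" behaviour the abstract describes.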

A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan;Jung, Chul-Hee;Lee, Jae-Yun;Cho, Sun-Young;Lee, Myeong-Won
    • The KIPS Transactions:PartA
    • /
    • v.17A no.6
    • /
    • pp.281-288
    • /
    • 2010
  • In this paper, we describe the development of a system for generating a 3-dimensional human face using 3D scanned facial data and photo images, and morphing animation. The system comprises a facial feature input tool, a 3-dimensional texture mapping interface, and a 3-dimensional facial morphing interface. The facial feature input tool supports texture mapping and morphing animation - facial morphing areas between two facial models are defined by inputting facial feature points interactively. The texture mapping is done first by means of three photo images - a front and two side images - of a face model. The morphing interface allows for the generation of a morphing animation between corresponding areas of two facial models after texture mapping. This system allows users to interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
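
As a concrete illustration of the morphing animation between two facial models whose vertices are in correspondence, here is a minimal sketch of linear vertex interpolation over time; the generator-style API is an assumption for illustration, and the system's interactively defined morphing areas are not modeled.

```python
import numpy as np

def morph_frames(src_vertices, dst_vertices, n_frames=30):
    """Generate in-between meshes for a morphing animation between two
    facial models with the same vertex ordering (e.g. established via
    the interactively input feature points)."""
    assert src_vertices.shape == dst_vertices.shape
    for i in range(n_frames + 1):
        t = i / n_frames                       # 0 -> source, 1 -> target
        yield (1.0 - t) * src_vertices + t * dst_vertices

# Usage: for frame in morph_frames(face_a, face_b): render(frame)
```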

Web-based 3D Face Modeling System (웹기반 3차원 얼굴 모델링 시스템)

  • 김응곤;송승헌
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.5 no.3
    • /
    • pp.427-433
    • /
    • 2001
  • This paper proposes a web-based 3-dimensional face modeling system that builds a realistic facial model efficiently without the 3D scanner or camera used in traditional methods. Without expensive image-input equipment, 3D models can be created easily using only front and side images. 3D facial models can be made by connecting to the facial modeling server on the WWW, independent of specific platforms and software. The system is implemented using the Java 3D API, which provides the functions and conveniences of established graphics libraries. It has a client/server architecture consisting of a user connection module and a 3D facial model creation module. Clients connect to the facial modeling server, input two facial photographs, detect the feature points, and then create a 3D facial model by modifying a generic facial model with those points, using only a web browser.
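
Before the generic model is deformed, 3D feature coordinates have to be assembled from the front and side photographs. The original system is implemented with the Java 3D API; the sketch below uses Python/NumPy purely for illustration and assumes the two images have already been scaled to the same face height.

```python
import numpy as np

def features_to_3d(front_pts, side_pts):
    """Combine 2D feature points picked on a front image (x, y) and a
    side image (z, y) into rough 3D coordinates used to deform the
    generic facial model. Both arrays are (N, 2), with corresponding
    rows referring to the same facial feature."""
    x = front_pts[:, 0]
    y = (front_pts[:, 1] + side_pts[:, 1]) / 2.0   # average the shared axis
    z = side_pts[:, 0]
    return np.stack([x, y, z], axis=1)
```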

A 3D facial Emotion Editor Using a 2D Comic Model (2D 코믹 모델을 이용한 3D 얼굴 표정 에디터)

  • 이용후;김상운;청목유직
    • Proceedings of the IEEK Conference
    • /
    • 2000.06d
    • /
    • pp.226-229
    • /
    • 2000
  • A 2D comic model, a comic-style line-drawing model having only eyebrows, eyes, a nose, and a mouth, can generate facial expressions with far fewer points than a 3D model. In this paper we propose a 3D facial emotion editor that uses a 2D comic model, where emotional expressions are represented with the action units (AUs) of FACS. Experiments show that the proposed method could be used efficiently for intelligent sign-language communication between avatars of different languages in Internet cyberspace.
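
To make the idea of driving a sparse 2D comic model with FACS action units concrete, here is a minimal sketch; the action-unit subset, control-point names, and offsets are hypothetical and are not taken from the paper.

```python
# Hypothetical mapping from a few FACS action units to pixel offsets of the
# comic model's control points (image coordinates, y grows downward).
AU_OFFSETS = {
    "AU1_inner_brow_raiser": {"brow_inner_l": (0, -4), "brow_inner_r": (0, -4)},
    "AU12_lip_corner_puller": {"mouth_corner_l": (-3, -2), "mouth_corner_r": (3, -2)},
    "AU26_jaw_drop": {"mouth_bottom": (0, 6)},
}

def apply_action_units(control_points, active_aus, intensity=1.0):
    """Displace the 2D comic model's control points for the active AUs.
    control_points maps point names to (x, y) image coordinates."""
    result = dict(control_points)
    for au in active_aus:
        for name, (dx, dy) in AU_OFFSETS.get(au, {}).items():
            x, y = result[name]
            result[name] = (x + intensity * dx, y + intensity * dy)
    return result
```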

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments on facial expression recognition using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
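
The core of the tracking loop is predicting 2D landmark positions from the updated 3D face model and the current head pose. A minimal sketch of that projection step with a pinhole camera is given below; the camera parameters and function signature are assumptions, and the 2D tracker and landmark update steps are omitted.

```python
import numpy as np

def project_landmarks(landmarks_3d, rotation, translation, focal, center):
    """Predict 2D landmark positions from the 3D face model and the
    current head pose (rotation matrix R, translation t) with a simple
    pinhole camera of focal length `focal` and principal point `center`."""
    cam = landmarks_3d @ rotation.T + translation   # (N, 3) in camera frame
    u = focal * cam[:, 0] / cam[:, 2] + center[0]
    v = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([u, v], axis=1)

# The predicted (u, v) positions initialize the 2D tracker, whose results
# are then used to correct the 3D landmarks in the next iteration.
```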

3D Facial Modeling and Synthesis System for Realistic Facial Expression (자연스러운 표정 합성을 위한 3차원 얼굴 모델링 및 합성 시스템)

  • 심연숙;김선욱;한재현;변혜란;정창섭
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.1-10
    • /
    • 2000
  • Research on realistic facial animation, in which the face serves as a communication channel between human and computer, has grown recently. The human face is the part of the body we use to recognize individuals and an important channel for conveying inner states such as emotion. To provide an intelligent interface, computer facial animation should look human-like when talking and expressing itself, so recent facial modeling and animation research has focused on realism. In this article, we propose a facial modeling and animation method for realistic facial synthesis. A 3D facial model for an arbitrary face can be made by adapting a generic facial model; for a more accurate and realistic face, we build a Korean generic facial model. Facial synthesis can also be manipulated based on the physical characteristics of real facial muscle and skin. Many applications, such as teleconferencing, education, and movies, can be developed from this work.
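
The abstract mentions synthesis based on the physical characteristics of facial muscle and skin without detailing the deformation. Below is a simplified sketch in the spirit of a linear (vector) muscle model, where vertices inside a cone around the muscle vector are pulled toward the attachment point with angular and radial falloff; the falloff terms and parameters are illustrative, not the paper's formulation.

```python
import numpy as np

def linear_muscle(vertices, head, tail, contraction, zone_angle=np.pi / 4):
    """Pull mesh vertices inside a cone around the muscle vector
    (head -> tail) toward the muscle head, with angular falloff away
    from the muscle axis and radial falloff toward the insertion."""
    axis = tail - head
    length = np.linalg.norm(axis)
    axis = axis / length

    to_v = vertices - head
    dist = np.linalg.norm(to_v, axis=1) + 1e-8
    cos_a = (to_v @ axis) / dist
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))

    inside = (angle < zone_angle) & (dist < length)
    angular = np.cos(angle[inside] * (np.pi / 2) / zone_angle)  # 1 on axis, 0 at cone edge
    radial = np.cos(dist[inside] / length * (np.pi / 2))        # 1 at head, 0 at tail
    factor = contraction * angular * radial

    out = vertices.copy()
    out[inside] -= factor[:, None] * (to_v[inside] / dist[inside, None])
    return out
```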

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions and therefore require additional work to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
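
The paper's key detection idea is representing the skin chrominance distribution as a linear function of the Hue-Tint components. A minimal sketch of classifying pixels against such a linear relation is shown below; the coefficients and margin are placeholders to be fitted from training skin samples, not the values used in the paper.

```python
import numpy as np

def skin_mask(hue, tint, a=0.8, b=0.1, margin=0.05):
    """Classify pixels as skin when they lie within `margin` of the linear
    relation tint ~ a * hue + b in the Hue-Tint chrominance plane.
    `hue` and `tint` are same-shaped arrays of normalized chrominance values."""
    predicted_tint = a * hue + b
    return np.abs(tint - predicted_tint) < margin

# Usage: mask = skin_mask(hue_channel, tint_channel); the largest connected
# component of `mask` can then be taken as the facial region.
```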

A Study on Creation of 3D Facial Model Using Fitting by Edge Detection based on Fuzzy Logic (퍼지논리의 에지검출에 의한 정합을 이용한 3차원 얼굴모델 생성)

  • Lee, Hye-Jung;Kim, Ju-Ri;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.14 no.12
    • /
    • pp.2681-2690
    • /
    • 2010
  • This paper proposes a 3D facial modeling system that does not require a 3D scanner, a camera, or expensive software, enabling natural facial modeling with reduced cost and effort. From any 2D frontal face image, it detects the edges of the facial components using fuzzy-logic-based edge detection, and the detected edges allow the fitting positions of the 3D standard face model to be mapped more accurately. After fitting, which connects the control points on the detected edges of the 2D image with the mesh of the 3D standard face model, the system generates the 3D face model more easily through floating, flexible control and texture mapping.
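
As an illustration of fuzzy-logic edge detection on a frontal face image, here is a toy sketch in which intensity differences are fuzzified and combined with a fuzzy OR; the membership function and thresholds are assumptions, not the ones proposed in the paper.

```python
import numpy as np

def fuzzy_edge_map(gray):
    """Toy fuzzy-rule edge detector: horizontal and vertical intensity
    differences are fuzzified into a 'large' membership, and a pixel's
    edge degree is the strength of the rule
    'IF dx is large OR dy is large THEN edge'."""
    gray = np.asarray(gray, dtype=float)
    dx = np.abs(np.diff(gray, axis=1, append=gray[:, -1:]))
    dy = np.abs(np.diff(gray, axis=0, append=gray[-1:, :]))

    def large(d, lo=10.0, hi=40.0):
        # Piecewise-linear membership: 0 below lo, rising to 1 at hi.
        return np.clip((d - lo) / (hi - lo), 0.0, 1.0)

    # Fuzzy OR (max) of the two antecedents gives the edge degree in [0, 1].
    return np.maximum(large(dx), large(dy))

# Thresholding the fuzzy edge map (e.g. > 0.5) gives edges of the facial
# components to which the 3D standard model's control points are fitted.
```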