• Title/Summary/Keyword: 3차원 얼굴 모델링 (3D face modeling)


Data-driven Facial Animation Using Sketch Interface (스케치 인터페이스를 이용한 데이터 기반 얼굴 애니메이션)

  • Ju, Eun-Jung;Ahn, Soh-Min;Lee, Je-Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.13 no.3
    • /
    • pp.11-18
    • /
    • 2007
  • Creating stylistic facial animation is one of the most important problems in character animation. Traditionally, facial animation is either created manually by animators or captured using motion capture systems, but both processes are difficult and labor-intensive. In this work, we present an intuitive, easy-to-use, sketch-based user interface system that facilitates the creation of facial animation, together with a key-frame interpolation method that uses facial capture data. Our system allows users to create expressive speech facial animation easily and rapidly.

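The key-frame interpolation over facial capture data that the abstract mentions can be sketched as follows. This is an illustrative outline only, not the paper's code; the function name and parameter layout are assumptions:

```python
# Illustrative sketch: linear key-frame interpolation of a facial expression
# parameter vector, the kind of blending used when replaying captured data.
import numpy as np

def interpolate_keyframes(times, keyframes, t):
    """Linearly interpolate a parameter vector at time t.

    times     -- sorted list of key-frame times
    keyframes -- array of shape (len(times), n_params)
    """
    keyframes = np.asarray(keyframes, dtype=float)
    if t <= times[0]:
        return keyframes[0]
    if t >= times[-1]:
        return keyframes[-1]
    i = np.searchsorted(times, t) - 1              # segment containing t
    a = (t - times[i]) / (times[i + 1] - times[i]) # position inside segment
    return (1 - a) * keyframes[i] + a * keyframes[i + 1]
```

With keys at t = 0 and t = 1, querying t = 0.5 returns the midpoint of the two parameter vectors; times outside the key range clamp to the first or last key.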

An Algorithm for Converting 2D Face Image into 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun;Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.4
    • /
    • pp.41-48
    • /
    • 2015
  • Recently, the spread of 3D printers has been increasing the demand for 3D models. However, creating a 3D model requires a trained specialist using specialized software. This paper presents an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and the foreground are separated in the photo, and a predetermined number of vertices are placed on the separated foreground 2D image at equal intervals. The arranged vertex locations are then extended into three dimensions using the gray level of the pixel at each vertex and the characteristics of the eyebrows and nose of a normal human face. The foreground/background separation uses the edge information of the silhouette, and the AdaBoost algorithm with Haar-like features is employed to find the locations of the eyes and nose. The 3D models obtained with this algorithm are good enough for 3D printing, even though a little manual touch-up may be required. The algorithm will be useful for providing 3D content in conjunction with the spread of 3D printers.
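The vertex-lifting step described above — placing vertices at equal intervals on the foreground and extending them into 3D using gray levels — can be sketched roughly like this. It is an illustrative toy, not the paper's algorithm; the step size and depth scale are assumptions:

```python
# Illustrative sketch: place a regular grid of vertices on a foreground face
# image and lift each vertex into 3D using the gray level at that pixel as a
# crude depth cue (brighter pixel -> larger z).
import numpy as np

def lift_vertices(gray, step=4, depth_scale=0.1):
    """gray: 2D array of gray levels (0-255). Returns an (N, 3) vertex array."""
    h, w = gray.shape
    verts = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            z = gray[y, x] * depth_scale   # depth from intensity
            verts.append((x, y, z))
    return np.array(verts, dtype=float)
```

A real pipeline would restrict the grid to the segmented foreground and correct the depths around the eyes and nose, as the abstract describes.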

3D Facial Modeling and Expression Synthesis using muscle-based model for MPEG-4 SNHC (MPEG-4 SNHC을 위한 3차원 얼굴 모델링 및 근육 모델을 이용한 표정합성)

  • 김선욱;심연숙;변혜란;정찬섭
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1999.11a
    • /
    • pp.368-372
    • /
    • 1999
  • The newly standardized multimedia file format MPEG-4 covers not only natural video and audio but also synthetic graphics and sound. In particular, it includes the FDP and FAP standards for modeling and animating avatars in videoconferencing and virtual environments. Using the FDP and FAP defined in MPEG-4, this paper employs a more refined generic model to generate face models natural and realistic enough to serve as avatars in videoconferencing or virtual environments. A muscle-based model is applied to this generic model, and, for more precise expression generation, a muscle editor is implemented that can place muscles at arbitrary positions, together with an animation editing program that performs facial expression animation.

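A muscle-based deformation of the kind the abstract describes can be illustrated with a simplified displacement rule: vertices near a muscle's attachment point are pulled toward it, with a smooth falloff by distance. This is a generic sketch, not the paper's muscle editor; the falloff shape and parameters are assumptions:

```python
# Illustrative sketch: pull each vertex within an influence radius toward a
# muscle attachment point, scaled by a cosine falloff of its distance.
import math

def apply_muscle(vertices, attach, contraction, radius):
    """vertices: list of (x, y, z); attach: (x, y, z) attachment point."""
    out = []
    for v in vertices:
        d = math.dist(v, attach)
        if d == 0 or d >= radius:
            out.append(v)                  # outside influence: unchanged
            continue
        falloff = 0.5 * (1 + math.cos(math.pi * d / radius))
        k = contraction * falloff / d      # displacement of length contraction*falloff
        out.append(tuple(vi + k * (ai - vi) for vi, ai in zip(v, attach)))
    return out
```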

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.1_2
    • /
    • pp.153-163
    • /
    • 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for transforming the pose of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for a 3D geometric model, we first build a standard mesh set of a certain person for several head poses: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed into different views using the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate opening and closing effects for the eyes and mouth. To evaluate the transformation performance, we capture dynamic images of 10 persons rotating their heads horizontally and measure the location error of 14 main features between corresponding original and transformed facial images, i.e., the average difference between the distances from the center of both eyes to each feature point in the two images. The resulting average feature-location error is about 7.0% of the distance from the center of both eyes to the center of the mouth.
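The degenerate case such a warp must tolerate — a mesh triangle whose vertices overlap or invert after warping — can be detected by a change in the sign of the triangle's orientation. A minimal sketch with hypothetical helper names, not the paper's implementation:

```python
# Illustrative sketch: a warped triangle has "inverted" when its signed area
# changes sign between the source mesh and the destination mesh.

def signed_area(p0, p1, p2):
    """Twice the signed area of 2D triangle (p0, p1, p2)."""
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
            - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def inverted_triangles(src_tris, dst_tris):
    """Indices of triangles whose orientation flips after warping."""
    bad = []
    for i, (s, d) in enumerate(zip(src_tris, dst_tris)):
        if signed_area(*s) * signed_area(*d) < 0:
            bad.append(i)
    return bad
```

A tolerant warp would either skip or locally repair the flagged triangles instead of producing fold-over artifacts.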

Study of Model Based 3D Facial Modeling for Virtual Reality (가상현실에 적용을 위한 모델에 근거한 3차원 얼굴 모델링에 관한 연구)

  • 한희철;권중장
    • Proceedings of the IEEK Conference
    • /
    • 2000.11c
    • /
    • pp.193-196
    • /
    • 2000
  • In this paper, we present a model-based 3D facial modeling method for virtual reality applications that uses only a single frontal face photograph. We extract facial features from the photograph and deform the mesh of a generic 3D model according to those features, then apply texture mapping for greater similarity. Experiments show that this modeling technique is a useful method for movies, virtual reality applications, games, the clothing industry, and 3D video conferencing.


A Method of Integrating Scan Data for 3D Face Modeling (3차원 얼굴 모델링을 위한 스캔 데이터의 통합 방법)

  • Yoon, Jin-Sung;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.6
    • /
    • pp.43-57
    • /
    • 2009
  • Integrating 3D data acquired from multiple views is one of the most important techniques in 3D modeling. However, existing integration methods are sensitive to registration errors and surface scanning noise. In this paper, we propose an integration algorithm that uses local surface topology. We first find all boundary vertex pairs satisfying a prescribed geometric condition in the areas between neighboring surfaces, and then separate those areas into several regions using the boundary vertex pairs. We next compute a best-fitting plane for each region through PCA (Principal Component Analysis); these planes are used to produce triangles that fill the empty areas between neighboring surfaces. Since each region between neighboring surfaces is integrated using local surface topology, the proposed method is robust to registration errors and surface scanning noise. We also propose a method of integrating textures using a parameterization technique: we first transform the integrated surface into the initial viewpoint of each source surface and project each texture onto the transformed surface; the textures are then assigned to the parameter domain of the integrated surface and merged along the seam lines between surfaces. Experimental results show that the proposed method is effective for face modeling.
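The PCA plane-fitting step the abstract relies on can be sketched directly: the best-fitting plane of a point region passes through the centroid, and its normal is the eigenvector of the covariance matrix with the smallest eigenvalue. An illustrative sketch, not the paper's implementation:

```python
# Illustrative sketch: fit a plane to a region of 3D surface points with PCA.
import numpy as np

def fit_plane_pca(points):
    """points: (N, 3) array. Returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    return centroid, normal
```

For points sampled from the plane z = 0, the recovered normal is (0, 0, ±1), up to the usual sign ambiguity of eigenvectors.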

Human-like Fuzzy Lip Synchronization of 3D Facial Model Based on Speech Speed (발화속도를 고려한 3차원 얼굴 모형의 퍼지 모델 기반 립싱크 구현)

  • Park Jong-Ryul;Choi Cheol-Wan;Park Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.416-419
    • /
    • 2006
  • This paper proposes a new lip-sync method that takes speech speed into account. From a database built through experiments, the relationship between speech speed and mouth shape and size is established using a fuzzy algorithm. Existing lip-sync methods do not consider speech speed, so they show the same lip shape and size regardless of how fast the speaker talks. By applying the relationship between speech speed and lip shape, the proposed method produces lip sync closer to that of a real human. Using fuzzy theory also makes it possible to model ambiguous variations in mouth size and shape that cannot be expressed with numerical precision. To demonstrate this, we compare the proposed lip-sync algorithm with the existing method, build a 3D graphics platform, and apply the method to a real application.

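A fuzzy mapping from speech speed to mouth opening, of the general kind the abstract describes, can be sketched with triangular membership functions and weighted-average defuzzification. The rule values here are invented for illustration and are not the paper's database:

```python
# Illustrative sketch: fuzzy rules mapping speech rate to a mouth-opening
# scale, defuzzified by a weighted average of rule outputs.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mouth_scale(speech_rate):
    """speech_rate in syllables/sec -> mouth-opening scale in [0, 1]."""
    rules = [  # (membership of speech_rate, output scale for that rule)
        (tri(speech_rate, 0.0, 2.0, 4.0), 1.0),   # slow   -> full opening
        (tri(speech_rate, 2.0, 4.0, 6.0), 0.7),   # normal -> moderate opening
        (tri(speech_rate, 4.0, 6.0, 8.0), 0.4),   # fast   -> reduced opening
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.7          # fallback: normal opening
```

A rate between two rule peaks blends their outputs, which is how fuzzy modeling captures the "ambiguous" intermediate mouth shapes the abstract mentions.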

Integrated 3D Skin Color Model for Robust Skin Color Detection of Various Races (강건한 다인종 얼굴 검출을 위한 통합 3D 피부색 모델)

  • Park, Gyeong-Mi;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.5
    • /
    • pp.1-12
    • /
    • 2009
  • The correct detection of skin color is an important preliminary step in face detection and human motion analysis. It is generally performed in three steps: transforming the pixel color to a non-RGB color space, dropping the illuminance component of the skin color, and classifying the pixels with a skin color distribution model. Skin detection depends on various factors such as the color space, the presence of illumination, and the skin modeling method. In this paper, we propose a 3D skin color model that can segment the skin colors of several ethnic groups from images with various illumination conditions and complicated backgrounds. The proposed model is formed from the Y, Cb, and Cr components obtained by transforming pixel colors into the YCbCr color space. To segment the skin colors of several ethnic groups together, we first create a skin color model for each ethnic group and then merge the models using their skin color probabilities. Furthermore, the proposed model defines several grades of skin color area, which helps classify proper skin color regions even with small training data.
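The pipeline above — transform to YCbCr, then score pixels against per-group skin models — can be sketched as follows. The Gaussian parameters are placeholders, not the paper's learned model; only the BT.601 conversion is standard:

```python
# Illustrative sketch: convert RGB to YCbCr (ITU-R BT.601, full range) and
# score a pixel against per-ethnic-group skin models in the (Cb, Cr) plane.
import math

def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def skin_score(r, g, b, groups):
    """groups: list of (cb_mean, cr_mean, sigma, weight). Best per-group score."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    best = 0.0
    for cb_m, cr_m, sigma, weight in groups:
        d2 = ((cb - cb_m) ** 2 + (cr - cr_m) ** 2) / (2 * sigma ** 2)
        best = max(best, weight * math.exp(-d2))   # Gaussian likelihood
    return best
```

Taking the best score across group models is one simple way to merge per-group distributions; thresholding the score at several levels would give the graded skin areas the abstract describes.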

Viewpoint interpolation of face images using an ellipsoid model (타원체 MODEL을 사용한 얼굴 영상의 시점합성에 관한 연구)

  • Yoon, Na-Ree;Lee, Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.6C
    • /
    • pp.572-578
    • /
    • 2007
  • To establish eye contact in video teleconferencing, it is necessary to synthesize a front-view image by viewpoint interpolation: we find the viewing direction of a user and interpolate an image seen from that viewpoint, which results in a face image observed from the front. Previous research falls into two categories: image-based methods and model-based methods. The former are simple to compute but show limited performance for complex objects; the latter are robust to noise but computationally expensive. We propose to approximate face images as ellipses, match them to build an ellipsoid, and then synthesize a new image from a given virtual camera position. Various experiments show that the method is simple and robust.
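The geometry behind such an ellipsoid proxy can be sketched for a single image point: lift it onto the front of the ellipsoid, rotate the head, and re-project. This is an illustrative sketch under an assumed orthographic camera, not the paper's method:

```python
# Illustrative sketch: back-project an image point onto the ellipsoid
# x^2/a^2 + y^2/b^2 + z^2/c^2 = 1, rotate about the vertical (y) axis, and
# re-project orthographically to get its position in the synthesized view.
import math

def lift_to_ellipsoid(x, y, a, b, c):
    """Lift image point (x, y) onto the front surface of the ellipsoid."""
    s = 1.0 - (x / a) ** 2 - (y / b) ** 2
    if s < 0:
        return None                        # outside the face's ellipse outline
    return (x, y, c * math.sqrt(s))

def synthesize_view(x, y, angle_deg, a, b, c):
    """New image position of (x, y) after rotating the head by angle_deg."""
    p = lift_to_ellipsoid(x, y, a, b, c)
    if p is None:
        return None
    t = math.radians(angle_deg)
    x2 = math.cos(t) * p[0] + math.sin(t) * p[2]   # rotation about y axis
    return (x2, p[1])                              # drop z: orthographic view
```

Rotating the center of a unit sphere's frontal point by 90 degrees moves it to the silhouette, as expected.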

Korean Talking Animation for User Interface Agent Environment (사용자 인터페이스 에이젼트 환경을 위한 국어 발음 애니메이션)

  • Choe, Seung-Keol;Lee, Mi-Seung;Kim, Woong-Soon
    • Annual Conference on Human and Language Technology
    • /
    • 1996.10a
    • /
    • pp.284-297
    • /
    • 1996
  • User interface agents that let people converse with a computer naturally and humanly, and that actively present intelligent answers to a user's requests, are being actively studied. Human communication channels such as speech, pen, and gesture recognition are being implemented as computer input methods for the sake of user convenience. This paper targets a user interface in which the computer is treated as a black box and the user communicates with an intelligent 3D graphical face agent on its surface. Since the computer has evolved from a tool for solving simple problems into an assistant that provides large amounts of information through diverse media, this can be considered a more active approach. As a base technology for such an agent, we studied facial animation that pronounces Korean. As pronunciation data, the positions of feature points on the lips during lip movement were measured with a digital camera, and the data were entered through a modeling system we developed. Because B-spline surfaces, which can represent complex free-form surfaces with little data, were used as the base representation, the amount of animation data could also be reduced. Finally, by examining how lip shape changes over the pronunciation-time sequence of Korean phonemes, we implemented a pronunciation animation that synchronizes lip movement with the spoken sound.

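The B-spline representation credited above for keeping animation data small can be illustrated with a single uniform cubic segment: four control points define a smooth curve piece via fixed blending polynomials. A generic sketch, not the paper's lip model:

```python
# Illustrative sketch: evaluate one segment of a uniform cubic B-spline from
# four control points; the four basis polynomials always sum to 1.

def bspline_point(p0, p1, p2, p3, t):
    """Point on the uniform cubic B-spline segment at t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

Because a dense lip surface can be regenerated from a small grid of control points, only the control points need to be stored per animation frame, which is the data reduction the abstract describes.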