• Title/Summary/Keyword: 3D Expression Method

Expression of Realistic 3-Dimensional Human Illustrations Using Computer Graphics (컴퓨터그래픽스를 이용한 사실적인 3D 인물 일러스트레이션의 표현)

  • Kim, Hoon
    • Archives of design research
    • /
    • v.19 no.1 s.63
    • /
    • pp.79-88
    • /
    • 2006
  • A human face is a visual symbol of identity. Each person's distinct face is critical information differentiating one person from another, and it relates directly to individual identity. Looking back over human history, changes in how the face was perceived led to changes in the media of expression and communication, which in turn brought many changes to how faces are depicted. Yet there has been no period in which people paid more attention to the face than the present. Technically, the advent of computer graphics opened a new turning point in expressing the human face. In particular, a visual image that can be produced, stored, and transferred digitally has no limitation in time and space, and its importance in communication keeps growing; among such visual information, digital face images are finding ever more applications. A 3D (3-dimensional) face built with computer graphics can therefore be produced without specialized techniques, much like assembling puzzle pieces consisting of the shape of each facial part, a texture map, and so on (a minimal sketch of this part-assembly idea follows this entry). This study presents a method with which a general visual designer can effectively express a 3D face, by examining each production step of 3D face expression and by visualizing a case study based on those results.
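
As a rough illustration of the part-assembly workflow this abstract describes, the sketch below composes a head from separately modeled part meshes and applies a single texture map. It is a minimal sketch only: the file names, the use of the `trimesh` library, and the UV-mapped texture are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: assembling a 3D face from part meshes plus a texture map.
# File names and the use of the trimesh library are assumptions for illustration.
import trimesh
from PIL import Image

# Separately modeled facial parts, analogous to the "puzzle pieces" in the paper.
part_files = ["skull.obj", "eyes.obj", "nose.obj", "mouth.obj", "ears.obj"]
parts = [trimesh.load(f, force="mesh") for f in part_files]

# Merge the parts into one head mesh.
head = trimesh.util.concatenate(parts)

# Apply a UV-mapped texture (assumes the OBJ files carry UV coordinates).
texture = Image.open("face_texture.png")
head.visual = trimesh.visual.TextureVisuals(uv=head.visual.uv, image=texture)

head.show()  # quick interactive preview
```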

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1478-1484
    • /
    • 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The expression space is created from about 2,400 frames of captured facial expressions. To represent the state of each expression, we use a distance matrix holding the distances between pairs of feature points on the face; the set of distance matrices serves as the expression space. The facial expression of the 3D avatar is controlled in real time as the user navigates this space. To help this process, we visualize the expression space in 2D using a Principal Component Analysis (PCA) projection (a minimal sketch of such a projection follows this entry). To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and this paper evaluates the results.
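
The PCA projection the abstract mentions can be pictured as follows: each expression frame is flattened from its feature-point distance matrix into a state vector, and PCA reduces the set to 2D for on-screen navigation. This is a minimal sketch assuming synthetic data and scikit-learn; the paper's own feature points and matrix construction are not reproduced here.

```python
# Minimal sketch: projecting expression-state distance matrices to 2D with PCA.
# Synthetic data and scikit-learn usage are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_points = 2400, 20          # ~2400 expression frames, 20 feature points

# One distance matrix per frame: pairwise distances between facial feature points.
points = rng.normal(size=(n_frames, n_points, 3))
diffs = points[:, :, None, :] - points[:, None, :, :]
dist_matrices = np.linalg.norm(diffs, axis=-1)        # shape (2400, 20, 20)

# Flatten each symmetric matrix (upper triangle) into a state vector.
iu = np.triu_indices(n_points, k=1)
states = dist_matrices[:, iu[0], iu[1]]               # shape (2400, 190)

# Project the expression space to 2D for navigation.
xy = PCA(n_components=2).fit_transform(states)
print(xy.shape)                                        # (2400, 2)
```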

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system providing robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical problems to solve for realistic facial animation. In this work, we developed an integrated animation system that handles 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model combined with template matching detects the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced with the optical flow method. For facial expression cloning, we use a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters describing the variation of the facial features are obtained from a geometrically transformed frontal head pose image. Finally, expression cloning is performed in two fitting steps: the control points of the 3D model are displaced by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF), as sketched after this entry. Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video.
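
The RBF step at the end of this pipeline can be illustrated with a small sketch: displacements known at the control points are interpolated smoothly to the surrounding non-feature vertices. This is a minimal sketch assuming SciPy's `RBFInterpolator` and synthetic control points; it stands in for, but is not, the paper's implementation.

```python
# Minimal sketch: propagating control-point displacements to nearby vertices
# with Radial Basis Function (RBF) interpolation. Synthetic data and SciPy
# usage are assumptions for illustration.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Control points on the face mesh and their animation-parameter displacements.
control_pts = rng.uniform(-1, 1, size=(15, 3))
control_disp = rng.normal(scale=0.05, size=(15, 3))

# Fit one RBF system for the vector-valued displacements.
rbf = RBFInterpolator(control_pts, control_disp, kernel="thin_plate_spline")

# Deform the non-feature vertices around the control points.
vertices = rng.uniform(-1, 1, size=(500, 3))
deformed = vertices + rbf(vertices)
print(deformed.shape)  # (500, 3)
```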

A Study of the Method for Building up 3D Right Objects

  • Lee, Woo-Jin;Suh, Yong-Cheol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.6
    • /
    • pp.527-536
    • /
    • 2015
  • Recently, demand for three-dimensional spatial information has been increasing continuously, and studies on constructing indoor and outdoor spatial data in particular have been conducted actively. However, the use of spatial information has not spread widely in the private sector; it is mostly confined to government offices. This study therefore deals with the creation of three-dimensional right objects (objects representing legal rights in space) and techniques for expressing them, aiming to let three-dimensional right spaces be created and displayed more conveniently in a particular system or open platform and thereby vitalize private-sector use. Unlike existing maps, in which a building is simply iconified and displayed with a plain text label, this study proposes a method of extracting data from the outer boundary of a building at the relevant floor level based on an existing structured three-dimensional building model, a method of supplying two-dimensional right-space objects in XML, and a way of expressing them efficiently as three-dimensional right objects (a minimal sketch of the XML-to-3D step follows this entry). In addition, this study discusses a way of creating right objects in which an owner, given a cross section of a building, directly produces or reproduces the detailed right objects so as to utilize the three-dimensional data (right objects) produced through this study.
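
One way to picture the XML-to-3D step is to parse a 2D right-space polygon with floor heights from XML and extrude it into a 3D volume. The element names and the extrusion-by-height scheme below are assumptions for illustration, not the paper's actual schema.

```python
# Minimal sketch: turning a 2D right-space polygon delivered in XML into a 3D
# right object by extruding it between floor heights. The XML schema here is
# a made-up illustration, not the paper's actual format.
import xml.etree.ElementTree as ET

XML = """
<rightObject id="unit-101">
  <lowerHeight>0.0</lowerHeight>
  <upperHeight>2.8</upperHeight>
  <footprint>
    <pt x="0" y="0"/><pt x="10" y="0"/><pt x="10" y="8"/><pt x="0" y="8"/>
  </footprint>
</rightObject>
"""

root = ET.fromstring(XML)
z0 = float(root.findtext("lowerHeight"))
z1 = float(root.findtext("upperHeight"))
ring = [(float(p.get("x")), float(p.get("y"))) for p in root.iter("pt")]

# Extrude: bottom ring at z0, top ring at z1, one wall quad per footprint edge.
bottom = [(x, y, z0) for x, y in ring]
top = [(x, y, z1) for x, y in ring]
walls = [
    (bottom[i], bottom[(i + 1) % len(ring)], top[(i + 1) % len(ring)], top[i])
    for i in range(len(ring))
]
print(len(walls), "wall quads between heights", z0, "and", z1)
```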

A Study on the Application of 3D Digital Technology to Producing Cyber Fashion Gallery (3D 디지털 기술을 활용한 패션 갤러리 제작에 관한 연구)

  • Kim, Ji-Eon
    • The Research Journal of the Costume Culture
    • /
    • v.15 no.3 s.68
    • /
    • pp.446-460
    • /
    • 2007
  • This study shows how digital technology can be adopted as a practical method in the fashion design process, examining virtual simulation and a cyber fashion gallery based on virtual reality. It proposes a 3D fashion design simulation in virtual space using the 3D Studio Max, Poser, and Photoshop programs along the fashion design process. The main design concept is "temporary bridge", derived from the rainbow. Six fashion designs are presented under three sub-themes of the main concept, varying color and texture through 3D simulation. The results of this study are as follows: 1. A Cyber Fashion Gallery was produced in virtual space, in the form of a CD-ROM title and a web title, using Macromedia Director 8.5, Macromedia Flash, and Sound Forge; this widens the field of expression for fashion exhibition beyond the restrictions of time and space. 2. Clothes modelling tools make it easy to try various textiles and patterns on a dynamic 3D virtual mannequin before the clothes are made; digital technology can render images with changed color and texture, especially new materials, multi-finished materials, brilliant materials, and so on, so this study can serve as a tool for the study of fashion coordination. 3. The Cyber Fashion Gallery consists of gallery, story, painting, symbolism, example, image, and quit menus. This study widens the range of clothing expression through digital technology and opens the possibility of customized manufacture.

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies
    • /
    • s.37
    • /
    • pp.221-245
    • /
    • 2014
  • With the success of the world's first 3D computer-animated feature film, "Toy Story" (1995), the industrial development of 3D computer animation gained considerable momentum. Various 3D animations for TV were subsequently produced, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has proceeded actively alongside the expanding industrial demand in this field; compared with the traditional approach of producing animation through hand-drawing, the efficiency of 3D computer animation production is vastly greater. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation were conducted, aiming to improve the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper become baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the degree of sophistication, and the media in use.

Hox Genes are Differentially Expressed during Mouse Placentation

  • Park, Sung-Joo;Lee, Ji-Yeon;Ma, Ji-Hyun;Kim, Helena Hye-Soo;Kim, Myoung-Hee
    • Biomedical Science Letters
    • /
    • v.18 no.2
    • /
    • pp.169-174
    • /
    • 2012
  • The placenta is an extraembryonic tissue formed between mother and fetus that mediates the delivery of nutrients and oxygen from the mother to the fetus. Because of its essential role in sustaining fetal growth during gestation, defects in its development and function frequently result in fetal growth retardation or intrauterine death, depending on their severity. Vertebrate Hox genes are well-known transcription factors essential for the proper organization of the body plan during embryogenesis. However, certain Hox genes are known to be expressed in the placenta, implying that Hox genes play a crucial role not only in embryonic patterning but also in placental development. So far, no report has shown the expression pattern of the whole set of Hox genes during placentation. In this study, therefore, we investigated the Hox gene expression pattern in mouse placenta from day 10.5 to 18.5 of gestation using real-time RT-PCR. In general, the 5' posterior Hox genes were expressed more strongly in the developing placenta than the 3' Hox genes. Statistical analysis revealed that the expression of 15 Hox genes (Hoxa9, -a11, -a13/ -b8, -b9/ -c6, -c9, -c13/ -d1, -d3, -d8, -d9, -d10, -d11, -d12) changed significantly in the course of gestation. The majority of these genes showed their highest expression at gestational day 10.5, suggesting a possible role in the early stage of placental development.

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.795-802
    • /
    • 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. For recognition of the facial expressions, we first detect the face area within the image acquired from the camera; a normalization procedure is then applied for geometric and illumination correction. To classify a facial expression, we found that combining Gabor wavelets with the enhanced Fisher model gives the best result; the output is a weighting over the 7 emotions. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than linear interpolation (a minimal sketch of the two blending schemes follows this entry).
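
The difference between linear interpolation and a timing curve for blending expression weights can be sketched briefly. The ease-in/ease-out curve below is an assumed stand-in for the paper's "emotional curves", which the abstract does not define.

```python
# Minimal sketch: blending a neutral face toward an emotion weight over time,
# comparing linear interpolation with a smooth timing curve. The smoothstep
# curve is an assumed stand-in for the paper's "emotional curves".
import numpy as np

def linear(t):
    return t

def smoothstep(t):
    # Ease-in/ease-out: slow start, fast middle, slow settle.
    return t * t * (3.0 - 2.0 * t)

t = np.linspace(0.0, 1.0, 11)       # normalized animation time
neutral, smile = 0.0, 1.0            # weight of the "happy" blend shape

for curve in (linear, smoothstep):
    w = neutral + (smile - neutral) * curve(t)
    print(curve.__name__, np.round(w, 2))
```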

Effective Internal Pattern Expression Using 3D Vector Data (3D 벡터 데이터를 이용한 효과적인 내부문양 표현)

  • Park, Sung-Jun;Cho, Jin-Soo;WhangBo, Taeg-Keun
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.645-646
    • /
    • 2008
  • Silhouette extraction is widely used in many computer graphics applications. In this paper, we propose a method for extracting the 3D silhouette and internal pattern from 3D vector data. To do this, we first build an edge list, then identify the silhouette, and finally remove hidden lines. After obtaining the silhouette, we extract the internal pattern using the dihedral angle between the faces adjacent to each edge (a minimal sketch of this edge classification follows this entry). The proposed method not only improves the performance of extracting the 3D silhouette and internal pattern from 3D vector data but also reduces the computational complexity.
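
A common way to realize this kind of edge classification is to mark an edge as a silhouette edge when its two adjacent faces point toward opposite sides of the viewer, and as an internal (crease) edge when the dihedral angle between the adjacent face normals exceeds a threshold. The sketch below follows that standard scheme under assumed data structures; it is not the paper's exact algorithm.

```python
# Minimal sketch: classifying mesh edges as silhouette or internal (crease)
# edges from an edge list. Standard scheme with assumed data structures; not
# the paper's exact algorithm.
import numpy as np

def classify_edges(edge_faces, face_normals, view_dir, crease_deg=30.0):
    """edge_faces: (E, 2) indices of the two faces adjacent to each edge."""
    n1 = face_normals[edge_faces[:, 0]]
    n2 = face_normals[edge_faces[:, 1]]

    # Silhouette: one adjacent face front-facing, the other back-facing.
    silhouette = (n1 @ view_dir) * (n2 @ view_dir) < 0

    # Internal pattern (crease): large dihedral angle between adjacent faces.
    cos_dihedral = np.clip(np.einsum("ij,ij->i", n1, n2), -1.0, 1.0)
    crease = np.degrees(np.arccos(cos_dihedral)) > crease_deg

    return silhouette, crease & ~silhouette

# Tiny example: two faces sharing one edge, tilted 45 degrees apart.
face_normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.7071, 0.7071]])
edge_faces = np.array([[0, 1]])
print(classify_edges(edge_faces, face_normals, np.array([0.0, 0.0, 1.0])))
```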

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un;Seo, Yung-Ho;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.24-34
    • /
    • 2007
  • With the growth of IT and changes in daily life, application systems that give easy access to information have been developed. In this paper, we propose an application system that registers a 3D footwear model using a monocular camera. Human motion analysis has generally studied body movement; this system instead explores a new method based on foot movement. The paper presents the system pipeline and shows experimental results. To project 3D shoe model data onto the 2D foot plane, we build processes for foot tracking, the projection expression, and pose estimation, dividing the system into 2D image analysis and 3D pose estimation. First, for foot tracking, we propose a method that finds a fixed point from the characteristics of the foot, and we propose a geometric expression relating 2D and 3D coordinates so that a monocular camera can be used without calibration (a minimal sketch of such a projection follows this entry). We built the application system and measured the distance error, confirming that the registration works well.
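
The 2D-3D relation used for registration can be pictured with a basic pinhole projection: a 3D shoe-model point maps to image coordinates through a rotation, a translation, and a perspective divide. The sketch below is that textbook relation under assumed pose and focal parameters; the paper's calibration-free geometric expression is not reproduced here.

```python
# Minimal sketch: projecting 3D shoe-model points into a 2D image with a
# pinhole camera model. Pose and focal length are assumed values; the paper's
# own calibration-free formulation is not reproduced here.
import numpy as np

def project(points_3d, R, t, focal=800.0, center=(320.0, 240.0)):
    """Rotate/translate model points into the camera frame, then project."""
    cam = points_3d @ R.T + t                   # world -> camera coordinates
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)

# Toy shoe-model points and an assumed pose estimated by the tracker.
shoe_pts = np.array([[0.0, 0.0, 0.0], [0.25, 0.0, 0.0], [0.25, 0.1, 0.05]])
theta = np.radians(10.0)                        # small rotation about Y
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.0, 0.0, 1.5])                   # 1.5 m in front of the camera

print(np.round(project(shoe_pts, R, t), 1))     # 2D pixel coordinates
```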