• Title/Summary/Keyword: Facial motion

157 search results

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls a 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed in place of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

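The core idea here, a simple chrominance rule in place of a Gaussian skin model, can be illustrated with a short sketch. The following Python snippet (assuming OpenCV) approximates the idea with a plain hue band in HSV space rather than the paper's Hue-Tint linear model; the threshold values are placeholder assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

def detect_skin_region(frame_bgr):
    """Rough skin-region mask via hue thresholding (illustrative only).

    The paper models the skin chrominance distribution as a linear
    function of Hue-Tint components; here we approximate the idea with
    a simple hue band in HSV space. The bounds are placeholder values.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)    # assumed bounds
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up the mask and keep the largest connected component
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask
```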

Implementation of Hair Style Recommendation System Based on Big data and Deepfakes (빅데이터와 딥페이크 기반의 헤어스타일 추천 시스템 구현)

  • Tae-Kook Kim
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.3
    • /
    • pp.13-19
    • /
    • 2023
  • In this paper, we investigated the implementation of a hairstyle recommendation system based on big data and deepfake technology. The proposed system recognizes the facial shape from the user's photo. Facial shapes are classified into oval, round, and square, and hairstyles that suit each facial shape are synthesized using deepfake technology and provided as videos. Hairstyles are recommended from big data, applying the latest trends and styles that suit the facial shape. With an image segmentation map and the Motion Supervised Co-Part Segmentation algorithm, elements belonging to the same category (such as hair or face) can be synthesized between images. Next, the synthesized image with the new hairstyle and a pre-defined video are fed to the Motion Representations for Articulated Animation algorithm to generate a video animation. The proposed system is expected to be used in various areas of the beauty industry, including virtual fitting. In future research, we plan to study the development of a smart mirror that recommends hairstyles and incorporates Internet of Things (IoT) functionality.
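The first stage of such a pipeline, classifying a face as oval, round, or square, can be sketched as below. The measurements, ratios, and thresholds are illustrative assumptions; the paper does not publish its decision rule, and landmark extraction is left out.

```python
def classify_face_shape(face_width, face_length, jaw_width):
    """Toy face-shape classifier from three measurements (pixels).

    Placeholder heuristic: the paper classifies faces into oval, round,
    and square, but the ratios and thresholds below are illustrative
    assumptions, not the paper's method.
    """
    length_ratio = face_length / face_width   # > 1 means elongated face
    jaw_ratio = jaw_width / face_width        # near 1 means angular jaw
    if jaw_ratio > 0.9:
        return "square"
    if length_ratio < 1.25:
        return "round"
    return "oval"

print(classify_face_shape(140.0, 190.0, 110.0))  # -> "oval"
```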

A New Facial Composite Flap Model (Panorama Facial Flap) with Sensory and Motor Nerve from Cadaver Study for Facial Transplantation (얼굴이식을 위한 운동과 감각신경을 가진 중하안면피판 모델(파노라마 얼굴피판)에 대한 연구)

  • Kim, Peter Chan Woo;Do, Eon Rok;Kim, Hong Tae
    • Archives of Craniofacial Surgery
    • /
    • v.12 no.2
    • /
    • pp.86-92
    • /
    • 2011
  • Purpose: The purpose of this study was to investigate whether a dynamic facial composite flap with sensory and motor nerves could be made available from donor facial composite tissue. Methods: The faces of 3 human cadavers were dissected. The authors studied the donor faces to assess which facial composite model would be most practicable. A "panorama facial flap" was excised from each facial skeleton with circumferential incision of the oral mucosa, lower conjunctiva, and endonasal mucosa. In addition, the authors measured the available lengths of the arterial and venous pedicles and of the sensory nerves. In the recipient, the authors evaluated the time required for vessel anastomoses and nerve coaptations, anchor stitches for donor flaps, and skin stitches for closure. Results: In the panorama facial flap, the available anastomosing vessels were the facial artery and vein. The sensory nerves requiring anastomosis were the infraorbital nerve and inferior alveolar nerve; the motor nerve requiring anastomosis was the facial nerve. The vascular pedicle of the panorama facial flap is the facial artery and vein, whose greatest available lengths were 78 mm and 48 mm, respectively. Sensation of the donor facial composite is supplied by the infraorbital and inferior alveolar nerves; motion is supplied by the facial nerve, some branches of which can be anastomosed if necessary. Conclusion: The most practical facial composite flap would be a mid and lower face flap, and we propose a panorama facial flap designed to incorporate the mid and lower facial skin with the unique tissue of the lip. The panorama facial composite flap could be considered one of the practicable basic models for facial allotransplantation.

Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.3
    • /
    • pp.9-16
    • /
    • 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because of the difference in the capture resolution required for full-body capture and for facial expression capture, the two have rarely been performed simultaneously. For natural animation, however, simultaneous capture of body and face is essential. To this end, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis (PCA) to reduce its dimensionality; the dimensionality reduction enables us to estimate the full data from a part of the data. We demonstrate the viability of the method by applying it to dynamic scenes.
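The reconstruction idea, estimating all marker positions from a sparse subset via a PCA basis learned from the database, can be sketched in a few lines of NumPy. The database, marker layout, and component count below are placeholders, not the author's data.

```python
import numpy as np

# Illustrative sketch of PCA-based reconstruction of a full marker set
# from a sparse subset (not the author's code). X holds the expression
# database: one row per captured frame, one column per marker coordinate.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 90))          # placeholder database

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10                                       # assumed number of components
B = Vt[:k].T                                 # PCA basis: (90, k)

# Suppose only a few marker coordinates are observed in a new frame.
observed_idx = np.arange(0, 90, 9)           # assumed sparse marker layout
y = X[0][observed_idx]                       # partial observation

# Solve for PCA coefficients using only the observed rows of the basis,
# then reconstruct every marker coordinate from those coefficients.
c, *_ = np.linalg.lstsq(B[observed_idx], y - mean[observed_idx], rcond=None)
full = mean + B @ c                          # estimated full marker set
print(full.shape)                            # (90,)
```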

Facial Expression Animation Using Anatomy-Based 3D Face Modelling (해부학 기반의 3차원 얼굴 모델링을 이용한 얼굴 표정 애니메이션)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.2
    • /
    • pp.328-333
    • /
    • 2003
  • This paper animates facial expressions by blending the motions of 18 anatomically based muscle pairs that influence facial expression changes. After fitting and deforming a mesh to an individual's images to build a standard model, the mesh is texture-mapped with the individual's front and side facial images to increase realism. The muscle model that drives the animation is a modified version of Waters' muscle model for facial expression generation. A deformed, texture-mapped face is created using this method, and the six facial expressions proposed by Ekman are animated.
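Both this entry and the first result above build on Waters' linear muscle model. A minimal sketch of a common textbook formulation follows: a vertex inside the muscle's zone of influence is pulled toward the attachment point with angular and radial falloff. Parameter values are placeholders, not either paper's settings.

```python
import numpy as np

def waters_linear_muscle(p, v1, v2, k, omega, rs, rf):
    """Displace vertex p under Waters' linear muscle model (illustrative).

    v1: muscle attachment (static end), v2: insertion (skin end),
    k: contraction factor, omega: angular zone half-width (radians),
    rs/rf: radial falloff start/finish distances from v1.
    A common textbook formulation; positive k pulls p toward v1.
    """
    d_vec = p - v1
    dist = np.linalg.norm(d_vec)
    if dist == 0 or dist > rf:
        return p
    muscle = (v2 - v1) / np.linalg.norm(v2 - v1)
    cos_mu = float(np.clip(np.dot(d_vec / dist, muscle), -1.0, 1.0))
    mu = np.arccos(cos_mu)
    if mu > omega:                 # outside the angular zone of influence
        return p
    a = np.cos(mu)                 # angular falloff
    if dist < rs:                  # radial falloff toward the attachment
        r = np.cos((1.0 - dist / rs) * np.pi / 2.0)
    else:                          # radial falloff toward the boundary
        r = np.cos((dist - rs) / (rf - rs) * np.pi / 2.0)
    return p - a * r * k * (d_vec / dist)   # contraction pulls toward v1

p_new = waters_linear_muscle(np.array([1.0, 0.5]), np.array([0.0, 0.0]),
                             np.array([2.0, 0.0]), k=0.3,
                             omega=np.pi / 3, rs=0.7, rf=2.0)
print(p_new)
```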

Automatic Estimation of 2D Facial Muscle Parameter Using Neural Network (신경회로망을 이용한 2D 얼굴근육 파라메터의 자동인식)

  • 김동수;남기환;한준희;배철수;권오흥;나상동
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.1029-1032
    • /
    • 1999
  • Muscle-based face image synthesis is one of the most realistic approaches to realizing a life-like agent in computers. The facial muscle model is composed of facial tissue elements and muscles. In this model, the forces acting on the facial tissue elements are calculated from the contraction strength of each muscle, so the combination of muscle parameters determines a specific facial expression. Currently, each muscle parameter is determined through a trial-and-error procedure, comparing the sample photograph with the image generated by our Muscle-Editor, to produce a specific face image. In this paper, we propose a strategy for automatically estimating facial muscle parameters from 2D marker movement using a neural network. This also enables 3D motion estimation from 2D point or flow information in the captured image under the constraints of a physics-based face model.

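The estimation strategy, learning the inverse mapping from 2D marker movement to muscle contraction parameters with a neural network, can be sketched as a multi-output regression. The forward model and data below are synthetic stand-ins; in the paper, training pairs would come from the physics-based muscle model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch of the mapping the paper learns: 2D marker
# displacements -> muscle contraction parameters. The data here is
# synthetic; real training pairs would come from the muscle simulator.
rng = np.random.default_rng(0)
n_markers, n_muscles = 14, 18
W = rng.standard_normal((n_muscles, 2 * n_markers))   # fake forward model

params = rng.uniform(0.0, 1.0, size=(500, n_muscles))         # muscle params
markers = params @ W + 0.01 * rng.standard_normal((500, 2 * n_markers))

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(markers, params)                      # learn the inverse mapping

test_markers = params[:1] @ W
print(net.predict(test_markers))              # estimated muscle parameters
```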

Facial Behavior Recognition for Driver's Fatigue Detection (운전자 피로 감지를 위한 얼굴 동작 인식)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9C
    • /
    • pp.756-760
    • /
    • 2010
  • This paper proposes a novel facial behavior recognition system for detecting driver fatigue. Facial behavior manifests in various facial cues, such as expression, head pose, gaze, and wrinkles, but it is very difficult to discriminate a particular behavior from the obtained features alone, because human behavior is complicated and the face provides only vague, partial information about it. The proposed system first performs facial feature detection, including eye tracking, facial feature tracking, furrow detection, head orientation estimation, and head motion detection, and encodes the obtained features as Action Units (AUs) of the Facial Action Coding System (FACS). On the basis of the obtained AUs, it infers the probability of each state through a Bayesian network.
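The final inference step can be illustrated with a naive-Bayes style update over observed AUs; the paper uses a full Bayesian network, and all probabilities below are placeholder values chosen for the example, not measured data.

```python
# Illustrative fatigue inference from observed AUs (not the paper's
# network). All probabilities are placeholder values.
p_fatigue = 0.2                          # prior P(fatigue)

# (P(AU observed | fatigue), P(AU observed | alert)) per cue.
likelihoods = {
    "AU43_eye_closure": (0.7, 0.1),
    "AU26_jaw_drop":    (0.5, 0.1),      # e.g. yawning
    "head_nod":         (0.6, 0.2),
}

def posterior_fatigue(observed_aus):
    """Naive-Bayes style update: AUs assumed independent given the state."""
    p_f, p_a = p_fatigue, 1.0 - p_fatigue
    for au in observed_aus:
        l_f, l_a = likelihoods[au]
        p_f *= l_f
        p_a *= l_a
    return p_f / (p_f + p_a)

print(posterior_fatigue(["AU43_eye_closure", "head_nod"]))   # 0.84
```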

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA
    • /
    • v.11A no.4
    • /
    • pp.277-284
    • /
    • 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the system creates more clusters for the new zoom level; each time the zoom-in level increases, the number of clusters doubles. The user selects new key frames along the navigation path of the previous level, and at the maximum zoom-in completes the facial expression control specification. The user can also return to a previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate the system based on the results.
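The partitioning step, fuzzy clustering with the cluster count doubling at each zoom level, can be sketched as follows. This is a plain fuzzy c-means, not the authors' implementation, and the data is a random placeholder for the 2,400 captured frames.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means (illustrative, not the authors' code)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return centers

# Hierarchical visualization idea: double the cluster count per zoom level.
X = np.random.default_rng(1).standard_normal((2400, 2))  # placeholder data
for level, k in enumerate([11, 22, 44]):
    centers = fuzzy_cmeans(X, k)
    print(f"zoom level {level}: {len(centers)} candidate key frames")
```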

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies
    • /
    • s.37
    • /
    • pp.221-245
    • /
    • 2014
  • With the success of the world's first 3D computer-animated film, "Toy Story", in 1995, industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations for TV were produced, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has been pursued actively, in line with expanding industrial demand in this field. Moreover, compared with the traditional approach of producing animation through hand drawings, the efficiency of producing 3D computer animation is far greater. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation were conducted, aiming to improve the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, though relatively less sophisticated, provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper become baseline data for selecting the appropriate motion capture or key-frame animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the degree of sophistication, and the media in use.

A study of facial nerve grading system (구안와사(口眼喎斜)의 평가방법(評價方法)에 대한 고찰(考察))

  • Kim, Jong-In;Koh, Hyung-Kyun;Kim, Chang-Hwan
    • Journal of Acupuncture Research
    • /
    • v.18 no.2
    • /
    • pp.1-17
    • /
    • 2001
  • Background and Objective: The lack of uniformity in reporting facial nerve recovery in patients with facial nerve paralysis has been a major obstacle to comparing treatment modalities. Objective evaluation of facial nerve function is a complex procedure. The House-Brackmann grading system and the Yanagihara grading system have been recommended as universal standards for assessing the degree of facial nerve palsy. However, clinical studies on the treatment of facial palsy in oriental medicine have rarely used these universal standards, which is the reason for analyzing facial nerve grading systems here. Material and Method: We chose 10 scales reported from 1955 to 1995. These facial nerve grading systems may be classified as gross, regional, and specific systems. Result and Conclusion: The scales of Botmann and Jonkees, May, Peitersen, and House and Brackmann are gross facial nerve grading systems, with which facial motor dysfunction and secondary defects are assessed globally; among these, the H-B scale is the most widespread. The scales of Yanagihara (若杉文吉), Smith, Adour and Swanson, Jassen, and FEMA are regional facial nerve grading systems, in which facial motor dysfunction and secondary defects are scored region by region, with or without weighting: the scales of Yanagihara (若杉文吉) and Smith are unweighted regional scales, while those of Adour and Swanson, Jassen, and FEMA are weighted regional systems. The scale of Stennert is a specific facial nerve grading system that separately assesses the degree of facial dysfunction at rest, in motion, and the secondary defects. For objective evaluation of oriental medicine treatments for facial palsy, a universal standard scale, i.e., the H-B scale or the Yanagihara scale, should be used.

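The distinction between unweighted and weighted regional grading can be made concrete with a toy calculation; the regions, point scale, and weights below are illustrative placeholders, not any published scale's exact values.

```python
# Illustrative contrast between an unweighted and a weighted regional
# facial nerve score (placeholder values, not a published scale).
regional_scores = {          # 0 (no motion) .. 4 (normal), per region
    "forehead": 3, "eye": 2, "nose": 4, "mouth": 1, "cheek": 3,
}

# Unweighted (Yanagihara-style idea): every region counts equally.
unweighted = sum(regional_scores.values())

# Weighted (Adour-Swanson-style idea): regions weighted by assumed
# functional importance; these weights are placeholders.
weights = {"forehead": 0.5, "eye": 1.5, "nose": 0.5, "mouth": 1.5, "cheek": 1.0}
weighted = sum(weights[r] * s for r, s in regional_scores.items())

print(unweighted, weighted)   # 13 and 11.0
```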