• Title/Summary/Keyword: Facial Model


Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong;Kim, Ki-Hong;Lee, David-Junesok;Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.129-135 / 2022
  • Real human digital models appear increasingly often in VR/AR applications, where real-time markerless facial capture animation of personalized virtual human faces is an important research topic. The traditional way to achieve personalized real-human facial animation requires several experienced animators, and in practice the complex process and difficult technology can be an obstacle to inexperienced users. This paper proposes a new workflow for this kind of work that costs less and takes less time than traditional production methods. Starting from a personalized real-human face model obtained through 3D reconstruction, the model is first retopologized with R3ds Wrap; Avatary is then used to build the 52 blendshape model files required by ARKit; finally, real-time markerless facial motion capture of the 3D real-human model is realized on the UE4 platform. The study makes rational use of the strengths of each software package and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D real-human models; the proposed process can be helpful to other researchers working on similar problems.
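
The 52 ARKit-style blendshapes mentioned above form a linear basis over the neutral mesh: each captured frame supplies 52 weights that blend per-shape vertex offsets. A minimal NumPy sketch of that evaluation (array names and sizes are illustrative, not from the paper):

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Linear blendshape model: V = V0 + sum_i w_i * (B_i - V0).

    neutral: (N, 3) neutral-pose vertex positions
    deltas:  (K, N, 3) per-blendshape vertex offsets (B_i - V0),
             e.g. K = 52 for an ARKit-style rig
    weights: (K,) capture coefficients in [0, 1] streamed per frame
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage: 4 vertices, 52 shapes, one random capture frame.
rng = np.random.default_rng(0)
neutral = rng.standard_normal((4, 3))
deltas = rng.standard_normal((52, 4, 3)) * 0.01
weights = rng.uniform(0.0, 1.0, 52)
frame = evaluate_blendshapes(neutral, deltas, weights)
print(frame.shape)  # (4, 3)
```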

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues that must be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model and template matching detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked with the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are moved with a radial basis function (RBF). Experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
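
The second fitting step above moves non-feature vertices with a radial basis function. A minimal sketch of Gaussian-RBF interpolation of control-point displacements, assuming a plain scattered-data formulation (all names are illustrative):

```python
import numpy as np

def rbf_deform(control_pts, control_disp, query_pts, sigma=0.1):
    """Propagate control-point displacements to nearby vertices
    with a Gaussian RBF, as in feature-driven face deformation.

    control_pts:  (M, 3) tracked feature points
    control_disp: (M, 3) displacements derived from animation parameters
    query_pts:    (N, 3) non-feature vertices to move
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))

    # Solve for per-control weights, then evaluate at the queries.
    K = kernel(control_pts, control_pts)            # (M, M)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(K)), control_disp)
    return kernel(query_pts, control_pts) @ w       # (N, 3) displacements
```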

Korean Facial Expression Emotion Recognition based on Image Meta Information (이미지 메타 정보 기반 한국인 표정 감정 인식)

  • Hyeong Ju Moon;Myung Jin Lim;Eun Hee Kim;Ju Hyun Shin
    • Smart Media Journal / v.13 no.3 / pp.9-17 / 2024
  • Due to the recent pandemic and advances in ICT, the use of non-face-to-face and unmanned systems is expanding, and understanding emotion in non-face-to-face communication has become very important. Since emotion recognition across diverse facial expressions is needed to understand emotion, artificial-intelligence-based research seeks to improve facial expression emotion recognition on image data. However, existing work requires high computing power and long training time because it relies on large amounts of data to improve accuracy. To address these limitations, this paper proposes a method of recognizing facial expression emotion from even a small amount of data by using age and gender, which are image meta information. For facial expression emotion recognition, a face is detected in the original image data with the Yolo Face model, age and gender are classified with a VGG model based on the image meta information, and then seven emotions are recognized with an EfficientNet model. Comparing the meta-information-based classification model with a model trained on all of the data showed that the proposed classification learning model achieved higher accuracy.
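
A hedged PyTorch sketch of the staged pipeline described above: meta information (age, gender) is predicted first and then used alongside an expression classifier. The backbone heads, class counts, and routing are assumptions for illustration, and the Yolo Face detection step is stubbed out with a random crop:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: age/gender from a detected face crop (VGG backbone).
meta_net = models.vgg16(weights=None)
meta_net.classifier[6] = nn.Linear(4096, 2 + 4)  # 2 genders + 4 age bands (assumed split)

# Stage 2: seven emotions (EfficientNet backbone).
emotion_net = models.efficientnet_b0(weights=None)
emotion_net.classifier[1] = nn.Linear(1280, 7)

face = torch.randn(1, 3, 224, 224)  # stand-in for a Yolo Face crop
meta_logits = meta_net(face)
gender = meta_logits[:, :2].argmax(1)
age_band = meta_logits[:, 2:].argmax(1)
# In the paper, (age, gender) decide which data split/model handles the
# face; here a single emotion net stands in for that routing.
emotion = emotion_net(face).argmax(1)
print(gender.item(), age_band.item(), emotion.item())
```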

A 3D facial Emotion Editor Using a 2D Comic Model (2D 코믹 모델을 이용한 3D 얼굴 표정 에디터)

  • 이용후;김상운;청목유직
    • Proceedings of the IEEK Conference / 2000.06d / pp.226-229 / 2000
  • A 2D comic model, a comic-style line-drawing model having only eyebrows, eyes, a nose, and a mouth, can generate facial expressions with a much smaller number of points than a 3D model. In this paper we propose a 3D emotion editor using a 2D comic model, in which emotional expressions are represented with the action units (AUs) of FACS. Experiments show that the proposed method could be used efficiently for intelligent sign-language communication between avatars of different languages in Internet cyberspace.
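
For context, FACS action units (AUs) combine into expressions. A small illustrative mapping using standard Ekman-style AU combinations (the paper's exact AU set is not given here, and the link from AUs to comic-model control points is hypothetical):

```python
# Illustrative FACS action-unit combinations for basic emotions
# (standard Ekman-style pairings; the paper's exact AU set may differ).
AU_COMBINATIONS = {
    "happiness": [6, 12],        # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers + upper lid raiser + jaw drop
    "anger":     [4, 5, 7, 23],  # brow lowerer + lid tighteners + lip tightener
    "fear":      [1, 2, 4, 5, 20, 26],
    "disgust":   [9, 15],        # nose wrinkler + lip corner depressor
}

def active_points(emotion, au_to_points):
    """Map an emotion to the comic-model control points its AUs move.

    au_to_points: dict AU -> list of 2D control-point indices
                  (hypothetical; depends on the comic model's layout).
    """
    return sorted({p for au in AU_COMBINATIONS[emotion]
                     for p in au_to_points.get(au, [])})
```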


Designing and Implementing 3D Virtual Face Aesthetic Surgery System (3D 가상 얼굴 성형 제작 시스템 설계 및 구현)

  • Lee, Cheol-Woong;Kim, Il-Min;Cho, Sae-Hong
    • Journal of Digital Contents Society / v.9 no.4 / pp.751-758 / 2008
  • The purpose of this study is to implement a 3D face model that resembles the user by using 3D graphics techniques. The implemented 3D face model is then used to build a 3D facial aesthetic surgery system that can increase patient satisfaction by comparing the face before and after aesthetic surgery. To design and implement the system, 3D modeling, texture mapping for skin, and a database system for facial data were studied and implemented independently. A detailed adjustment system was also implemented to reflect the minute features of a face. The implemented 3D facial aesthetic surgery system shows better accuracy, convenience, and satisfaction compared with existing systems.
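
The detailed adjustment system described above amounts to localized vertex editing. A minimal sketch of one such adjustment with a Gaussian falloff (the function name, parameters, and falloff choice are assumptions, not the paper's method):

```python
import numpy as np

def local_adjust(vertices, center, direction, radius=1.0, strength=1.0):
    """Push vertices near `center` along `direction` with smooth falloff,
    e.g. to preview a nose or chin change before/after surgery.

    vertices:  (N, 3) face-mesh vertex positions
    center:    (3,) anchor of the adjustment (e.g. nose tip)
    direction: (3,) unit direction of the change
    """
    d2 = ((vertices - center) ** 2).sum(axis=1)
    falloff = np.exp(-d2 / (2.0 * radius**2))     # Gaussian weight per vertex
    return vertices + strength * falloff[:, None] * direction
```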


Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each with 36 reflective markers on their faces. The facial movement data were reduced to 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were learned and recognized for each monosyllable using a hidden Markov model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
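
The recognition stage trains one HMM per monosyllable and scores parameter sequences with the Viterbi algorithm. A self-contained log-domain Viterbi sketch, assuming discretized observations for brevity (the paper's 11 parameters are continuous):

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most-likely HMM state path and its score for one observation sequence.

    log_pi: (S,)   log initial state probabilities
    log_A:  (S, S) log transition matrix
    log_B:  (S, V) log emission matrix (discretized observations)
    obs:    (T,)   observation symbol indices
    """
    S, T = len(log_pi), len(obs)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A            # (S, S): prev -> next
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    # Recognition compares the max-path score across the per-syllable HMMs.
    return delta.max(), path[::-1]
```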

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering / v.11 no.3 / pp.207-215 / 2013
  • In this paper, we address the challenging computer vision problem of obtaining reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face and deform a 3D generic face to fit the input face. Next, our real-time 3D head tracking module tracks the person's head in 3D and predicts 2D facial landmark positions via projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted facial expression recognition experiments using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions, a considerable step toward new applications in human-computer interaction, behavioral science, robotics, and games.
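
The key prediction step in the loop above projects landmarks of the updated 3D face model into the image to re-seed the 2D tracker. A minimal pinhole-camera sketch (the intrinsics and pose values are illustrative):

```python
import numpy as np

def project_landmarks(points_3d, R, t, K):
    """Predict 2D landmark positions from 3D model landmarks.

    points_3d: (N, 3) landmark positions on the fitted 3D face model
    R, t:      head pose from the 3D tracker (3x3 rotation, (3,) translation)
    K:         3x3 camera intrinsic matrix
    """
    cam = points_3d @ R.T + t          # model -> camera coordinates
    uvw = cam @ K.T                    # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide -> (N, 2)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 0.5], [0.03, -0.02, 0.48]])  # toy landmarks (m)
uv = project_landmarks(pts, np.eye(3), np.zeros(3), K)
print(uv)  # predicted 2D positions used to update the 2D tracker
```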

Extraction of Facial Region Using Fuzzy Color Filter (퍼지 색상 필터를 이용한 얼굴 영역 추출)

  • Kim, M.H.;Park, J.B.;Jung, K.H.;Joo, Y.H.;Lee, J.;Cho, Y.J.
    • Proceedings of the KIEE Conference / 2004.11c / pp.147-149 / 2004
  • Although face region extraction is an important part of pattern recognition with diverse applications, no definitive solution to the problem exists. Developing a facial region extraction algorithm is difficult because facial images are very sensitive to age, sex, and illumination. In this paper, to overcome these difficulties, a facial region extraction algorithm based on a fuzzy color filter is proposed. The fuzzy color filter enables robust facial region extraction by modeling the skin color, and it is especially robust under various illuminations. In addition, a linear matrix inequality (LMI) optimization method is used to identify the fuzzy color filter. Finally, simulation results confirm the superiority of the proposed algorithm.
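
A fuzzy color filter of this kind assigns each pixel a graded skin membership rather than a hard threshold. A minimal sketch using trapezoidal membership over the YCbCr chroma channels (the breakpoints are common skin-tone heuristics, not the membership functions the paper identifies via LMI optimization):

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, 1 on [b, c], 0 above d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def skin_membership(img_ycbcr):
    """Per-pixel fuzzy skin degree from the Cb/Cr channels.

    img_ycbcr: (H, W, 3) image in (Y, Cb, Cr) order, uint8.
    The breakpoints below are generic skin-chroma heuristics.
    """
    cb = img_ycbcr[..., 1].astype(float)
    cr = img_ycbcr[..., 2].astype(float)
    return np.minimum(trapezoid(cb, 77, 90, 120, 135),
                      trapezoid(cr, 130, 140, 165, 180))

# Usage idea: keep the graded map for robustness, or threshold it,
# e.g. mask = skin_membership(ycbcr) > 0.5
```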


Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.126-132 / 2000
  • This paper presents facial expression recognition based on Gabor wavelets using a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, the Gabor wavelet representation provides edge extraction of major face components using the average value of the image's 2D Gabor wavelet coefficient histogram. In the next step, sparse features of facial expressions are extracted from the edge information with the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions but also expressions of various internal states.
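
The first feature-extraction step builds a Gabor wavelet representation for edge extraction. A minimal OpenCV sketch of a small Gabor filter bank (the filter parameters are illustrative, and the FCM clustering stage is omitted for brevity):

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0):
    """A small bank of Gabor kernels at 8 orientations."""
    thetas = np.arange(0, np.pi, np.pi / 8)
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd,
                               gamma=0.5, psi=0) for t in thetas]

def gabor_edges(gray):
    """Max response over the bank, a stand-in for the paper's
    histogram-thresholded edge extraction over major face components."""
    responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in gabor_bank()]
    return np.max(np.stack(responses), axis=0)
```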


A Study on Facial Blendshape Rig Cloning Method Based on Deformation Transfer Algorithm (메쉬 변형 전달 기법을 통한 블렌드쉐입 페이셜 리그 복제에 대한 연구)

  • Song, Jaewon;Im, Jaeho;Lee, Dongha
    • Journal of Korea Multimedia Society / v.24 no.9 / pp.1279-1284 / 2021
  • This paper addresses the task of transferring facial blendshape models to an arbitrary target face. Blendshapes are a common method for facial rigging; however, producing a blendshape rig is a time-consuming step in the current facial animation pipeline. We propose automatic blendshape facial rigging based on our blendshape transfer method, which computes the difference between the source and target facial models and then transfers the source blendshapes to the target face with a deformation transfer algorithm. Our automatic method enables efficient production of a controllable digital human face; the results can be applied to applications such as games, VR chatting, and AI agent services.
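
In its simplest form, blendshape cloning copies per-vertex deltas from the source rig onto a correspondence-aligned target; full deformation transfer instead maps per-triangle deformation gradients. A minimal sketch of the delta-copy baseline (names and the correspondence input are assumptions):

```python
import numpy as np

def transfer_blendshapes(src_neutral, src_shapes, tgt_neutral, corr):
    """Clone a blendshape rig onto a new face by copying vertex deltas.

    src_neutral: (Ns, 3) source neutral mesh
    src_shapes:  (K, Ns, 3) source blendshape meshes
    tgt_neutral: (Nt, 3) target neutral mesh
    corr:        (Nt,) index of the corresponding source vertex for each
                 target vertex (from registration, e.g. wrapped topology)
    """
    deltas = src_shapes - src_neutral[None]          # (K, Ns, 3)
    return tgt_neutral[None] + deltas[:, corr, :]    # (K, Nt, 3)
```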