• Title/Summary/Keyword: 3-D facial model

135 search results

Automatic Generation of the Personal 3D Face Model (3차원 개인 얼굴 모델 자동 생성)

  • Ham, Sang-Jin;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.104-114 / 1999
  • This paper proposes an efficient method for the automatic generation of a personalized 3D face model from a color image sequence. To detect the facial region robustly against a complex background, a moving-color detection technique based on the facial color distribution is suggested. Color distribution and edge position information in the detected face region are then used to extract the 31 facial feature points of the facial description parameter (FDP) set proposed by the MPEG-4 SNHC (Synthetic-Natural Hybrid Coding) ad hoc group. The extracted feature points are applied to the corresponding vertices of a generic 3D face model composed of 1038 triangular mesh points. The personalized 3D face model can be generated automatically in less than 2 seconds on a Pentium PC.

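As a rough illustration of the fitting step above, the sketch below moves each FDP-matched vertex of the generic model to its detected feature position and lets the displacement fall off smoothly over nearby vertices. It is a generic scattered-data deformation with illustrative names and inputs (`vertices`, `fdp_to_vertex`, `fdp_targets`), not the paper's exact algorithm.

```python
import numpy as np

def deform_generic_model(vertices, fdp_to_vertex, fdp_targets, sigma=0.1):
    """vertices: (V, 3) generic-model vertex positions.
    fdp_to_vertex: {fdp_id: vertex index}; fdp_targets: {fdp_id: (3,) point}.
    Moves each matched vertex to its detected position and propagates the
    displacement to nearby vertices with a Gaussian falloff."""
    deformed = vertices.copy()
    for fdp_id, v_idx in fdp_to_vertex.items():
        disp = np.asarray(fdp_targets[fdp_id]) - vertices[v_idx]  # needed shift
        dist = np.linalg.norm(vertices - vertices[v_idx], axis=1)
        weight = np.exp(-(dist / sigma) ** 2)  # 1 at the matched vertex, ~0 far away
        deformed += weight[:, None] * disp
    return deformed
```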

Human Face Recognition and 3-D Human Face Modelling (얼굴 영상 인식 및 3차원 얼굴 모델 구현 알고리즘)

  • 이효종;이지항
    • Proceedings of the IEEK Conference / 2000.11c / pp.113-116 / 2000
  • Human face recognition and 3D human face reconstruction are studied in this paper. To find the facial feature points, edges are extracted from the input image and the accumulated histogram of the edge information is analyzed. A generic face model, implemented with OpenGL and built from 500 polygons, is used to display the 3D human face. To make the model more realistic, we propose a group-matching mapping method between the extracted facial feature points and those of the generic face model. The personalized 3D human face model, which resembles the real face, can be generated automatically in less than 5 seconds on a Pentium PC.

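The accumulated edge histogram idea above can be sketched as an integral projection: sum edge pixels per image row and take the strongest rows as candidate feature lines. This is a simplified reading with illustrative parameters, not the authors' implementation.

```python
import numpy as np

def feature_rows_from_edges(edge_map, num_rows=3):
    """edge_map: binary (H, W) edge image of a cropped face.
    Accumulates edge pixels per row and returns the rows with the strongest
    smoothed response; on a face these tend to align with eyes, nose, mouth."""
    row_hist = edge_map.sum(axis=1).astype(float)
    smooth = np.convolve(row_hist, np.ones(5) / 5.0, mode="same")  # damp noise
    return np.argsort(smooth)[-num_rows:][::-1]  # strongest rows first
```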

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions (얼굴의 다양한 포즈 및 표정의 변환에 따른 얼굴 인식률 향상에 관한 연구)

  • Choi Jae-Young;Whangbo Taeg-Keun;Kim Nak-Bin
    • Journal of Internet Computing and Services / v.5 no.6 / pp.79-91 / 2004
  • Detecting and recognizing faces across various poses has been a difficult problem, because the distribution of non-frontal poses in a feature space is more dispersed and more complicated than that of frontal faces. This thesis proposes a robust pose- and expression-invariant face recognition method to overcome the shortcomings of existing face recognition systems. First, we apply the TSL color model to detect the facial region and estimate the direction of the face from facial features; the estimated pose vector is decomposed along the X, Y, and Z axes. Second, the input face is mapped onto a deformable template using these vectors and the 3D CANDIDE face model. Finally, the mapped face is transformed into a frontal face suitable for recognition using the estimated pose vector. The experiments validate the face detection model and the pose estimation method, and show that the recognition rate is greatly boosted by normalizing poses and expressions.

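The final normalization step above, undoing an estimated pose so the face becomes frontal, comes down to applying the inverse of an X-Y-Z Euler rotation. A minimal sketch, assuming the pose is given as three angles in radians and the landmarks as an (n, 3) array; translation and scale, which a real normalization would also estimate, are ignored here:

```python
import numpy as np

def rotation_xyz(rx, ry, rz):
    """Rotation matrix for Euler angles (radians) about the X, Y, Z axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def to_frontal(points, pose):
    """points: (n, 3) landmarks observed under head pose (rx, ry, rz).
    Right-multiplying row vectors by R applies R^T, i.e. the inverse
    rotation, yielding a frontal configuration."""
    return points @ rotation_xyz(*pose)
```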

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers / v.63 no.6 / pp.813-819 / 2014
  • The purpose of this study was to extract accurate facial movement features using a 3D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables exhibiting particular patterns, such as nose, lip, jaw and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data were converted into the 11 parameters and represented as a pattern for each monosyllable. These parameter patterns were then learned and recognized per monosyllable with speech recognition algorithms based on the Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy over the 11 monosyllables was 97.2%, which suggests the feasibility of recognizing Korean speech through quantitative facial movement analysis.
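
The recognition stage above relies on HMMs decoded with the Viterbi algorithm. Below is a minimal discrete-observation Viterbi decoder; since the paper's models are built per monosyllable from continuous facial-movement parameters, those would in practice first be quantized or modeled with Gaussian emissions (an assumption here, not stated by the abstract).

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: observation symbol indices; pi: (N,) initial state probs;
    A: (N, N) transitions; B: (N, M) emissions (all assumed non-zero).
    Returns the most likely state path and its log-probability."""
    T = len(obs)
    logp = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(A)  # scores[i, j]: reach state j from i
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):           # follow back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(logp.max())
```

Recognition then amounts to scoring one trained HMM per monosyllable and choosing the best-scoring model.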

Feature Extraction Method of 2D-DCT for Facial Expression Recognition (얼굴 표정인식을 위한 2D-DCT 특징추출 방법)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering / v.3 no.3 / pp.135-138 / 2014
  • This paper devises a facial expression recognition method robust to overfitting, using 2D-DCT and the EHMM algorithm. In particular, it achieves better recognition performance by using a large window size for 2D-DCT feature extraction when computing the observation vectors of the EHMM. Experiments on the CK and JAFFE facial expression databases showed that recognition accuracy improved as the window size grew. When the CK database was used for training and the JAFFE database for testing, the proposed method achieved a recognition accuracy of 87.79%, an improvement of 46.01% to 50.05% over previous histogram-feature-based approaches.
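
The feature extraction above, taking low-frequency 2D-DCT coefficients from a window slid over the face image as EHMM observation vectors, can be sketched as follows; the window, step, and coefficient counts are illustrative, not the values tuned in the paper:

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Orthonormal 2D-DCT of an image block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def observation_vectors(img, win=16, step=4, keep=6):
    """Slide a win x win window over img with the given step and keep the
    top-left keep x keep low-frequency DCT coefficients of each window
    as one observation vector for the embedded HMM."""
    h, w = img.shape
    obs = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            coeffs = dct2(img[y:y + win, x:x + win].astype(float))
            obs.append(coeffs[:keep, :keep].ravel())
    return np.array(obs)
```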

Facial Expression Animation Using Anatomy-Based 3D Face Modelling (해부학 기반의 3차원 얼굴 모델링을 이용한 얼굴 표정 애니메이션)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.328-333 / 2003
  • This paper performs facial expression animation using the 18 anatomically grounded muscle pairs that influence changes of facial expression, blending their motions. After fitting and deforming a mesh to an individual's image to build a standard model, the individual's front and side facial images are mapped onto the mesh to increase realism. The muscle model that drives the animation is a modification of Waters' muscle model for facial expression generation. Using this method, a textured, deformable face is created, and the six facial expressions proposed by Ekman are animated.
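
Waters' muscle model, which the paper modifies, displaces mesh vertices toward a muscle's attachment point with angular and radial falloff. A simplified sketch of one linear muscle acting on a vertex array; the falloff shapes and constants are illustrative, not the paper's tuning:

```python
import numpy as np

def linear_muscle(vertices, v1, v2, contraction, r_start, r_end, half_angle):
    """Pull vertices inside a cone of influence around the muscle vector
    v1 -> v2 toward the attachment point v1, with cosine falloff in both
    angle and radial distance (Waters-style linear muscle, simplified)."""
    out = vertices.copy()
    axis = (v2 - v1) / np.linalg.norm(v2 - v1)
    for i, p in enumerate(vertices):
        d = p - v1
        r = np.linalg.norm(d)
        if r < 1e-9 or r > r_end:
            continue                      # outside radial influence
        ang = np.arccos(np.clip(np.dot(d / r, axis), -1.0, 1.0))
        if ang > half_angle:
            continue                      # outside angular influence
        a_fall = np.cos(ang / half_angle * np.pi / 2)
        r_fall = 1.0 if r <= r_start else np.cos(
            (r - r_start) / (r_end - r_start) * np.pi / 2)
        out[i] = p - contraction * a_fall * r_fall * (d / r)  # toward v1
    return out
```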

A Comic Emotional Expression for 3D Facial Model (3D 얼굴 모델의 코믹한 감정 표현)

  • ;;Shin Tanahashi;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 1999.11a / pp.536-539 / 1999
  • In this paper we propose a 3D emotional expression method that uses a comic model for effective sign-language communication. Research so far has aimed at producing ever more realistic facial and emotional expressions; when only emotion needs to be conveyed, however, a comic expression can work better than a realistic face. The comic face is a comic-style model in which almost all components are discarded except the essential parts such as the eyebrows, eyes, nose and mouth. We represent emotional expression using the Action Units (AUs) of the Facial Action Coding System (FACS). Experimental results show that the proposed method could be used efficiently for sign-language image communication.

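A comic face driven by FACS comes down to a small table from emotions to action units. The sketch below uses AU combinations commonly cited for the six basic emotions; the paper's exact AU sets are not given in the abstract, so these mappings are assumptions:

```python
# AU combinations commonly cited for the six basic emotions (one of
# several published variants; assumed here, not taken from the paper).
EMOTION_AUS = {
    "happiness": (6, 12),
    "sadness":   (1, 4, 15),
    "surprise":  (1, 2, 5, 26),
    "fear":      (1, 2, 4, 5, 7, 20, 26),
    "anger":     (4, 5, 7, 23),
    "disgust":   (9, 15, 16),
}

def expression_weights(emotion, intensity=1.0):
    """Map an emotion to {AU: weight} for driving the comic model's few
    movable parts (eyebrows, eyes, nose, mouth)."""
    return {au: intensity for au in EMOTION_AUS[emotion]}

print(expression_weights("surprise", 0.8))  # {1: 0.8, 2: 0.8, 5: 0.8, 26: 0.8}
```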

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.39-48 / 2005
  • In traditional 2D animation, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation, assuming that an anticipatory facial expression can be found within the animation itself if it is long enough. Vertices of the face model are classified into a set of components by applying principal component analysis directly to the given key-framed and/or motion-captured facial animation data; the vertices in a single component have similar directions of motion. For each component, the animation is examined to find an anticipation effect for the given facial expression, and the best of these, the one that preserves the topology of the face model, is selected. The best anticipation effect is then automatically blended with the original facial animation while preserving the continuity and the overall duration of the animation. We show experimental results for motion-captured and key-framed facial animations. This work is part of the broader subject of applying the principles of traditional 2D animation to 3D animation; we show how to incorporate anticipation into 3D facial animation, so that animators can add anticipation simply by selecting a facial expression in the animation.
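
The component classification step above can be approximated by describing each vertex by its displacement trajectory over the animation, reducing the descriptors with PCA, and clustering them, so that vertices with similar motion directions share a component. A plain PCA-plus-k-means stand-in (not the paper's exact classifier):

```python
import numpy as np

def group_vertices_by_motion(frames, num_components=4, seed=0):
    """frames: (T, V, 3) vertex positions over the animation.
    Each vertex is described by its displacement trajectory, reduced to 3
    principal coordinates via SVD, then clustered with a fixed-iteration
    k-means so similarly moving vertices share a component label."""
    T, V, _ = frames.shape
    traj = (frames - frames[0]).transpose(1, 0, 2).reshape(V, T * 3)
    traj = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(traj, full_matrices=False)
    desc = traj @ vt[:3].T                  # first 3 principal coordinates
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(V, num_components, replace=False)]
    for _ in range(20):                     # fixed-iteration k-means
        labels = np.linalg.norm(desc[:, None] - centers[None], axis=2).argmin(axis=1)
        for k in range(num_components):
            if np.any(labels == k):
                centers[k] = desc[labels == k].mean(axis=0)
    return labels
```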

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of Information and Communication Convergence Engineering / v.21 no.2 / pp.152-158 / 2023
  • As computer-generated imagery spreads to more industries, realistic facial animation has become an important research topic. The usual solution is to build realistically rendered 3D characters, but characters created by traditional methods always differ from the actual person and demand high staff and time costs. Deepfake technology can achieve realistic faces and replicate facial animation: once the AI model is trained, facial details and animation are produced automatically by the computer, and the model can be reused, reducing the human and time costs of realistic face animation. This study also summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates, using no-reference image quality assessment, that the new scheme obtains higher-quality images and face-exchange effects.
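
The video-to-image step of such a workflow is, at its simplest, dumping frames from the source video for training. The sketch below uses OpenCV and saves lossless PNGs; the paper's contribution lies in how this conversion is done and evaluated, which the sketch does not reproduce:

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir, every_n=1):
    """Save every n-th frame of the video as a lossless PNG and return
    how many frames were written."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                          # end of stream
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```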

Facial Color Control based on Emotion-Color Theory (정서-색채 이론에 기반한 게임 캐릭터의 동적 얼굴 색 제어)

  • Park, Kyu-Ho;Kim, Tae-Yong
    • Journal of Korea Multimedia Society / v.12 no.8 / pp.1128-1141 / 2009
  • Graphical expression keeps improving, spurred by the astonishing growth of the game technology industry, yet users still demand a more natural gaming environment and truer reflections of human emotion. In real life, people can read a person's mood from facial color as well as expression, so interactive facial color in game characters provides a deeper level of reality. In this paper we propose a facial color adaptive technique that combines an emotional model based on human emotion theory, emotional expression patterns using the colors of animation contents, and an emotional reaction speed function based on human personality theory, in contrast to past methods that expressed emotion through blood flow, pulse, or skin temperature. Experiments show that expression through the facial color model based on this adaptive technique, together with the expression patterns of the animation contents, is effective in conveying character emotions. Moreover, the proposed technique can be applied not only to 2D games but to 3D games as well.

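The reaction speed function above can be modeled, for illustration, as exponential smoothing toward the emotion's target color, with the rate constant standing in for personality. The emotion-color pairing and rate below are assumptions, not values from the paper:

```python
import math

def step_facial_color(current, target, reaction_speed, dt):
    """Advance the face color one time step toward the emotion's target
    color; reaction_speed stands in for personality (higher = quicker
    mood changes)."""
    alpha = 1.0 - math.exp(-reaction_speed * dt)
    return tuple(c + alpha * (t - c) for c, t in zip(current, target))

# An angry character's skin tint drifting toward red (illustrative values):
color, ANGER_RED = (0.9, 0.75, 0.7), (1.0, 0.4, 0.4)
for _ in range(10):
    color = step_facial_color(color, ANGER_RED, reaction_speed=2.0, dt=0.1)
```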