• Title/Summary/Keyword: Facial Model


The Study of Face Model and Face Type (사상인 용모분석을 위한 얼굴표준 및 얼굴유형에 대한 연구현황)

  • Pyeon, Young-Beom;Kwak, Chang-Kyu;Yoo, Jung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Kho, Byung-Hee;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine
    • /
    • v.18 no.2
    • /
    • pp.25-33
    • /
    • 2006
  • 1. Objectives: There have recently been attempts to extract the facial characteristics of Sasangin. Three-dimensional modeling is essential for characterizing the Sasangin face, so studies of standard face models and face types are needed. 2. Methods: We reviewed domestic and international research on standard facial modeling and facial types. 3. Results and Conclusions: The Facial Definition Parameters form a very complex set of parameters defined by MPEG-4, comprising 84 feature points and 68 Facial Animation Parameters. Face types have been studied by dividing faces into male and female, Western and Asian, or the four Sasang constitutions (Taeyangin, Taeumin, Soyangin, Soeumin).


Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5546-5559
    • /
    • 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few have been able to handle multi-view face images. In this paper we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network can complete two tasks: AU detection and facial view detection. AU detection is a multi-label problem and facial view detection is a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well. The F1 score on FERA 2017 is 13.1% higher than the baseline. The facial view recognition accuracy is 0.991. This shows that our multi-task, multi-label model could achieve good performance on the two tasks.
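
The two outputs this abstract describes differ in label structure: AU detection allows several labels to be active at once (multi-label, one sigmoid per AU), while view detection picks exactly one label (single-label softmax). A minimal numpy sketch of the two loss heads, with hypothetical AU/view counts, logits, and labels (none of these numbers come from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def multi_task_losses(au_logits, au_labels, view_logits, view_label):
    """Binary cross-entropy over AUs (multi-label) plus
    cross-entropy over views (single-label)."""
    p = sigmoid(au_logits)
    bce = -np.mean(au_labels * np.log(p) + (1 - au_labels) * np.log(1 - p))
    ce = -np.log(softmax(view_logits)[view_label])
    return bce, ce

au_logits = np.array([2.0, -1.0, 0.5])    # hypothetical logits for 3 AUs
au_labels = np.array([1.0, 0.0, 1.0])     # several AUs active simultaneously
view_logits = np.array([0.1, 3.0, -0.5])  # hypothetical logits for 3 head poses
bce, ce = multi_task_losses(au_logits, au_labels, view_logits, 1)
```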

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae;Ko, Jae-Pil
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.11
    • /
    • pp.1465-1473
    • /
    • 2006
  • In this paper, we present an approach for facial expression recognition using Active Shape Models (ASM) and a state-based model in image sequences. Given an image frame, we use ASM to obtain the shape parameter vector of the model while locating facial feature points. We can then obtain the shape parameter vector set for all frames of an image sequence. This vector set is converted by the state-based model into a state vector taking one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the variation regions of an expression sequence differ from those of other expression sequences. In an experiment on the public KCFD database, we demonstrate that the proposed measure slightly outperforms the binary measure: with k = 1, k-NN achieves 89.1% recognition with the proposed measure versus 86.2% with the existing binary measure.
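
The classification step can be illustrated with a toy 1-NN: each sequence is reduced to a per-region state vector, and a similarity measure gives extra weight to regions that actually change. The state encoding, weights, and labels below are invented for illustration and are not the paper's exact measure:

```python
def similarity(a, b):
    """Count positions where two state vectors agree, giving extra weight
    to regions that actually change (nonzero state) -- a sketch of a
    variation-based measure, not the paper's exact definition."""
    score = 0.0
    for x, y in zip(a, b):
        if x == y:
            score += 2.0 if x != 0 else 1.0  # changing regions count more
    return score

def classify_1nn(query, train):
    """train: list of (state_vector, label); returns the label of the
    most similar training sequence (k = 1)."""
    return max(train, key=lambda t: similarity(query, t[0]))[1]

# toy per-region states: 0 = static, 1/2 = two kinds of variation
train = [((0, 2, 1), "smile"), ((1, 0, 0), "surprise"), ((0, 0, 2), "anger")]
print(classify_1nn((1, 2, 1), train))  # → smile
```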


Region-Based Reconstruction Method for Resolution Enhancement of Low-Resolution Facial Image (저해상도 얼굴 영상의 해상도 개선을 위한 영역 기반 복원 방법)

  • Park, Jeong-Seon
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.5
    • /
    • pp.476-486
    • /
    • 2007
  • This paper proposes a resolution enhancement method that can reconstruct high-resolution facial images from single-frame, low-resolution facial images. The proposed method is derived from example-based reconstruction methods and the morphable face model. To improve the performance of example-based reconstruction, we propose a region-based reconstruction method that maintains the characteristics of local facial regions. Also, to apply the capability of the morphable face model to the face resolution enhancement problem, we define an extended morphable face model in which an extended face is composed of a low-resolution face, its interpolated high-resolution face, and the high-resolution equivalent, and is then separated into an extended shape vector and an extended texture vector. The encouraging results show that the proposed methods can improve the performance of face recognition systems, particularly by enhancing the resolution of facial images captured by visual surveillance systems.
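
The example-based part of the method can be sketched as follows: express a low-resolution input as a combination of low-resolution training examples, then apply the same combination to their high-resolution counterparts. The dimensions and random "faces" below are placeholders; the paper additionally applies this per local region and on extended shape/texture vectors:

```python
import numpy as np

# Illustrative dimensions (not the paper's): 5 example faces,
# 8-dim low-res vectors, 32-dim high-res vectors.
rng = np.random.default_rng(0)
n_examples, lr_dim, hr_dim = 5, 8, 32
LR = rng.normal(size=(lr_dim, n_examples))  # low-res training faces (columns)
HR = rng.normal(size=(hr_dim, n_examples))  # matching high-res faces

def reconstruct_hr(lr_input):
    """Find example coefficients that best explain the low-res input
    (least squares), then apply them to the high-res examples."""
    coeffs, *_ = np.linalg.lstsq(LR, lr_input, rcond=None)
    return HR @ coeffs

hr_estimate = reconstruct_hr(LR[:, 0])  # should recover the first example
```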

Use of 3D Printing Model for the Management of Fibrous Dysplasia: Preliminary Case Study

  • Choi, Jong-Woo;Jeong, Woo Shik
    • Journal of International Society for Simulation Surgery
    • /
    • v.3 no.1
    • /
    • pp.36-38
    • /
    • 2016
  • Fibrous dysplasia is a relatively rare disease, but its management can be quite challenging. Because it is not a malignant tumor, preserving the facial contour and its various functions is important in treatment planning. Until now, facial bone reconstruction with autogenous bone has been the standard. Although autogenous bone is ideal for facial bone reconstruction, donor-site morbidity is an unavoidable problem in many cases. Various types of allogenic and alloplastic materials have also been used; however, facial bone reconstruction with many alloplastic materials has produced numerous complications, including infection, exposure, and delayed wound healing. 3D printing techniques have evolved rapidly, and 3D-printed titanium implants have recently become possible. The aim of this trial was to restore the original maxillary anatomy as closely as possible using a 3D printing model, based on mirrored three-dimensional CT images and computer simulation. Preoperative computed tomography (CT) data were processed for the patient and a rapid prototyping (RP) model was produced. At the same time, the uninjured side was mirrored and superimposed onto the traumatized side to create a mirror-image RP model. We then molded a titanium mesh to reconstruct the three-dimensional maxillary structure during the operation. This prefabricated titanium-mesh implant was inserted onto the defective maxilla and fixed. The 3D printing technique with titanium, based on computer simulation, proved successful in this patient. An individualized approach for each patient could be an ideal way to restore the facial bone.
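
The mirroring step, superimposing the uninjured side onto the defect, amounts to reflecting coordinates across the midsagittal plane. A toy sketch with made-up landmark coordinates (the plane position and points are assumptions, not patient data):

```python
import numpy as np

def mirror_across_midline(points, midline_x):
    """Reflect 3D landmark coordinates across the midsagittal plane
    x = midline_x, as when mirroring the uninjured side onto the defect.
    Coordinates here are illustrative, not clinical data."""
    mirrored = points.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]  # only x flips
    return mirrored

pts = np.array([[10.0, 4.0, 2.0],   # hypothetical uninjured-side landmarks (mm)
                [12.0, 5.0, 1.0]])
mirrored = mirror_across_midline(pts, 0.0)
```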

Prediction Model for Hypertriglyceridemia Based on Naive Bayes Using Facial Characteristics (안면 정보를 이용한 나이브 베이즈 기반 고중성지방혈증 예측 모델)

  • Lee, Juwon;Lee, Bum Ju
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.11
    • /
    • pp.433-440
    • /
    • 2019
  • Recently, machine learning and data mining have been used for the prediction and diagnosis of many diseases. Chronic diseases account for about 80% of total mortality and are gradually increasing. In previous studies, predictive models for chronic diseases used data such as blood glucose, blood pressure, and insulin levels. This paper, to our knowledge the first such study, verifies the relationship between dyslipidemia and facial characteristics and develops a predictive model using machine learning based on facial characteristics. Clinical data, including hypertriglyceridemia status and facial characteristics, were obtained from 5,390 adult Korean men; hypertriglyceridemia is a measure of dyslipidemia. This study identifies the facial characteristics most highly correlated with hypertriglyceridemia. FD_43_143_aD (p<0.0001, area under the receiver operating characteristic curve (AUC) = 0.652), the distance between mandibular points, is the best single indicator. The model based on this result obtained an AUC of 0.662. These results will provide a basis for predicting various diseases from facial characteristics alone in the screening stage of disease epidemiology and public health.
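
A Gaussian Naive Bayes decision on a single facial-distance feature, as with the FD_43_143_aD indicator, can be sketched as follows; the class means, variances, and priors below are invented for illustration and are not the study's clinical values:

```python
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def naive_bayes_posterior(x, stats, priors):
    """P(class | feature) for one facial-distance feature.
    stats: {class: (mean, variance)} -- synthetic numbers, not study data."""
    joint = {c: priors[c] * gaussian_pdf(x, *stats[c]) for c in stats}
    z = sum(joint.values())
    return {c: j / z for c, j in joint.items()}

stats = {"normal": (120.0, 36.0), "hyperTG": (128.0, 36.0)}  # mm, hypothetical
priors = {"normal": 0.8, "hyperTG": 0.2}                     # class prevalence
post = naive_bayes_posterior(127.0, stats, priors)
```

Note that even though 127 mm is closer to the hyperTG mean, the higher prior for the normal class can still dominate the posterior; this is exactly the prior/likelihood trade-off Naive Bayes encodes.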

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.39-48
    • /
    • 2005
  • According to traditional 2D animation techniques, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis directly from given key-framed and/or motion-captured facial animation data. The vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression. One of these anticipation effects is selected as the best anticipation effect, which preserves the topology of the face model. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for given motion-captured and key-framed facial animations. This paper addresses part of a broader subject: applying the principles of traditional 2D animation techniques to 3D animation. We show how to incorporate anticipation into 3D facial animation; animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
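
The component-extraction idea, grouping vertices whose motion directions are similar, can be sketched by taking each vertex's principal motion direction and clustering nearly parallel ones. This is a simplified stand-in for the paper's PCA-based classification; the trajectories and threshold are illustrative:

```python
import numpy as np

def dominant_direction(traj):
    """First principal direction of one vertex's displacement
    trajectory (frames x 2), via SVD of the centered trajectory."""
    d = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return vt[0]

def group_vertices(trajs, threshold=0.9):
    """Greedily group vertices whose dominant motion directions are
    nearly parallel (|cosine| above threshold)."""
    groups = []
    for i, t in enumerate(trajs):
        v = dominant_direction(t)
        for g in groups:
            if abs(v @ g["dir"]) > threshold:  # nearly parallel motion
                g["ids"].append(i)
                break
        else:
            groups.append({"dir": v, "ids": [i]})
    return groups

t = np.linspace(0, 1, 10)[:, None]
trajs = [t * np.array([1.0, 0.0]),    # vertex moving along x
         t * np.array([0.98, 0.02]),  # nearly the same direction
         t * np.array([0.0, 1.0])]    # vertex moving along y
groups = group_vertices(trajs)        # two components expected
```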

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.3
    • /
    • pp.284-291
    • /
    • 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions using PCA and an expression HMM (EHMM), a Hidden Markov Model (HMM) approach that represents the spatial information and temporal dynamics of time-varying visual expression patterns. Because low-level spatial feature extraction is fused with temporal analysis, this unified spatio-temporal HMM approach to common detection, tracking, and classification problems is effective. The proposed recognition framework applies posterior probability between current visual observations and previous visual evidence. Consequently, the framework shows accurate and robust recognition results on simple expressions as well as the six basic facial expression patterns. The method allows us to perform a set of important tasks such as facial expression recognition, HCI, and key-frame extraction.
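
The decision rule this abstract describes, comparing posterior probabilities of candidate expression HMMs given the observation sequence, can be sketched with the forward algorithm over toy discrete observations. The two HMMs below use invented parameters, not learned EHMMs:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transitions, B: emission probs)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return np.log(alpha.sum())

# Two toy expression HMMs with hypothetical parameters.
hmms = {
    "smile":    (np.array([1.0, 0.0]),
                 np.array([[0.7, 0.3], [0.0, 1.0]]),
                 np.array([[0.9, 0.1], [0.2, 0.8]])),
    "surprise": (np.array([1.0, 0.0]),
                 np.array([[0.7, 0.3], [0.0, 1.0]]),
                 np.array([[0.1, 0.9], [0.8, 0.2]])),
}

def classify(obs, priors):
    """Pick the expression with the highest posterior (prior x likelihood)."""
    logs = {k: np.log(priors[k]) + forward_loglik(obs, *p)
            for k, p in hmms.items()}
    return max(logs, key=logs.get)

print(classify([0, 0, 1, 1], {"smile": 0.5, "surprise": 0.5}))  # → smile
```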

ADENOVIRAL VECTOR MEDIATED IN VIVO GENE TRANSFER OF BDNF PROMOTE FUNCTIONAL RECOVERY AFTER FACIAL NERVE CRUSH INJURY (안면신경 압박손상 후 Adenovirus 매개 BDNF 유전자 전달을 통한 신경손상 회복에 관한 연구)

  • Yang, Byoung-Eun;Lee, Jong-Ho
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.32 no.4
    • /
    • pp.308-316
    • /
    • 2006
  • Objectives: Despite considerable advances in technique, experience, and skill, the precise place of surgery in the treatment of facial nerve injury remains uncertain. We designed a facial nerve crush injury model in rats and evaluated the recovery of the crushed nerve, the most common type of facial nerve injury, using adenovirus-mediated in vivo gene transfer of brain-derived neurotrophic factor (BDNF). Materials and methods: In 48 Sprague-Dawley rats, we created a facial nerve crush injury at the main trunk before the furcation and injected 10^11 pfu of adenoviral BDNF in the experimental group (BDNF adenoviral injection group; ad-BDNF) and 3 μl of saline in the control group (saline injection group; saline). After a regeneration period of 10 to 40 days, nerve regeneration was evaluated with functional tests (vibrissae and ocular movement), electrophysiologic studies (threshold, peak voltage, conduction velocity), and a histomorphometric study of axon density. Results: Vibrissae and ocular movement, threshold, and conduction velocity improved over time in both groups; however, axon density increased significantly only in the experimental group. Functional tests at 10 and 20 days showed no difference between the experimental and control groups. Vibrissae movement, threshold, conduction velocity, and axon density at 30 days revealed that the quality of regeneration in the experimental group was significantly superior to that of the control group. Conclusion: In general, there was a tendency toward nerve regeneration in the experimental group (BDNF-adenovirus injection group) over the 40 days, and functional recovery after facial nerve crush was successfully detected at 30 days postoperatively.

Proportions of the aesthetic African-Caribbean face: idealized ratios, comparison with the golden proportion and perceptions of attractiveness

  • Mantelakis, Angelos;Iosifidis, Michalis;Al-Bitar, Zaid B.;Antoniadis, Vyron;Wertheim, David;Garagiola, Umberto;Naini, Farhad B.
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.40
    • /
    • pp.20.1-20.10
    • /
    • 2018
  • Background: In the absence of clear guidelines for facial aesthetic surgery, most surgeons rely on expert intuitive judgement when planning aesthetic and reconstructive surgery. One of the most famous theories regarding "ideal" facial proportions is the golden proportion. However, there are conflicting opinions as to whether it can be used to assess facial attractiveness. The aim of this investigation was to assess the facial ratios of professional black models and to compare them with the golden proportion. Methods: Forty photographs of male and female professional black models were collected. Observers were asked to assign a score from 1 to 10 (1 = not very attractive, 10 = very attractive). A total of 287 responses were analysed for grading behaviour according to various demographic factors by two groups of observers. The best-graded photographs were compared with the least well-graded photographs to identify any differences in their facial ratios. The models' facial ratios were calculated and compared with the golden proportion. Results: Differences in grading behaviour were observed between the two assessment groups. Only one of the 12 facial ratios was not significantly different from the golden proportion. Conclusions: Only one facial ratio was observed to be similar to the golden proportion in professional model facial photographs. No correlation was found between the facial ratios of professional black models and the golden proportion. It is proposed that treating each ratio individually is a better method to guide future practice.
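
Comparing a measured facial ratio with the golden proportion reduces to computing the ratio of two distances and its deviation from φ = (1 + √5)/2 ≈ 1.618. The distances below are hypothetical, not measurements from the study:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden proportion, ~1.618

def ratio_deviation(length_a, length_b):
    """Facial ratio (larger/smaller) and its relative deviation from
    the golden proportion. Inputs are illustrative distances in mm."""
    ratio = max(length_a, length_b) / min(length_a, length_b)
    return ratio, abs(ratio - PHI) / PHI

# e.g., two hypothetical vertical facial distances (mm)
ratio, dev = ratio_deviation(63.0, 105.0)
```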