• Title/Summary/Keyword: Facial Model


A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering / v.21 no.2 / pp.152-158 / 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The current solution is to create realistically rendered 3D characters, but characters created by traditional methods always differ from the actual person and require high staff and time costs. Deepfake technology can achieve realistic faces and replicate facial animation. The facial details and animations are generated automatically by the computer once the AI model is trained, and the model can be reused, thus reducing the human and time costs of realistic facial animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates, using no-reference image quality assessment, that the new scheme obtains higher-quality images and better face-swap results.
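The video-to-image step above can be sketched in plain Python: extract candidate frames, score each with a no-reference sharpness proxy, and keep the best as training images. The Laplacian-variance score and both function names are illustrative assumptions, not the paper's actual workflow or NR-IQA metric.

```python
def laplacian_variance(frame):
    """Variance of a 3x3 Laplacian response over a grayscale frame:
    a simple no-reference sharpness proxy (a stand-in for the paper's
    NR-IQA metric, which the abstract does not specify)."""
    h, w = len(frame), len(frame[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (frame[y - 1][x] + frame[y + 1][x]
                   + frame[y][x - 1] + frame[y][x + 1]
                   - 4 * frame[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def select_training_frames(frames, top_k=10):
    """Rank frames extracted from a video by sharpness and keep the
    best ones as training images for the face-swap model."""
    return sorted(frames, key=laplacian_variance, reverse=True)[:top_k]
```

A blurred or flat frame scores near zero, so ranking by this proxy filters out low-quality frames before training.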

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems / v.19 no.3 / pp.323-333 / 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching-quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone was proposed to achieve more efficient expression recognition and classification. The residual layer acquires and captures the information features of the input image, and the masking layer assigns weight coefficients to the different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model is optimized in two aspects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. The simulation results show that the area under the ROC curve of the proposed method remained above 0.9995, so different expressions can be distinguished accurately. The precision was 75.98%, indicating excellent performance of the facial expression recognition model.
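The two layer types the abstract names can be illustrated with a toy, list-based sketch: a residual layer adds a transformed copy of the input back through an identity shortcut, and a masking layer re-weights features with coefficients in (0, 1). This is a minimal sketch of the general pattern only, not the paper's network.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def residual_layer(x, weights):
    """Residual layer: a transformed copy of the input is added back
    to the input through the identity shortcut."""
    transformed = [sigmoid(w * xi) for w, xi in zip(weights, x)]
    return [xi + ti for xi, ti in zip(x, transformed)]

def masking_layer(features, mask_logits):
    """Masking layer: per-feature weight coefficients in (0, 1)
    re-weight the information features."""
    return [sigmoid(m) * f for m, f in zip(mask_logits, features)]
```

The shortcut keeps gradients flowing even when the transform contributes little, which is the motivation for residual backbones.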

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate facial feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
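The k-NN step in the experiments can be sketched as a majority vote over the nearest training feature vectors. The toy 2D vectors below are placeholders standing in for the tracked ASM-LK features; the labels mirror two of the paper's three emotions.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k training vectors nearest to the
    query (squared Euclidean distance). `train` is a list of
    (feature_vector, label) pairs."""
    nearest = sorted(train,
                     key=lambda item: sum((a - b) ** 2
                                          for a, b in zip(item[0], query)))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```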

The Facial Area Extraction Using Multi-Channel Skin Color Model and The Facial Recognition Using Efficient Feature Vectors (Multi-Channel 피부색 모델을 이용한 얼굴영역추출과 효율적인 특징벡터를 이용한 얼굴 인식)

  • Choi Gwang-Mi;Kim Hyeong-Gyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1513-1517 / 2005
  • In this paper, we use a multi-channel skin color model based on Hue, Cb, and Cg, which removes the brightness component while using the red, green, and blue channels together, to model facial skin color more effectively for extracting the facial area. We used the efficient higher-order local autocorrelation (HOLA) function with 26 feature vectors to obtain the feature vectors of the facial area, together with edge images extracted using the Haar wavelet from the segmented facial region. The calculated feature vectors are used as data for facial recognition through neural network learning. Simulations demonstrate that the proposed algorithm improves both recognition rate and speed.
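A multi-channel skin test of this kind can be sketched as per-pixel thresholds in hue (which discards brightness) and BT.601 chroma-blue Cb; the paper additionally uses Cg, and the threshold ranges below are illustrative choices, not the paper's values.

```python
import colorsys

def skin_mask(pixels, hue_range=(0.0, 0.14), cb_range=(77, 127)):
    """Classify each RGB pixel as skin if it falls inside the given
    hue and Cb windows. Cb follows the BT.601 conversion; the window
    bounds are illustrative, not the paper's."""
    mask = []
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        mask.append(hue_range[0] <= h <= hue_range[1]
                    and cb_range[0] <= cb <= cb_range[1])
    return mask
```

Requiring agreement across channels is what makes the model robust: a green or blue pixel can pass one test but rarely both.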

The Study of Skeleton System for Facial Expression Animation (Skeleton System으로 운용되는 얼굴표정 애니메이션에 관한 연구)

  • Oh, Seong-Suk
    • Journal of Korea Game Society / v.8 no.2 / pp.47-55 / 2008
  • This paper introduces SSFE (Skeleton System for Facial Expression), which deforms facial expressions by rigging skeletons that perform the same functions as 14 facial muscles defined in anatomy. A three-dimensional animation tool (MAYA 8.5) is used to build the SSFE, which deforms mesh models to implement facial expressions around the eyes, nose, and mouth. The SSFE is highly reusable across diverse human mesh models; this reusability can be understood as OSMU (One Source Multi Use), a three-dimensional animation production method, and offers a good alternative technique for reducing animation production budgets. It can also be applied in three-dimensional animation industries such as virtual reality and games.


Fake News Detection on Social Media using Video Information: Focused on YouTube (영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로)

  • Chang, Yoon Ho;Choi, Byoung Gu
    • The Journal of Information Systems / v.32 no.2 / pp.87-108 / 2023
  • Purpose The main purpose of this study is to improve fake news detection performance by using video information, overcoming the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trend. Design/methodology/approach This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of the related information (e.g. scripts; video metadata; facial expression; scripts and video metadata; scripts and facial expression; and scripts, video metadata, and facial expression) were used as input for training and evaluation. The input data were analyzed using six models, such as support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings The results showed that the AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expression) were the highest for the logistic regression, naïve Bayes, and deep neural network models. This result implies that fake news detection can be improved by using video information (video metadata and facial expression). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
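The AUC metric used to compare the classifiers can be computed directly as a rank statistic, without any library: it is the probability that a randomly drawn positive (fake) item is scored above a randomly drawn negative (real) item, counting ties as half.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic.
    labels: 1 for positive (fake), 0 for negative (real)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every fake item outscores every real item; 0.5 is chance-level ranking.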

The Effects of Perceived Facial Attractiveness and Appropriateness of Clothing on the Task Performance Evaluation mediated by Likability and the Trait Evaluation (지각된 얼굴 매력성과 의복 적절성이 호감도, 특질 판단을 매개하여 과제 수행능력 판단에 미치는 영향)

  • 정명선;김재숙
    • Journal of the Korean Society of Costume / v.51 no.8 / pp.77-91 / 2001
  • The purpose of this study was to investigate the effects of perceived facial attractiveness and appropriateness of clothing on the evaluation of a target person's task performance, mediated by the subjects' likability toward and trait evaluation of the target person. The facial attractiveness of female university students was used as the index of physical attractiveness. Three levels of facial attractiveness were manipulated based on judgments by 30 female university students, and four types of clothes perceived as appropriate for the two assumed situations were selected by female university students. Three female faces of high, medium, and low attractiveness were composited onto the same body, dressed in each of the four types of clothing, using a CAD system, creating a total of 12 stimulus persons. The experiment used a $3\times4\times2$ randomized factorial design: three levels of facial attractiveness (high, medium, low), four types of attire (formal-masculine, formal-feminine, casual-masculine, casual-feminine), and two contexts in which the perceptions occurred (job interview, dating). The subjects were 524 university students (262 male, 262 female) from three universities in Kwangju, Korea. The data were analyzed using factor analysis, descriptive statistics, regression, and path analysis. The results were as follows: 1. In the bogus job interview, the direct effect of perceived facial attractiveness on task performance evaluation was .175, and the indirect effect mediated by likability and trait evaluation was .285 in the path analysis model. The direct effect of perceived appropriateness of clothing on task performance evaluation was .111, and the indirect effect mediated by likability only was .0564. 2. In the dating situation, the direct effect of perceived facial attractiveness on task performance evaluation was .355, and the indirect effect mediated by likability and trait evaluation was .188. The direct effect of perceived appropriateness of clothing on task performance evaluation was .108, and the indirect effect mediated by likability and trait evaluation was .060.
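The effect sizes reported above follow the standard path-analysis decomposition: the total effect is the direct path plus the indirect paths through the mediators, and each indirect path is the product of its constituent coefficients. A minimal sketch (both helper functions are illustrative, not from the paper) using the abstract's job-interview figures:

```python
def indirect_effect(*path_coefficients):
    """An indirect effect is the product of the standardized
    coefficients along one mediated path
    (e.g. attractiveness -> likability -> evaluation)."""
    product = 1.0
    for c in path_coefficients:
        product *= c
    return product

def total_effect(direct, *indirects):
    """Total effect = direct effect + sum of indirect effects."""
    return direct + sum(indirects)

# Job-interview figures from the abstract:
attractiveness_total = total_effect(0.175, 0.285)  # direct + mediated
clothing_total = total_effect(0.111, 0.0564)
```

By this decomposition, attractiveness acted mostly through the mediators in the interview context (.285 indirect vs .175 direct), while in the dating context the direct path dominated.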


Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an active shape model (ASM) and Lucas-Kanade (LK) optical flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vectors at each landmark based on the center pixels of the motion vector window. The facial emotion features are modeled by the combination of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique such as a Bayesian classifier. We also extract the optimal emotional features, those with high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis, in order to improve the efficiency and accuracy of the emotional feature extraction process.
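The probabilistic estimation step can be sketched as a Gaussian Bayes rule over a single flow-derived feature: choose the emotional state that maximizes prior times likelihood. The class statistics in the test are made-up placeholders, not values from the paper.

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a one-dimensional Gaussian."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_classify(feature, class_stats, priors):
    """Pick the emotional state maximizing prior * Gaussian likelihood
    of the flow feature. class_stats maps label -> (mean, variance)."""
    best_label, best_p = None, -1.0
    for label, (mean, var) in class_stats.items():
        p = priors[label] * gaussian_pdf(feature, mean, var)
        if p > best_p:
            best_label, best_p = label, p
    return best_label
```

With equal priors this reduces to maximum likelihood; unequal priors shift the decision boundary toward the rarer class, which is the point of the Bayesian formulation.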

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from 2D image sequences for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
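Factorization-based recovery of this kind starts by stacking the registered 2D feature tracks into a measurement matrix, which the SVD then decomposes into motion and shape; the sketch below covers only that first step, and the function name is an illustrative assumption.

```python
def build_measurement_matrix(tracks):
    """Stack 2D feature tracks from F frames into the 2F x P
    measurement matrix that factorization methods decompose.
    Per-frame centroids are subtracted first, removing the
    translational component of the motion."""
    rows = []
    for frame in tracks:                      # frame: list of (x, y)
        n = len(frame)
        cx = sum(x for x, _ in frame) / n
        cy = sum(y for _, y in frame) / n
        rows.append([x - cx for x, _ in frame])
        rows.append([y - cy for _, y in frame])
    return rows
```

Under the paraperspective model this centered matrix has (at most) rank 3, which is what lets the SVD split it into camera motion and 3D shape factors.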

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance between characteristic curves. To estimate the performance of the implemented system, we evaluated the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expressions and emotions.
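Steps (2) and (3) can be sketched directly: a Bezier curve evaluated by de Casteljau's algorithm models a contour, and the symmetric Hausdorff distance compares two such contours. The control points in the test are toy values, not eye/mouth data from the paper.

```python
import math

def bezier_point(ctrl, t):
    """De Casteljau evaluation of a Bezier curve at parameter t in
    [0, 1]; such curves model the eye-map and mouth-map contours."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets, used to
    match a sampled contour against expression templates."""
    def one_sided(s, t):
        return max(min(math.dist(p, q) for q in t) for p in s)
    return max(one_sided(a, b), one_sided(b, a))
```

Sampling each Bezier curve at a few parameter values turns a contour into a point set, which is what the Hausdorff comparison operates on.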
