• Title/Summary/Keyword: Facial motion


Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim Sung-Ho
    • Journal of Internet Computing and Services
    • /
    • v.6 no.1
    • /
    • pp.85-93
    • /
    • 2005
  • This paper describes how to distribute a vast quantity of high-dimensional facial expression data over a suitable space and produce facial expression animations by selecting expressions while the animator navigates this space in real time. We constructed the facial spaces from about 2,400 facial expression frames. Each expression is represented by a state vector derived from the matrix of distances between facial markers. Because the expression space is a manifold, the distance between two expressions is approximated as follows: when the linear distance between two expressions is shorter than a chosen threshold, the two expressions are considered adjacent, and their linear distance is taken as their manifold distance. Once the distances between adjacent expressions are fixed, Floyd's algorithm is applied over these adjacency links to yield the shortest (manifold) distance between any two expressions. We used the CCA (Curvilinear Component Analysis) technique to project the multi-dimensional expression space into two dimensions. While animators navigate this two-dimensional space, they produce facial animations in real time through the user interface.

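The manifold-distance construction described in the abstract above — treating expression pairs whose linear distance falls below a threshold as adjacent, then running Floyd's algorithm over the adjacency graph — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the state-vector representation, threshold, and function name are hypothetical.

```python
import numpy as np

def manifold_distances(frames, threshold):
    """All-pairs manifold distances between expression frames.

    frames: (n, d) array of expression state vectors.
    Pairs closer than `threshold` (Euclidean) are treated as adjacent;
    Floyd-Warshall then yields shortest-path (manifold) distances.
    """
    n = len(frames)
    # Pairwise Euclidean distances between state vectors.
    diff = frames[:, None, :] - frames[None, :, :]
    lin = np.sqrt((diff ** 2).sum(axis=2))
    # Keep only adjacent edges; all other pairs start at infinity.
    dist = np.where(lin < threshold, lin, np.inf)
    np.fill_diagonal(dist, 0.0)
    # Floyd-Warshall relaxation over every intermediate node k.
    for k in range(n):
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    return dist
```

For roughly 2,400 frames the O(n^3) relaxation is still feasible as an offline preprocessing step, which matches its use here for building the space before interactive navigation.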

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
Journal of Information and Communication Convergence Engineering
    • /
    • v.11 no.3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face and deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track the person's head in 3D and predict the 2D facial landmark positions via projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted facial expression recognition experiments using both frame-based and sequence-based approaches. Our method achieves a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
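The landmark-prediction step in the abstract above — projecting the updated 3D face model to predict 2D landmark positions — can be illustrated with a standard pinhole camera projection. This sketch assumes a head pose (rotation R, translation t) from the 3D tracker and simple intrinsics; the function name and parameters are assumptions, not the authors' code.

```python
import numpy as np

def predict_landmarks_2d(points_3d, R, t, focal, center):
    """Project 3D face-model landmarks into the image with a pinhole camera.

    points_3d: (n, 3) landmark coordinates in the model frame.
    R, t: head pose (3x3 rotation, 3-vector translation).
    focal, center: camera intrinsics (focal length, principal point (cx, cy)).
    """
    cam = points_3d @ R.T + t              # model frame -> camera frame
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)        # (n, 2) predicted pixel positions
```

In a tracking loop of this kind, the 2D tracker then refines these predicted positions, and the refined landmarks feed back into the 3D model update.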

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.1
    • /
    • pp.117-122
    • /
    • 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. The FCP plays an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract the FCP with an expensive motion capture device or with markers, which is inconvenient and places a psychological burden on the subject. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve conventional algorithms for detecting the facial components that are the basis of FCP extraction.

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research
    • /
    • v.20 no.4
    • /
    • pp.419-425
    • /
    • 1999
  • In this paper, we present a real-time algorithm for recognizing facial expressions and gestures in gray-level image sequences. A combination of template matching and knowledge-based geometrical analysis of the face is used to locate the face area in the input image, and an optical flow method is applied to this area to recognize facial expressions. We also propose a hand area detection algorithm that separates the hand from the background image by analyzing image entropy; with a modified version of this algorithm, hand gestures can be recognized. The experiments showed that the suggested algorithm recognizes facial expressions and hand gestures well by detecting the dominant motion area in the images, without restrictions imposed by the background.

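The entropy cue mentioned in the abstract above for separating the hand area from the background can be illustrated with the Shannon entropy of a grayscale region's intensity histogram: textured or varied regions score high, flat background scores low. The function name and bin settings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def region_entropy(gray_region, bins=256):
    """Shannon entropy (bits) of a grayscale image region.

    A flat background region has near-zero entropy, while a textured
    region such as a hand yields a higher value -- the cue used to
    separate candidate hand areas from the background.
    """
    hist, _ = np.histogram(gray_region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (log of 0)
    return float(-(p * np.log2(p)).sum())
```

Thresholding this value over a sliding window would give a coarse map of candidate regions to pass to the gesture recognizer.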

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.512-517
    • /
    • 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical flow-based feature extraction and analysis method for analyzing the emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract LK optical flow vectors at each landmark, centered on the pixels of the motion vector window. The facial emotion features are modelled as combinations of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic technique such as a Bayesian classifier. We also extract optimal emotional features, those with high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis to improve the efficiency and accuracy of the emotional feature extraction process.
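The LK optical flow extraction at a landmark-centered window, as described in the abstract above, can be sketched with the textbook least-squares solution of the brightness-constancy equations over one window. This is a standard single-window LK step under a small-motion assumption, not the authors' code; the window extraction and function name are hypothetical.

```python
import numpy as np

def lk_flow_at_window(I0, I1):
    """Lucas-Kanade flow for one small window centred on a landmark.

    I0, I1: grayscale windows (same shape) from consecutive frames.
    Solves the 2x2 normal equations of It + Ix*vx + Iy*vy = 0
    accumulated over the window, returning the flow (vx, vy).
    """
    Iy, Ix = np.gradient(I0.astype(float))   # spatial gradients
    It = I1.astype(float) - I0.astype(float) # temporal gradient
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)             # (vx, vy)
```

In practice the 2x2 matrix can be near-singular on low-texture windows, which is one reason landmark-anchored windows (with eyes, brows, mouth corners in view) are good places to evaluate the flow.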

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.53-60
    • /
    • 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue involved. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varied lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is performed in Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

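The Kalman filter tracking of a facial feature under the locally smooth motion assumption, as in the abstract above, can be sketched with a constant-velocity model: the predict step gives the search location for Gabor-space detection, and the update step fuses the detected position. The state layout, class name, and noise values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class FeatureKalman:
    """Constant-velocity Kalman filter for one facial feature position.

    State is [x, y, vx, vy]; measurements are detected (x, y) positions.
    """
    def __init__(self, x, y, q=1e-2, r=1.0):
        self.s = np.array([x, y, 0.0, 0.0])   # initial state
        self.P = np.eye(4)                    # state covariance
        self.F = np.array([[1, 0, 1, 0],      # constant-velocity dynamics
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],      # we observe position only
                           [0, 1, 0, 0]], float)
        self.Q = q * np.eye(4)                # process noise
        self.R = r * np.eye(2)                # measurement noise

    def predict(self):
        """Propagate the state; returns the predicted search location."""
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        """Fuse a detected position z; returns the filtered position."""
        y = np.asarray(z, float) - self.H @ self.s          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]
```

The filtered estimate lies between the model prediction and the raw detection, which is what makes the tracker robust to single-frame detection noise.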

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue involved. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varied lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is performed in Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.


Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society
    • /
    • v.4 no.4
    • /
    • pp.569-578
    • /
    • 2003
  • This paper presents a method in which the user produces a real-time facial animation by navigating a space of facial expressions built from a large number of captured expressions. The core of the method is how to define the distance between facial expressions, how to use that distance to distribute the expressions in an intuitive space, and a user interface for generating real-time facial expression animation in this space. We created the search space from about 2,400 captured facial expression frames; as the user travels freely through the space, the expressions located on the path are displayed in sequence. To distribute the captured expressions visually, we calculate the distance between every pair of frames: adjacent frames are linked, Floyd's algorithm yields the all-pairs shortest paths, and these path lengths serve as manifold distances. The frames are then laid out in a 2D intuitive space by applying multi-dimensional scaling to the manifold distances, preserving the original inter-frame distances. A major advantage of the presented method is that the user can navigate the intuitive space freely and without restriction, because facial expression frames always exist along any navigation path. It is also efficient: using the easy-to-use interface, the user can check the generated animation and regenerate it in real time as desired.

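The multi-dimensional scaling step above — laying out the frames in 2D so that their embedded distances approximate the manifold distances — can be sketched with classical MDS: double-center the squared distance matrix and take the top eigenvectors. This is the standard formulation, assumed here as an illustration rather than the paper's specific MDS variant.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical multi-dimensional scaling of a distance matrix.

    D: (n, n) symmetric matrix of (manifold) distances.
    Returns an (n, dims) embedding whose pairwise Euclidean
    distances approximate D.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]       # keep the top `dims` components
    L = np.sqrt(np.maximum(w[idx], 0.0))   # clip small negative eigenvalues
    return V[:, idx] * L
```

Applied to the all-pairs manifold distance matrix, the result is the 2D intuitive space the user navigates; negative eigenvalues (which arise when the manifold distances are not exactly Euclidean) are clipped to zero.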

Design and Implementation of Walking Motions Applied with Player's Emotion Factors According to Variable Statistics of RPG Game Character (RPG게임캐릭터의 능력치변화량에 따라 감정요소가 적용된 걷기동작 구현)

  • Kang, Hyun-Ah;Kim, Mi-Jin
    • The Journal of the Korea Contents Association
    • /
    • v.7 no.5
    • /
    • pp.63-71
    • /
    • 2007
  • Several commercial games have adopted techniques for changing a character's facial expression, and design methods that foster the player's empathy with a game character are expected to diversify. In this paper, as such a design method, we create walking motions for an RPG game character that reflect human-emotion factors driven by changes in the character's statistics. After analyzing the emotions in human facial expressions and emotion-driven walking motions from examples in character animation theory, we divide walking motion with human-emotion factors into 8 types according to its relationship to the statistics factors of the RPG genre. These are then applied to a knight character, the RPG character type whose physical features are most similar to a human's, to produce walking motions that vary with the statistics. As the player controls a game character endowed with these human-emotion factors, the player's empathy with the character is expected to grow, along with the level of immersion in game play.

The Effect of Korean Medicine Treatments on Facial Asymmetry: A Case Report (안면 비대칭에 대한 한의학적 치료의 효과: 증례보고)

  • Kwon, Chan-Young;Lee, Hoon-Hui;Im, Yong-Seok
    • Korean Journal of Acupuncture
    • /
    • v.34 no.3
    • /
    • pp.179-184
    • /
    • 2017
  • Objectives : Despite the applicability of Korean Medicine (KM) treatments to facial asymmetry, no relevant study has been reported. In this case report, we describe the effect and safety of KM treatments for facial asymmetry caused by mandibular lateral displacement. Methods : Three patients suffering from facial asymmetry received twelve KM treatment sessions composed of Motion Style Treatment (MST), the Yinyang Balance Appliance (YBA) of Functional Cerebrospinal Therapy (FCST), and Instrument Assisted Soft Tissue Mobilization (IASTM). Photographs of each patient were taken before and after treatment, and four primary reference lines were assessed before and after treatment. Results : All subjects showed improvement in the photographs after the KM treatments, although no statistical significance was observed. Conclusions : This case report is the first to introduce the effect of KM treatments on facial asymmetry. Further well-designed, randomized, placebo-controlled trials are needed to verify these results.