• Title/Summary/Keyword: Facial motion


3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we propose a method for synthesizing a 3D face from the motion of 2D facial images. Motion is estimated with an optical-flow-based method: parameterized motion vectors are extracted from the optical flow between adjacent frames of the image sequence to estimate the facial features and facial motion in 2D. The parameters of these motion vectors are then combined to estimate the facial motion information. We define a parameterized vector model for each facial region; our motion vector models cover the eye area, the lip-eyebrow area, and the face area. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the animated 3D face.
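The optical-flow step described above can be sketched with a least-squares (Lucas-Kanade-style) solve over a single image window; this is a generic illustration of optical-flow motion estimation, not the authors' parameterized vector models:

```python
import numpy as np

def lucas_kanade_window(prev, curr):
    """Estimate a single (dx, dy) translation for an image window
    by solving the least-squares optical-flow equations."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    # Spatial gradients (central differences) and temporal difference
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dx, dy)

# A frame shifted one pixel to the right should yield dx close to 1, dy = 0
x = np.linspace(0, 2 * np.pi, 32)
frame0 = np.sin(x)[None, :] * np.ones((32, 1))
frame1 = np.roll(frame0, 1, axis=1)
dx, dy = lucas_kanade_window(frame0, frame1)
```

In practice the flow would be computed per feature window between adjacent frames and the resulting vectors combined into the region models.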


Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.667-674
    • /
    • 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a feature-based adaptive motion energy analysis method for recognizing facial expressions. Our method optimizes the information-gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal set of facial features, suggested by the information-gain heuristics of the ID3 tree, to represent the geometric face model. Feature extraction proceeds as follows: features are first detected and then carefully "selected." Feature "selection" means differentiating features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1,728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems encountered by previous methods. First of all, it is simple yet effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; instead, it exploits the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, our method gives reliable results, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
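The motion-energy thresholding and tree-based classification described above can be illustrated with a toy sketch; the feature set, threshold value, and decision rules below are invented for illustration and are not the paper's ID3 tree:

```python
def motion_energy(neutral, expressed):
    """Squared displacement per tracked feature point between
    the neutral face and the expressed face."""
    return [(nx - ex) ** 2 + (ny - ey) ** 2
            for (nx, ny), (ex, ey) in zip(neutral, expressed)]

def classify(flags):
    """Toy ID3-style decision path over binary 'moved' flags
    (mouth, brow, eye) -- illustrative rules only."""
    mouth, brow, eye = flags
    if mouth and not brow:
        return "happiness"
    if brow and eye:
        return "surprise"
    if brow and not eye:
        return "anger"
    return "neutral"

neutral   = [(10, 30), (12, 10), (14, 15)]   # mouth, brow, eye positions
expressed = [(10, 36), (12, 10), (14, 15)]   # only the mouth point moved
energies = motion_energy(neutral, expressed)
flags = [e > 4.0 for e in energies]          # intensity-style threshold
label = classify(flags)
```

A real system would learn the tree from labeled data rather than hard-coding the branches, but the control flow is the same: threshold motion energy per feature, then descend the tree on the resulting flags.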


3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching allow the facial region to be detected efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed by use of a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
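The RBF step, deforming non-feature points from the displacements of nearby control points, can be sketched with Gaussian radial basis functions; the kernel width and the points below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=1.0):
    """Propagate control-point displacements to arbitrary points with
    Gaussian radial basis functions (one linear solve per axis)."""
    controls = np.asarray(controls, float)
    displacements = np.asarray(displacements, float)
    points = np.asarray(points, float)

    def phi(a, b):
        # Gaussian kernel matrix between two point sets
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    # Solve for weights so the deformation interpolates the control points
    weights = np.linalg.solve(phi(controls, controls), displacements)
    return phi(points, controls) @ weights

controls = [[0.0, 0.0], [1.0, 0.0]]
disp     = [[0.0, 0.2], [0.0, -0.2]]   # raise one corner, lower the other
moved = rbf_deform(controls, disp, [[0.0, 0.0], [0.5, 0.0]])
```

Evaluated at a control point the deformation reproduces that point's displacement exactly; points in between receive a smooth blend, which is what lets non-feature vertices follow the animated control points.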

Facial Expression Animation which Applies a Motion Data in the Vector based Caricature (벡터 기반 캐리커처에 모션 데이터를 적용한 얼굴 표정 애니메이션)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.5
    • /
    • pp.90-98
    • /
    • 2010
  • This paper describes a methodology that enables a user to generate a facial expression animation by applying facial motion data to a vector-based caricature. The method was implemented as an Illustrator plug-in and is equipped with its own user interface. For the experimental data, 28 small markers were attached to the major muscle areas of an actor's face, and a variety of expressions were captured with a Facial Tracker. To connect the caricature with the motion data, the caricature was drawn with Bezier curves whose control points correspond to the locations of the markers attached to the actor's face during motion capture. Because the facial motion data and the caricature differ in spatial scale, the data go through a motion calibration process, which the user can adjust at any time. To link the caricature with the markers, the user selects each face region by name from a menu and then clicks the corresponding region of the caricature. In summary, this paper uses an Illustrator-based user interface to make it possible to generate facial expression animations of a vector-based caricature driven by facial motion data.

Effect of Cross-legged Sitting Posture on Joint Range of Motion: Correlation with Musculoskeletal Symptoms and Facial Asymmetry

  • Shin, Yeong hui
    • The Journal of Korean Physical Therapy
    • /
    • v.34 no.5
    • /
    • pp.255-266
    • /
    • 2022
  • Purpose: This study examined the effects of a cross-legged sitting posture on joint motion, as well as the correlation between changes in joint range of motion, musculoskeletal symptoms, and facial asymmetry. Methods: The Acumar Digital Inclinometer (Lafayette Instrument Company, USA) was used to measure the range of motion (ROM). We measured the flexion and extension of the cervical, thoracic, and lumbar spine using a dual inclinometer, and measured the ROM of the shoulder and hip joints with a single inclinometer. A Likert-scale questionnaire was used to investigate musculoskeletal symptoms and facial asymmetry. Results: The data analysis was performed using the Jamovi version 1.6.23 statistical software. After confirming the normality of the ROM with descriptive statistics, it was compared with the normal ROM through a one-sample t-test. Correlation matrix analysis was performed to confirm the association between facial asymmetry and musculoskeletal symptoms. The one-sample t-test showed a significant increase in thoracic spine extension and right and left hip external rotation (p<0.001), while most other joints were restricted. In the frequency analysis, facial asymmetry was found in 81.70% of subjects. Conclusion: The independent variable, the cross-legged sitting posture, led to an increase in ROM in some joints. The study also suggests that facial asymmetry and musculoskeletal symptoms can occur. Therefore, to prevent both the increase and limitation of ROM and the occurrence of facial asymmetry and musculoskeletal symptoms, it is suggested that the habitual cross-legged sitting posture be avoided.

3-D Facial Motion Estimation using Extended Kalman Filter (확장 칼만 필터를 이용한 얼굴의 3차원 움직임량 추정)

  • Han, Seung-Chul;Park, Kang-Ryoung;Kim, Jae-Hee
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.883-886
    • /
    • 1998
  • In order to detect the user's gaze position on a monitor by computer vision, accurate estimates of the 3D positions and 3D motion of facial features are required. In this paper, we apply an EKF (Extended Kalman Filter) to estimate the 3D motion, assuming that the motion is "smooth" in the sense of being represented by a constant-velocity translational and rotational model. Rotational motion is defined about the origin of a face-centered coordinate system, while translational motion is defined about the origin of a camera-centered coordinate system. For the experiments, we use 3D facial motion data generated by computer simulation. The experimental results show that the simulation data and the estimation results of the EKF are similar.
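A constant-velocity tracking filter of this kind can be sketched as follows. For purely translational motion the model is linear, so the sketch reduces to the plain Kalman predict/update equations (the special case of the EKF with linear models); the noise covariances and the simulated trajectory are arbitrary illustration values, not the paper's set-up:

```python
import numpy as np

# Constant-velocity state [x, y, z, vx, vy, vz]
dt = 1.0
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe 3D position only
Q = 1e-4 * np.eye(6)                          # process noise covariance
R = 1e-2 * np.eye(3)                          # measurement noise covariance

x = np.zeros(6)
P = np.eye(6)
rng = np.random.default_rng(0)
true_v = np.array([0.1, -0.05, 0.02])         # simulated constant velocity
for t in range(1, 50):
    z = true_v * t + 0.01 * rng.standard_normal(3)  # noisy 3D position
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
```

With rotation included, F and H become nonlinear functions and the EKF replaces them with their Jacobians at each step; the predict/update skeleton stays the same.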


Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition technology. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The facial movement data thus obtained were converted into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns for each monosyllable were learned and recognized with speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
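Decoding the most likely state sequence of an HMM with the Viterbi algorithm can be sketched as follows; the two-state model over quantized lip-opening symbols is a toy stand-in, not the monosyllable models trained in the study:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence
    (log domain to avoid underflow)."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            best = max((V[-1][p][0] + math.log(trans_p[p][s]), p)
                       for p in states)
            row[s] = (best[0] + math.log(emit_p[s][o]),
                      V[-1][best[1]][1] + [s])
        V.append(row)
    return max(V[-1].values(), key=lambda t: t[0])[1]

# Toy two-phoneme HMM: states "a"/"i", observations are quantized
# lip-opening symbols; all probabilities here are illustrative.
states = ("a", "i")
start = {"a": 0.5, "i": 0.5}
trans = {"a": {"a": 0.8, "i": 0.2}, "i": {"a": 0.2, "i": 0.8}}
emit = {"a": {"wide": 0.9, "narrow": 0.1},
        "i": {"wide": 0.2, "narrow": 0.8}}
path = viterbi(["wide", "wide", "narrow"], states, start, trans, emit)
```

In a recognizer, one such HMM would be trained per monosyllable on the 11 movement parameters, and the model with the highest Viterbi score for an utterance gives the recognized syllable.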

Automatic Synchronization of Separately-Captured Facial Expression and Motion Data (표정과 동작 데이터의 자동 동기화 기술)

  • Jeong, Tae-Wan;Park, Sang-Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.1
    • /
    • pp.23-28
    • /
    • 2012
  • In this paper, we present a new method for automatically synchronizing captured facial expression data with its corresponding motion data. In a usual optical motion capture set-up, a detailed facial expression cannot be captured in the same session as the body motion because its resolution requirement is higher than that of motion capture. The two are therefore captured in separate sessions and must be synchronized in post-processing to be usable for generating a convincing character animation. Based on the patterns of the actor's neck movement extracted from the two data sets, we present a non-linear time warping method for automatic synchronization. We justify the method with actual examples that show its viability.
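Non-linear time warping between two signals is commonly done with dynamic time warping; the sketch below aligns a signal with a half-speed copy of itself to illustrate the idea, and is not the authors' neck-movement algorithm:

```python
import math

def dtw(a, b):
    """Dynamic time warping cost between two 1-D signals:
    a standard non-linear time alignment by dynamic programming."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: step in a, step in b, step in both
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# The second signal is the first played back at half speed;
# DTW absorbs the speed change, so the alignment cost is zero.
sig  = [0.0, 1.0, 2.0, 1.0, 0.0]
slow = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0, 0.0]
cost = dtw(sig, slow)
```

Backtracking through the same table yields the warping path, i.e. the frame-to-frame correspondence one would use to retime the expression data onto the motion data.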

Facial region Extraction using Skin-color reference map and Motion Information (칼라 참조 맵과 움직임 정보를 이용한 얼굴영역 추출)

  • Lee, Byung-Seok;Lee, Dong-Kyu;Lee, Doo-Soo
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.139-142
    • /
    • 2001
  • This paper presents a fast and accurate facial region extraction method using a skin-color reference map and motion information. First, we construct a robust skin-color reference map and use it to eliminate the background of the image. In addition, we use motion information for accurate and fast detection of the facial region in image sequences. We then apply region growing to the remaining areas with the aid of the proposed criteria. The simulation results show improvements in execution time and detection accuracy.
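A skin-color reference map can be approximated by thresholding in YCbCr chrominance space; the Cb/Cr ranges below are common textbook values, not the map constructed in the paper:

```python
def is_skin(r, g, b):
    """Skin test in YCbCr chrominance space; the Cb/Cr ranges are
    widely used defaults, assumed here for illustration."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_mask(image):
    """Binary mask over an RGB image given as nested lists of (r, g, b)."""
    return [[is_skin(*px) for px in row] for row in image]

row = [(224, 172, 138), (30, 60, 200)]   # a skin tone and a blue pixel
mask = skin_mask([row])
```

In the paper's pipeline this per-pixel test would be combined with frame differencing (motion) and region growing to isolate the facial region.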


Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from 2D image sequences for MPEG-4 SNHC face model encoding is described. The facial regions are detected in the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
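The factorization idea, splitting a stacked measurement matrix of 2D feature tracks into motion and shape with a rank-3 SVD, can be sketched under the simpler orthographic model (the paper uses the paraperspective variant of this scheme); the synthetic shape and rotations below are invented for illustration:

```python
import numpy as np

# Build a measurement matrix W (2F x P) from P feature points
# observed in F frames under orthographic projection.
rng = np.random.default_rng(1)
shape = rng.standard_normal((3, 8))          # 8 synthetic 3-D points
frames = []
for angle in np.linspace(0, 0.5, 5):         # 5 camera rotations about y
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0, s], [0, 1, 0]])     # first two rows of a rotation
    frames.append(R @ shape)                 # 2-D tracks for this frame
W = np.vstack(frames)                        # 10 x 8 measurement matrix
W -= W.mean(axis=1, keepdims=True)           # subtract the image centroid

# Noise-free orthographic W has rank 3; the rank-3 SVD factors it into
# a motion part (U * S) and a shape part (Vt), up to an affine ambiguity.
U, S, Vt = np.linalg.svd(W)
W3 = (U[:, :3] * S[:3]) @ Vt[:3]             # best rank-3 approximation
```

The remaining affine ambiguity is fixed in the full method by metric constraints on the motion rows; with noisy tracks the rank-3 truncation also acts as a denoiser.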