• Title/Summary/Keyword: Facial motion

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical problems in vision-based computer animation, and this paper addresses both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation robustly recovers the 3D head pose from input video: given an initial reference template of the face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and by updating the template dynamically the pose can be recovered despite lighting variations and self-occlusion. In the expression synthesis phase, the movements of the major facial feature points are tracked with optical flow and retargeted to the 3D face model, while radial basis functions (RBFs) deform the local area of the face model around those feature points. Expression synthesis therefore tracks the major feature points directly and estimates the regional feature points indirectly. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
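
A minimal sketch of the RBF deformation step named above, assuming SciPy and illustrative array shapes (this is not the authors' exact formulation): tracked feature-point displacements are interpolated over nearby mesh vertices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_region(vertices: np.ndarray,
                  feat_rest: np.ndarray,
                  feat_moved: np.ndarray) -> np.ndarray:
    """Propagate feature-point displacements to surrounding vertices.

    vertices:   (V, 3) local region of the face mesh around the features
    feat_rest:  (F, 3) major feature points in the neutral pose
    feat_moved: (F, 3) the same points after optical-flow tracking
    """
    displacement = feat_moved - feat_rest
    # Thin-plate-spline RBF fitted to the sparse feature displacements
    rbf = RBFInterpolator(feat_rest, displacement, kernel="thin_plate_spline")
    return vertices + rbf(vertices)  # smooth local deformation
```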

Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong; Kim, Ki-Hong; Lee, David-Junesok; Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication, v.14 no.1, pp.129-135, 2022
  • Digital models of real humans appear ever more frequently in VR/AR applications, where real-time markerless facial capture animation of personalized virtual humans is an important research topic. The traditional way to produce personalized facial animation requires several experienced animators, and in practice the complex process and demanding technology can be an obstacle for inexperienced users. This paper proposes a new pipeline for this work that costs less and takes less time than traditional production. Starting from a personalized face model obtained through 3D reconstruction, the model is first retopologized with R3ds Wrap, Avatary is then used to build the 52 blendshape files required by ARKit, and real-time markerless facial motion capture of the 3D human model is finally realized on the UE4 platform. The study makes rational use of each tool's strengths and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D human models; the process ideas can be helpful to other researchers working on similar problems.
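
As a rough illustration of the blendshape arithmetic behind an ARKit-style rig like the one above (a generic linear model, not the authors' UE4 pipeline; all names and shapes are assumed), the deformed face is the neutral mesh plus a weighted sum of 52 per-target offsets:

```python
import numpy as np

def apply_blendshapes(neutral: np.ndarray, deltas: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Linear blendshape evaluation: neutral + sum_i w_i * delta_i.

    neutral: (V, 3) rest-pose vertices
    deltas:  (52, V, 3) per-target vertex offsets
    weights: (52,) capture-driven coefficients in [0, 1]
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Example frame: one active shape, as a capture source might stream it.
V = 1000
neutral = np.zeros((V, 3))
deltas = np.random.randn(52, V, 3) * 0.01  # placeholder targets
weights = np.zeros(52)
weights[0] = 0.8
frame_vertices = apply_blendshapes(neutral, deltas, weights)
```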

A Study on Effective Facial Expression of 3D Character through Variation of Emotions (Model using Facial Anatomy) (감정변화에 따른 3D캐릭터의 표정연출에 관한 연구 (해부학적 구조 중심으로))

  • Kim, Ji-Ae
    • Journal of Korea Multimedia Society, v.9 no.7, pp.894-903, 2006
  • Rapid growth in hardware technology has driven the development and expansion of various forms of digital motion-picture content, including 3D. 3D digital techniques are applied across animation, virtual reality, film, advertising, games, and more. 3D characters in digital motion pictures play a central role in communicating emotions and information to users through sound, facial expression, and characteristic motion. Interest in 3D motion and facial expression is rising as 3D character design is used more often and in a wider range of contexts. This study examines facial expression as an effective means of conveying implicit emotion, investigates 3D characters' facial expressions and muscle movements based on human anatomy, and seeks an effective method of staging facial expression. Finally, it examines the differences between 2D and 3D characters in light of the author's preceding research.

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo; Lee, Jeeho
    • Journal of the Korea Computer Graphics Society, v.22 no.2, pp.11-19, 2016
  • This paper proposes a method to retarget facial motion capture data directly to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Direct mapping from motion capture data to the facial rig is highly convenient because artists are already familiar with facial rigs, and the mapping results are immediately ready for the artist's follow-up editing. However, mapping motion data onto a facial rig is not trivial: rigs vary widely in structure, so it is hard to devise a mapping method that generalizes across them. We therefore propose a data-driven approach for robust mapping from motion capture data to an arbitrary facial rig. The results show that our method is intuitive and increases productivity in facial animation production. We also show that it can successfully retarget expressions to non-human characters whose facial shapes differ greatly from a human's.
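
The abstract does not reveal the paper's mapping model; as a generic sketch of a data-driven capture-to-rig mapping, one could fit a ridge regression from flattened marker positions to rig control values on artist-made example pairs (all shapes and data here are placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Hypothetical training pairs: each captured frame is matched with the
# rig-control values an artist posed to reproduce it.
X = rng.standard_normal((200, 60))   # (frames, markers * 3) capture features
Y = rng.standard_normal((200, 25))   # (frames, controls) rig control values

model = Ridge(alpha=1.0).fit(X, Y)   # one linear map, all controls at once

new_frame = rng.standard_normal((1, 60))
controls = model.predict(new_frame)  # values ready to drive the rig
```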

Real-time Facial Modeling and Animation based on High Resolution Capture (고해상도 캡쳐 기반 실시간 얼굴 모델링과 표정 애니메이션)

  • Byun, Hae-Won
    • Journal of Korea Multimedia Society, v.11 no.8, pp.1138-1145, 2008
  • Performance-driven facial animation has recently become popular in various areas. In television and games, it is important to guarantee real-time animation for characters whose appearance differs from the performer's. In this paper, we present a new facial animation approach based on motion capture, addressing three issues: facial expression capture, expression mapping, and facial animation. Finally, we show experimental results for different types of face models.

Estimation and Watermarking of Motion Parameters in Model Based Image Coding

  • Park, Min-Chul
    • Proceedings of the IEEK Conference, 2002.07b, pp.1264-1267, 2002
  • An advanced human-computer interface must analyze and synthesize facial motions interactively, just as they occur, and must also protect them from unwanted or illegal use, given privacy concerns, the variety of applications, and the cost of obtaining motion parameters. To estimate facial motion, a method using skin color distribution, luminance, and the geometric information of the face is employed. Digital watermarks are embedded into the facial motion parameters, which are then scrambled so that they cannot be understood. Experimental results show the high accuracy and efficiency of the proposed estimation method and the usefulness of the proposed watermarking method.
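
The abstract does not specify the embedding or scrambling scheme; the sketch below realizes the same idea with quantization-index modulation (one watermark bit per parameter) followed by a keyed permutation, with the quantization step and key chosen arbitrarily:

```python
import numpy as np

STEP = 1e-3  # assumed quantization step; the bit rides on level parity

def embed_bits(params, bits):
    """Embed one bit per parameter via quantization-index modulation."""
    q = np.round(params / STEP).astype(np.int64)
    q += (q & 1) ^ bits           # force the level's parity to match the bit
    return q * STEP

def extract_bits(params):
    return (np.round(params / STEP).astype(np.int64) & 1).astype(np.uint8)

def scramble(params, key):
    """Keyed permutation so intercepted parameters are unintelligible."""
    perm = np.random.default_rng(key).permutation(params.size)
    return params[perm]

def unscramble(params, key):
    perm = np.random.default_rng(key).permutation(params.size)
    out = np.empty_like(params)
    out[perm] = params
    return out

motion = np.random.rand(64)                   # placeholder motion parameters
bits = np.random.randint(0, 2, 64)
protected = scramble(embed_bits(motion, bits), key=42)
assert (extract_bits(unscramble(protected, key=42)) == bits).all()
```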

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A, v.11A no.2, pp.189-194, 2004
  • This paper describes a method for laying out high-dimensional facial expression motion data in a 2D space, and a method for creating facial expression animation in real time as an animator navigates that space and selects the desired expressions. The expression space is composed of about 2,400 facial expression frames, and its construction comes down to determining the shortest distance between any two expressions. Treating the expression space as a manifold, the distance between two points is approximated as follows: each expression is described by a state vector derived from the matrix of distances between facial markers, and when two expressions are adjacent, their distance is taken as an approximation of the shortest distance between them. Chaining these adjacency distances yields the shortest distance between any two expression states, computed with the Floyd algorithm. To visualize the high-dimensional expression space, it is projected onto two dimensions using Sammon's mapping. Facial animation is then created in real time as the animator navigates the 2D space through a user interface.
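
A compact sketch of this pipeline, k-nearest-neighbor adjacency distances chained into all-pairs shortest paths with the Floyd(-Warshall) algorithm, then a 2D Sammon projection, under assumed shapes and a connected neighbor graph (the Sammon stress is minimized numerically for brevity, not speed):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.csgraph import floyd_warshall

def manifold_distances(states: np.ndarray, k: int = 8) -> np.ndarray:
    """Chain k-NN adjacency distances into all-pairs shortest paths.

    states: (n_frames, d) expression state vectors from marker distances.
    Assumes the resulting k-NN graph is connected.
    """
    d = np.linalg.norm(states[:, None] - states[None, :], axis=-1)
    graph = np.full_like(d, np.inf)           # inf = no edge (dense csgraph)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbors, skip self
    rows = np.repeat(np.arange(len(d)), k)
    graph[rows, idx.ravel()] = d[rows, idx.ravel()]
    return floyd_warshall(graph, directed=False)

def sammon(D: np.ndarray, max_iter: int = 500) -> np.ndarray:
    """Project distance matrix D to 2D by minimizing Sammon's stress."""
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)
    c = D[iu].sum()

    def stress(y):
        Y = y.reshape(n, 2)
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        return ((D[iu] - d[iu]) ** 2 / D[iu]).sum() / c

    y0 = np.random.default_rng(0).standard_normal(n * 2)
    res = minimize(stress, y0, method="L-BFGS-B",
                   options={"maxiter": max_iter})
    return res.x.reshape(n, 2)                # (n_frames, 2) navigable layout
```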

Motion Pattern Detection for Dynamic Facial Expression Understanding

  • Mizoguchi, Hiroshi; Hiramatsu, Seiyo; Hiraoka, Kazuyuki; Tanaka, Masaru; Shigehara, Takaomi; Mishima, Taketoshi
    • Proceedings of the IEEK Conference, 2002.07c, pp.1760-1763, 2002
  • In this paper the authors present their attempt to realize a motion pattern detector that finds a specified sequence of images within an input motion image. The detector is intended for understanding time-varying facial expressions. Needless to say, machine understanding of facial expressions is crucial and enriches the quality of human-machine interaction. Among the various facial expressions there are some, such as blinking, that cannot be recognized from a static image: a still image of blinking cannot be distinguished from sleeping. The authors discuss the implementation of their motion pattern detector and describe experiments using it. The experimental results confirm the feasibility of the idea behind the implemented detector.
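
The abstract leaves the detector's design open; as a naive baseline for the task it describes, a short template sequence (e.g. a blink) can be matched against an incoming frame stream with a sliding-window distance (threshold and array shapes are assumptions):

```python
import numpy as np

def match_sequence(stream: np.ndarray, template: np.ndarray,
                   threshold: float) -> list:
    """Return start indices where the template sequence matches the stream.

    stream:   (T, H, W) grayscale frames
    template: (L, H, W) the motion pattern to detect, L <= T
    """
    T, L = len(stream), len(template)
    hits = []
    for t in range(T - L + 1):
        window = stream[t:t + L]
        err = np.mean((window - template) ** 2)  # whole-window pixel MSE
        if err < threshold:
            hits.append(t)
    return hits
```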

Robust Extraction of Heartbeat Signals from Mobile Facial Videos (모바일 얼굴 비디오로부터 심박 신호의 강건한 추출)

  • Lomaliza, Jean-Pierre; Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing, v.20 no.1, pp.51-56, 2019
  • This paper proposes an improved heartbeat signal extraction method for ballistocardiography (BCG)-based heart-rate measurement in a mobile environment. First, a handshake-free head motion signal is extracted from a mobile facial video by tracking facial features and background features simultaneously. A novel signal periodicity computation method is then proposed to accurately separate the heartbeat signal from the head motion signal. The proposed method extracts heartbeat signals robustly from mobile facial videos and enables more accurate heart-rate measurement than the existing method, reducing measurement error by 3-4 bpm.
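
The paper's periodicity computation is novel and not detailed in the abstract; a common baseline for BCG-style heart-rate estimation from a head-motion trace is to band-pass to the plausible cardiac band and take the dominant spectral peak, as in this sketch (filter order and band limits are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(head_motion: np.ndarray, fps: float) -> float:
    """Estimate heart rate (bpm) from a 1-D head-motion signal.

    Band-pass to 0.75-4 Hz (45-240 bpm), then pick the strongest FFT peak.
    """
    b, a = butter(4, [0.75, 4.0], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, head_motion - head_motion.mean())
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```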

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation that expresses facial states in terms of facial motion data. By distributing the expressions over an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2D plane and selecting a series of expressions from it, animators can create animations and control the expressions of 3D avatars in real time. Distributing the roughly 2,400 expression frames over an intuitive space requires a representation of each expression's state; for this, a distance matrix holding the distances between pairs of facial feature points is used. The LLE algorithm then projects these data onto a 2D plane for visualization. Animators control expressions and create animations through the system's user interface, and the paper evaluates the experimental results.
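
A minimal sketch of the projection step described above, assuming marker arrays of shape (frames, points, 3) and scikit-learn's LLE (neighbor count is an assumption): per-frame state vectors are built from pairwise feature-point distances and embedded in 2D.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def expression_space(markers: np.ndarray, n_neighbors: int = 10) -> np.ndarray:
    """Project expression frames to a 2D navigable space with LLE.

    markers: (n_frames, m, 3) tracked facial feature points per frame.
    """
    n, m, _ = markers.shape
    iu, ju = np.triu_indices(m, k=1)
    # State vector per frame = all pairwise feature-point distances
    states = np.linalg.norm(markers[:, iu] - markers[:, ju], axis=-1)
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2)
    return lle.fit_transform(states)          # (n_frames, 2)
```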