• Title/Summary/Keyword: Facial capture

Search results: 64

A Study of Facial Expression of Digital Character with Muscle Simulation System

  • He, Yangyang;Choi, Chul-young
    • International journal of advanced smart convergence / v.8 no.2 / pp.162-169 / 2019
  • Facial rigging technology has developed considerably since the beginning of the 21st century. Various facial rigging methods are still being explored, and techniques that capture facial geometry in real time have recently appeared. Modern CG can now produce images that are hard to distinguish from actual photographs, but this level of quality still requires extensive equipment and cost. The purpose of this study is to perform facial rigging using muscle simulation instead of such equipment. Muscle simulation was originally designed primarily for the bodies of creatures; in this study, however, we apply it to facial rigging to achieve a more realistic, creature-like effect. To do this, we used Ziva Dynamics' Ziva VFX muscle simulation software. We also developed a method to overcome the disadvantages of muscle simulation: it cannot run in real time, the simulation itself takes time, and connecting the complex muscles is labor-intensive. Our study solves these problems using blendshapes, and we show how to apply our method to a face rig.
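The blendshape technique this study falls back on can be sketched as a linear combination of a neutral mesh with weighted offsets toward target shapes (e.g. poses baked out of a muscle simulation). The meshes, shape names, and weights below are hypothetical illustrations, not data from the paper.

```python
def apply_blendshapes(neutral, targets, weights):
    """Combine a neutral mesh with weighted blendshape deltas.

    neutral: list of (x, y, z) vertex positions
    targets: dict of shape name -> vertex list with the same topology
    weights: dict of shape name -> weight in [0, 1]
    """
    result = []
    for i, (nx, ny, nz) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, shape in targets.items():
            w = weights.get(name, 0.0)
            tx, ty, tz = shape[i]
            # each target contributes its offset from the neutral pose
            dx += w * (tx - nx)
            dy += w * (ty - ny)
            dz += w * (tz - nz)
        result.append((nx + dx, ny + dy, nz + dz))
    return result

# hypothetical two-vertex face patch with one "smile" shape
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]}
print(apply_blendshapes(neutral, targets, {"smile": 0.5}))
# half-weight smile lifts both vertices by 0.25
```

Because each frame is just a weighted sum, evaluation is cheap enough for real time, which is the property the study exploits to sidestep the cost of live muscle simulation.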

Comparative Analysis of Linear and Nonlinear Projection Techniques for the Best Visualization of Facial Expression Data (얼굴 표정 데이터의 최적의 가시화를 위한 선형 및 비선형 투영 기법의 비교 분석)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.9 no.9 / pp.97-104 / 2009
  • This paper compares and analyzes methodologies for finding the optimal technique for projecting high-dimensional facial motion capture data onto a plane. We apply high-dimensional facial expression frame data to one linear projection technique, PCA, and to five nonlinear techniques: Isomap, MDS, CCA, Sammon's Mapping, and LLE. We first compute the distances between the high-dimensional facial expression frames, then distribute the frames in a two-dimensional plane so as to preserve those distance relationships under each projection technique. By comparing the facial expression data distributed in two-dimensional space with the original data, we identify the projection technique that best preserves the distance relationships between frames. Finally, the paper compares and analyzes the linear and nonlinear techniques for projecting high-dimensional facial expression data into a low-dimensional space, and determines the optimal one.
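The comparison the abstract describes hinges on measuring how well each projection preserves pairwise distances. One common such measure (a Sammon-style stress; the exact criterion used in the paper is not stated in this abstract) can be sketched as follows, with hypothetical frame data:

```python
import math

def pairwise_distances(points):
    """Full pairwise Euclidean distance matrix for a list of points."""
    n = len(points)
    return [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]

def projection_stress(high_d, low_d):
    """Sammon-style stress: how much pairwise distances changed after
    projection. 0.0 means all distance relationships were preserved."""
    dh = pairwise_distances(high_d)
    dl = pairwise_distances(low_d)
    num = den = 0.0
    n = len(high_d)
    for i in range(n):
        for j in range(i + 1, n):
            if dh[i][j] > 0:
                num += (dh[i][j] - dl[i][j]) ** 2 / dh[i][j]
                den += dh[i][j]
    return num / den

# hypothetical frames: this projection preserves distances exactly
frames = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
flat = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
print(projection_stress(frames, flat))  # 0.0
```

Running several candidate projections through such a score and picking the minimum is one way to operationalize "finding the optimal technique."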

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time, by having the user select a sequence of facial expressions in an expression space. The space of expressions is created from about 2400 frames of facial expressions. To represent the state of each expression, we use the distance matrix that holds the distances between pairs of feature points on the face; the set of distance matrices forms the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this, we visualize the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To see how effective the system is, we had users control the facial expressions of a 3D avatar with it, and this paper evaluates the results.

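The representation above (a per-frame vector of inter-feature-point distances, projected with PCA) can be sketched in a few lines. The marker positions are hypothetical, and PCA is computed here with a simple power iteration on the covariance rather than a library eigensolver:

```python
import math

def expression_vector(markers):
    """Flatten the upper triangle of the distance matrix between feature
    points into one state vector per frame."""
    n = len(markers)
    return [math.dist(markers[i], markers[j])
            for i in range(n) for j in range(i + 1, n)]

def first_principal_component(rows, iters=200):
    """Top PCA direction of mean-centered rows, via power iteration."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(d)) for r in x]              # X v
        w = [sum(x[i][j] * xv[i] for i in range(n)) for j in range(d)]    # X^T X v
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return v, mean

# hypothetical 3-marker frames (say, two mouth corners and the chin)
frames = [[(0, 0), (2, 0), (1, -1)],
          [(0, 0), (2, 0), (1, -2)],
          [(0, 0), (2, 0), (1, -3)]]
vectors = [expression_vector(f) for f in frames]
pc, mean = first_principal_component(vectors)
coords = [sum((vec[j] - mean[j]) * pc[j] for j in range(len(pc))) for vec in vectors]
print(coords)  # 1-D layout of the three expression frames
```

The paper projects to 2D (a second, deflated component would be needed for that); the sketch keeps only the first component for brevity.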

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.117-122 / 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. The FCP plays an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract the FCP with an expensive motion capture device or with markers, which inconvenience the subject or impose a psychological burden. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve conventional algorithms for detecting the facial components that form the basis of FCP extraction.

Performance Evaluation Method for Detection Algorithms of Face Region and Facial Components (얼굴영역 및 얼굴요소 검출 알고리즘의 성능평가 방법)

  • Park, Kwang-Hyun;Kim, Dae-Jin;Hong, Ji-Man;Jeong, Young-Sook;Choi, Byoung-Wook
    • The Journal of Korea Robotics Society / v.4 no.3 / pp.192-200 / 2009
  • In this paper, we report progress in developing a performance evaluation method for detection algorithms for the face region and facial components. The paper aims to provide a standardized evaluation method for general face recognition approaches, as a potential component of future intelligent robot systems. All the necessary steps, from image capture to the retrieval of face-related information, are shown with examples.

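The abstract does not state the paper's exact scoring criterion, but a standard way to evaluate face-region detectors is intersection-over-union (IoU) between detected and ground-truth bounding boxes, with a threshold deciding what counts as a hit. A minimal sketch with hypothetical boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def detection_rate(detections, ground_truth, threshold=0.5):
    """Fraction of ground-truth face regions matched by some detection."""
    hits = sum(1 for gt in ground_truth
               if any(iou(det, gt) >= threshold for det in detections))
    return hits / len(ground_truth)

# hypothetical boxes: one good detection, one missed face
gt = [(0, 0, 100, 100), (200, 200, 260, 260)]
dets = [(10, 10, 110, 110)]
print(detection_rate(dets, gt))  # 0.5
```

The same per-region matching extends to facial components (eyes, nose, mouth) by scoring each component class separately.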

Interactive Facial Expression Animation of Motion Data using Sammon's Mapping (Sammon 매핑을 사용한 모션 데이터의 대화식 표정 애니메이션)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A / v.11A no.2 / pp.189-194 / 2004
  • This paper describes a method for distributing high-dimensional facial expression motion data in a two-dimensional space, and a method for creating facial expression animation in real time as an animator navigates this space and selects desired expressions. The expression space was composed from about 2400 facial expression frames, and constructing it requires determining the shortest distance between any two expressions. The expression space is a manifold space that approximates these distances as follows. The state of each expression is defined by an expression state vector derived from the distance matrix between markers; when two expressions are adjacent, their distance is taken as an approximation of the shortest distance between them. Once the distances between adjacent expressions are determined, they are chained to yield the shortest distance between any two expression states, computed with the Floyd algorithm. To visualize the high-dimensional expression space, it is projected into two dimensions using Sammon's Mapping. Facial animation is then created in real time as the animator navigates the two-dimensional space through a user interface.
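The distance-chaining step the abstract names is the classic Floyd(-Warshall) all-pairs shortest-path algorithm. A minimal sketch on a hypothetical chain of expression states (only adjacent expressions have direct distances):

```python
INF = float("inf")

def floyd_shortest_distances(adj):
    """All-pairs shortest distances from an adjacency-distance matrix.

    adj[i][j] is the direct distance between expressions i and j,
    or INF when they are not adjacent.
    """
    n = len(adj)
    dist = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # relax the path i -> k -> j
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# hypothetical chain of 4 expression states: only neighbours are connected
adj = [
    [0.0, 1.0, INF, INF],
    [1.0, 0.0, 2.0, INF],
    [INF, 2.0, 0.0, 1.5],
    [INF, INF, 1.5, 0.0],
]
dist = floyd_shortest_distances(adj)
print(dist[0][3])  # 4.5: the geodesic 0 -> 1 -> 2 -> 3
```

These geodesic distances, rather than raw high-dimensional distances, are what Sammon's Mapping then tries to preserve in the 2-D layout.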

3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok;Han, Gyu-Hoon;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and demand for technology that generates high-quality 3D models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, typified by digital humans. 3D volumetric capture is in the spotlight as a technology that can create a 3D mannequin faster and more precisely than existing 3D modeling methods. In this study, we analyze high-precision 3D face production technology through practical cases, covering the difficulties of content production and the technologies applied to volumetric 3D and 4D model creation. Based on an actual model implemented through 3D volumetric capture, we examine techniques for producing the face of a 3D virtual human and produced a new metahuman using a graphics pipeline for efficient human face generation.

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial expression states based on facial motion data. By distributing the facial expressions in an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the expression space through a user interface. Approximately 2400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2-dimensional plane and selecting a series of expressions, the user can create animations and control the expressions of 3-dimensional avatars in real time. To distribute the approximately 2400 expression frames in this intuitive space, the state of each expression must be represented; for this we use the distance matrix holding the distances between pairs of feature points on the face. The data is then projected with the LLE algorithm for visualization in the 2-dimensional plane. Animators control facial expressions and create animations through the system's user interface, and this paper evaluates the experimental results.
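The real-time control loop described above (pick a point in the projected 2-D space, recover the matching captured expression) reduces to a nearest-neighbour lookup over the projected frames. The LLE projection itself is omitted here; the coordinates and frame labels below are hypothetical stand-ins for its output:

```python
import math

def nearest_expression(cursor, projected, frames):
    """Map a 2-D cursor position in expression space back to the closest
    captured expression frame (simple nearest-neighbour lookup)."""
    best = min(range(len(projected)),
               key=lambda i: math.dist(cursor, projected[i]))
    return frames[best]

# hypothetical: three frames already projected to 2-D (e.g. by LLE)
projected = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
frames = ["neutral", "smile", "surprise"]
print(nearest_expression((0.9, 0.1), projected, frames))  # smile
```

With ~2400 frames this linear scan is already fast enough for interactive use; a spatial index would only matter at much larger scales.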

Detection of Face-element for Facial Analysis (표정분석을 위한 얼굴 구성 요소 검출)

  • 이철희;문성룡
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.131-136 / 2004
  • With the development of media, various kinds of information are recorded in media, and facial expression is one of the most interesting, because expression reflects a person's inner state. Inner intent can be conveyed by gesture, but expression carries more information. An expression can also be produced deliberately, concealing the inner intent. Moreover, expressions have characteristics unique to each person, which makes them classifiable. In this paper, we detect facial components in USB camera video in order to analyze expressions, since the feature points that change with a person's expression lie on the facial components. For component detection, we capture one frame of the video, locate the face, segment the face region, and detect the feature points of the facial components.

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems / v.20 no.4 / pp.558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face Image Quality Assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Typically, quality assessment techniques use deep learning methods to categorize images, but such models behave as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust by providing visual evidence of the regions of an image on which a deep learning model bases its prediction. Here, we developed a technique for reliably assessing facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. This approach has been implemented in the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual explanations for the model, where both the saliency-map and perturbation-based explainability techniques verify the predictions.
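The abstract does not specify how the Grad-CAM and LIME outputs are merged; one simple way to combine two explanation heatmaps is to normalize each to a common scale and blend them, so regions where both methods agree stand out. The toy arrays below stand in for real model outputs, and the blending weight is an assumption:

```python
def normalize(heatmap):
    """Scale a heatmap to [0, 1] so different explanation methods
    become comparable."""
    lo = min(min(row) for row in heatmap)
    hi = max(max(row) for row in heatmap)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in heatmap]

def combine_explanations(grad_cam_map, lime_map, alpha=0.5):
    """Blend two normalized saliency maps; alpha weights the Grad-CAM side."""
    a = normalize(grad_cam_map)
    b = normalize(lime_map)
    return [[alpha * av + (1 - alpha) * bv for av, bv in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# toy 2x2 maps standing in for real Grad-CAM / LIME outputs
cam = [[0.0, 2.0], [4.0, 8.0]]
lime = [[1.0, 1.0], [0.0, 1.0]]
combined = combine_explanations(cam, lime)
print(combined)  # strongest evidence where both methods agree
```

The combined map peaks only where both the gradient-based and the perturbation-based evidence coincide, which is the cross-verification property the paper argues for.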