• Title/Abstract/Keyword: realistic facial expressions

Search results: 18 (processing time: 0.028 s)

Facial Actions 과 애니메이션 원리에 기반한 로봇의 얼굴 제스처 생성 (Generation of Robot Facial Gestures based on Facial Actions and Animation Principles)

  • 박정우;김우현;이원형;이희승;정명진
    • 제어로봇시스템학회논문지 / Vol. 20, No. 5 / pp.495-502 / 2014
  • This paper proposes a method of generating diverse robot facial expressions and facial gestures in order to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined from the dynamics of human facial expressions and the principles of animation, so that even identical emotions can be expressed in varied ways. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of this approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results showed that the proposed method helps robots generate more realistic facial expressions.

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on this transcript, a speech animation sequence is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by scattered data interpolation. During synthesis, an importance-based scheme combines lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
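The blending steps this abstract describes — interpolating key visemes for lip-sync, blending key expressions by weights, and combining the two layers by an importance scheme — can be sketched roughly as follows. This is a minimal illustration, not the paper's API: all function names and the per-vertex-array representation are assumptions.

```python
import numpy as np

# Each "key viseme" / "key expression" is assumed to be an (N, 3) array
# of mesh vertex positions for the same N-vertex face model.

def interpolate_visemes(v_a, v_b, t):
    """Linearly interpolate between two key visemes (t in [0, 1])."""
    return (1.0 - t) * v_a + t * v_b

def blend_expressions(keys, weights):
    """Blend key expressions with normalized scattered-data weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(keys), axes=1)

def combine(lip_frame, expr_frame, importance):
    """Importance-based per-vertex mix of the lip-sync and expression layers."""
    a = np.clip(importance, 0.0, 1.0)[:, None]  # importance per vertex
    return a * lip_frame + (1.0 - a) * expr_frame
```

In this sketch, high importance around the mouth would favor the lip-sync layer while the rest of the face follows the blended expression.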

카데바 자료를 이용한 얼굴근육의 해부학적 기능 학습을 위한 삼차원 교육 콘텐츠 제작과 관련된 융합 연구 (Convergence Study on the Three-dimensional Educational Model of the Functional Anatomy of Facial Muscles Based on Cadaveric Data)

  • 이재기
    • 한국융합학회논문지 / Vol. 12, No. 9 / pp.57-63 / 2021
  • The purpose of this study was to dissect the facial muscles of an adult Korean cadaver, scan them three-dimensionally, build three-dimensional objects of the realistic facial muscle shapes, and, by reproducing facial expressions with them, produce educational material in which the combined movements of the cadaveric facial muscles can be observed in three dimensions. Using photographs of the cadaver dissection, the facial muscles were modeled in three dimensions, and 3D videos were produced showing how the facial muscles change for four expressions (sadness, smiling, surprise, and anger). Through this, the combined actions of the 3D-reconstructed cadaveric facial muscles and the resulting expression changes could be observed. Although these results do not provide quantitative data on the individual functions of the facial muscles, they allow realistic, three-dimensional observation of the cadaveric facial muscle shapes and of the expression changes produced by their combined actions. These materials are expected to be useful as anatomical teaching resources for the facial muscles.

비선형 피부색 변화 모델을 이용한 실감적인 표정 합성 (Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change)

  • 이정호;박현;문영식
    • 전자공학회논문지CI / Vol. 43, No. 3 / pp.67-75 / 2006
  • Facial expressions are represented by geometric information, such as the facial components, and by detailed information, such as illumination and wrinkles. Because realistic expressions are difficult to generate by geometric deformation alone, detailed information such as texture must be transformed along with the geometry. Existing methods for transforming detailed facial texture information, such as the Expression Ratio Image (ERI), cannot accurately represent changes in skin color due to illumination. To address this problem, this paper proposes an expression synthesis method based on a nonlinear skin color model that can apply realistic expression texture information under different illumination conditions. The proposed method consists of three stages: expression deformation through automatic facial feature extraction using a dynamic appearance model and warping; expression generation using the nonlinear skin color change model; and composition of the original face image with the generated expression using blending ratios computed by the Euclidean Distance Transform. Experimental results show that the proposed method produces natural and realistic expressions under various illumination conditions.
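The Expression Ratio Image (ERI) baseline that this paper improves on, together with distance-weighted blending of the synthesized expression into the original image, can be sketched as follows. This is a rough illustration assuming normalized [0, 1] image values; the paper's actual nonlinear skin color model is not reproduced here, and computing the distance-transform weights is left out.

```python
import numpy as np

def expression_ratio_image(neutral, expressive, eps=1e-6):
    """Classic ERI: per-pixel ratio capturing wrinkle/shading changes
    between a neutral and an expressive image of the same face."""
    return expressive / (neutral + eps)

def apply_ratio(target, ratio):
    """Transfer expression details by modulating the target's shading."""
    return np.clip(target * ratio, 0.0, 1.0)

def distance_blend(synthesized, original, weights):
    """Blend the synthesized expression into the original image.
    'weights' stands in for normalized blending ratios in [0, 1], e.g.
    derived from a Euclidean distance transform (computation omitted)."""
    return weights * synthesized + (1.0 - weights) * original
```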

Facial Expression Explorer for Realistic Character Animation

  • Ko, Hee-Dong;Park, Moon-Ho
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1998년도 Proceedings of International Workshop on Advanced Image Technology / pp.16.1-164 / 1998
  • This paper describes the Facial Expression Explorer, which searches for the components of a facial expression and maps the expression onto otherwise expressionless figures such as a robot, frog, teapot, or rabbit. In general, creating a facial expression manually is a time-consuming and laborious job, especially when the expression must personify a well-known public figure or an actor. To extract a blending ratio from facial images automatically, the Facial Expression Explorer uses a Networked Genetic Algorithm (NGA), which speeds up the convergence of a standard genetic algorithm. Animators often use such blending ratios to create facial expressions through shape-blending methods. With the Facial Expression Explorer, a realistic facial expression can be modeled more efficiently.
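The shape-blending formulation whose weights the NGA searches for can be sketched as a weighted sum of basis expression meshes, with a fitness score measuring how well a candidate weight vector reproduces a target expression. The function names and the squared-error fitness are assumptions; the paper's NGA itself is not reproduced.

```python
import numpy as np

def blend_shapes(basis, weights):
    """Shape blending: weighted sum of basis expression meshes,
    each an (N, 3) array of vertex positions."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w, np.stack(basis), axes=1)

def fitness(weights, basis, target):
    """Error a GA individual would be scored on: lower is better."""
    return float(np.sum((blend_shapes(basis, weights) - target) ** 2))
```

A GA would evolve a population of weight vectors, keeping those with the lowest `fitness` against the observed facial image or mesh.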

표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션 (3D Emotional Avatar Creation and Animation using Facial Expression Recognition)

  • 조태훈;정중필;최수미
    • 한국멀티미디어학회논문지 / Vol. 17, No. 9 / pp.1076-1083 / 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.
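The runtime combination of an emotional template with the skin texture that this abstract mentions can be sketched as alpha compositing scaled by an emotion intensity. This is a hedged illustration only; the paper's actual artistic templates and combination rule may differ, and all names here are assumptions.

```python
import numpy as np

def composite_template(skin, template_rgb, template_alpha, intensity):
    """Overlay an emotional template on the skin texture at runtime.
    skin, template_rgb: (H, W, 3) arrays in [0, 1];
    template_alpha: (H, W) coverage mask; intensity: emotion strength."""
    a = np.clip(template_alpha * intensity, 0.0, 1.0)[..., None]
    return a * template_rgb + (1.0 - a) * skin
```

Because only a per-pixel mix is involved, such a combination is cheap enough to re-evaluate every frame as the recognized emotion changes.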

Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 1999년도 KOBA 방송기술 워크샵 KOBA Broadcasting Technology Workshop / pp.133-140 / 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expression. In the proposed method, real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several view angles. Then we project a three-dimensional face model onto the plane of each facial image and match the projected model with each image; the results are combined to generate a deformed three-dimensional model. We use feature-based image metamorphosis to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face, which is texture-mapped onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique to generate realistic facial expression animations; this method facilitates control of the animation. Lastly, we show the animation results for the six representative facial expressions.
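A minimal sketch of muscle-based deformation in the spirit this abstract describes: vertices near a muscle's insertion point are pulled toward its attachment point, with a radial falloff limiting the zone of influence. This is a simplified, Waters-style linear muscle written as an assumption; the abstract does not specify the paper's actual muscle model.

```python
import numpy as np

def muscle_pull(vertices, attachment, insertion, contraction, radius):
    """Displace mesh vertices under one linear muscle.
    vertices: (N, 3); attachment/insertion: (3,) muscle end points;
    contraction in [0, 1]; radius: zone of influence around insertion."""
    dist = np.linalg.norm(vertices - insertion, axis=1)
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)[:, None]
    direction = attachment - vertices          # pull toward the fixed end
    return vertices + contraction * falloff * direction
```

Summing the displacements of several such muscles (e.g. zygomaticus major for a smile) would yield the combined expression.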

Comparative Analysis of Facial Animation Production by Digital Actors - Keyframe Animation and Mobile Capture Animation

  • Choi, Chul Young
    • International journal of advanced smart convergence / Vol. 13, No. 3 / pp.176-182 / 2024
  • In the recent game market, classic games released in the past are being re-released with high-quality visuals, and users are generally satisfied. Realistic digital actors, which were not feasible in the past, are now becoming a reality. Epic Games launched the MetaHuman Creator website in September 2021, allowing anyone to easily create realistic human characters, and the number of animations created using MetaHumans has been increasing since. As characters become more realistic, the movement and expression animations expected by the audience must also be convincingly realized. Until recently, traditional methods were the primary approach to producing realistic character animations. For facial animation, Epic Games introduced an improved method in the Live Link app in 2023, which provides the highest quality among mobile-based techniques. In this context, this paper compares the results of animation produced using keyframe animation and mobile-based facial capture. After creating an emotional expression animation with four sentences, the results were compared in Unreal Engine. While the facial capture method is more natural and easier to use, the precise and exaggerated expressions possible with the keyframe method cannot be overlooked, suggesting that a hybrid approach using both methods will likely continue for the foreseeable future.

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 2, No. 2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper deals with both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image; by updating the template dynamically, the head pose can be recovered regardless of light variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points are tracked using optical flow and retargeted to the 3D face model. At the same time, an RBF (Radial Basis Function) is exploited to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
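The RBF deformation step — interpolating the tracked displacements of the major feature points and propagating them to nearby mesh points — can be sketched with a Gaussian kernel as follows. The kernel choice, its width, and the function names are assumptions; the paper specifies only that an RBF deforms the local area around the feature points.

```python
import numpy as np

def rbf_weights(feature_pts, displacements, sigma=1.0):
    """Solve for RBF weights so the deformation field exactly reproduces
    the tracked displacements at the major feature points."""
    d = np.linalg.norm(feature_pts[:, None] - feature_pts[None, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)           # Gaussian kernel matrix
    return np.linalg.solve(phi, displacements)

def rbf_deform(points, feature_pts, weights, sigma=1.0):
    """Displace arbitrary mesh points by the learned RBF field."""
    d = np.linalg.norm(points[:, None] - feature_pts[None, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    return points + phi @ weights
```

Because the Gaussian kernel matrix is positive definite, the solve always succeeds, and points far from every feature point are left nearly untouched.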

Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y Chan;Edmund M-K. Lai
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.153.6-153 / 2001
  • Three-dimensional (3D) digitization of the human body is still a relatively new technology, with present uses such as radiotherapy, identification systems, and commercial applications, as well as potential future applications. In this paper, we analyzed and experimented to determine the easiest and most efficient method that would give us the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the models obtained on a Silicon Graphics workstation. Research was done into the present and potential uses of 3D digitized models of the human head, and we develop ideas for ...
