• Title/Summary/Keyword: facial expression generation

Search results: 32

Automatic Anticipation Generation for 3D Facial Animation (3차원 얼굴 표정 애니메이션을 위한 기대효과의 자동 생성)

  • Choi Jung-Ju;Kim Dong-Sun;Lee In-Kwon
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.39-48
    • /
    • 2005
  • According to traditional 2D animation techniques, anticipation makes an animation much more convincing and expressive. We present an automatic method for inserting anticipation effects into an existing facial animation. Our approach assumes that an anticipatory facial expression can be found within an existing facial animation if it is long enough. Vertices of the face model are classified into a set of components using principal component analysis applied directly to given key-framed and/or motion-captured facial animation data. The vertices in a single component have similar directions of motion in the animation. For each component, the animation is examined to find an anticipation effect for the given facial expression. Among these, the anticipation effect that best preserves the topology of the face model is selected. The best anticipation effect is automatically blended with the original facial animation while preserving the continuity and the entire duration of the animation. We show experimental results for motion-captured and key-framed facial animations. This paper addresses part of a broader subject, the application of the principles of traditional 2D animation techniques to 3D animation, and shows how to incorporate anticipation into 3D facial animation. Animators can produce 3D facial animation with anticipation simply by selecting the facial expression in the animation.
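
As a rough illustration of the vertex-grouping step described in this abstract, the sketch below groups face-model vertices by motion similarity using PCA over their displacement trajectories. The animation tensor shape, component count, and the argmax labeling rule are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumed shapes and parameters): classify vertices of a
# face model into components whose members move in similar directions.
import numpy as np

def group_vertices_by_motion(anim, n_components=8):
    """anim: (frames, vertices, 3) vertex positions over the animation."""
    frames, n_verts, _ = anim.shape
    # Per-vertex displacement trajectories, flattened to one row each.
    disp = (anim - anim[0]).transpose(1, 0, 2).reshape(n_verts, -1)
    disp = disp - disp.mean(axis=0, keepdims=True)
    # Principal directions of motion across the whole animation.
    _, _, vt = np.linalg.svd(disp, full_matrices=False)
    coords = disp @ vt[:n_components].T  # project vertices onto components
    # Vertices with similar projections move similarly; assign each vertex
    # to its dominant principal component as a crude grouping label.
    return np.argmax(np.abs(coords), axis=1)

# Usage: labels = group_vertices_by_motion(np.random.rand(120, 500, 3))
```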

Estimation and Generation of Facial Expression Using Deep Learning for Art Robot (딥러닝을 활용한 예술로봇의 관객 감정 파악과 공감적 표정 생성)

  • Roh, Jinah
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2019.05a
    • /
    • pp.183-184
    • /
    • 2019
  • This paper proposes a dialogue system with video-sequence facial expression generation for natural emotional communication between a robot and a human. The proposed system responds to the audience's emotional state, estimated from real-time video data, and uses deep learning to generate the robot's facial expressions in real time to fit the context of the conversation. Training on about 30,000 video samples of audience facial expressions yielded 88% training accuracy, confirming that expression generation is feasible. This work is significant in that it applies deep learning to robot facial expression generation, and it can serve as a foundation for extending deep learning to the dialogue system itself in future work.
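
The abstract does not specify the network, so the following is only a plausible sketch of the pipeline it describes: a frame-level emotion classifier whose prediction selects the robot's expression. The architecture, label set, and input size are all assumptions.

```python
# Minimal sketch (assumed architecture and labels): classify the
# audience's emotion from a face crop, then pick a matching expression.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

class EmotionNet(nn.Module):
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 3, 64, 64) face crops
        return self.head(self.features(x).flatten(1))

def pick_robot_expression(model, frame):
    """Map the audience's estimated emotion to a robot expression."""
    with torch.no_grad():
        emotion = EMOTIONS[model(frame.unsqueeze(0)).argmax().item()]
    return emotion  # downstream code would render the matching face
```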

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.71-77
    • /
    • 2022
  • In this paper, a database is collected for extending a speech synthesis model into one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and the recorded expressions are consistent with the corresponding emotions. Since building a high-quality database is important for the performance of future research, the database is assessed for emotional category, intensity, and genuineness. To determine recognition accuracy by data modality, the database is divided into audio-video data, audio-only data, and video-only data.

Emotion Based Gesture Animation Generation Mobile System (감정 기반 모바일 손제스쳐 애니메이션 제작 시스템)

  • Lee, Jung-Suk;Byun, Hae-Won
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.129-134
    • /
    • 2009
  • Recently, the percentage of people who use SMS services has been increasing. However, it is difficult to express one's complicated emotions with the text and emoticons of existing SMS services. This paper focuses on that point and uses character animation to express emotion and nuance accurately and entertainingly. It also proposes an emotion-based gesture animation generation system that uses a character's facial expressions and gestures to deliver emotion more vividly and clearly than speech alone. Michel [1] analyzed interview videos of a person whose gesturing style was to be animated and proposed a gesture generation graph for stylized gesture animation. Extending Michel's [1] research, we analyze and abstract the emotional gestures of Disney animation characters and model these gestures in 3D. To express a person's emotion, we propose an emotion gesture generation graph that incorporates an emotion flow graph, representing the flow of emotion as transition probabilities. We investigated user reactions to evaluate the validity of the proposed system.
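
The abstract gives no concrete graph, so the following sketch only illustrates the idea of an emotion flow graph with probabilistic transitions driving gesture selection. The states, transition probabilities, and clip names are invented for illustration.

```python
# Minimal sketch (assumed states and probabilities): sample the next
# emotion from a probabilistic emotion flow graph, then pick a gesture
# clip associated with that emotion.
import random

EMOTION_FLOW = {  # P(next emotion | current emotion), assumed values
    "neutral": {"happy": 0.5, "sad": 0.2, "angry": 0.3},
    "happy":   {"neutral": 0.6, "happy": 0.4},
    "sad":     {"neutral": 0.7, "angry": 0.3},
    "angry":   {"neutral": 0.8, "sad": 0.2},
}
GESTURES = {  # gesture clips abstracted from reference animation
    "happy": ["clap", "jump"], "sad": ["slump"],
    "angry": ["fist_shake"], "neutral": ["idle"],
}

def next_gesture(current_emotion):
    nxt = random.choices(list(EMOTION_FLOW[current_emotion]),
                         weights=EMOTION_FLOW[current_emotion].values())[0]
    return nxt, random.choice(GESTURES[nxt])

emotion = "neutral"
for _ in range(5):
    emotion, clip = next_gesture(emotion)
    print(emotion, clip)
```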

Generation of Facial Expression through Analyzing Eigen-Optical-Flows (고유광류 분석에 의한 얼굴 표정 생성)

  • 김경수;최형일
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.03a
    • /
    • pp.165-168
    • /
    • 1998
  • Research on face recognition involves analyzing face images, and face image analysis can be regarded as a preprocessing step required by every field of research that uses face images. However, analyzing face images is expensive. In this study, face images are transformed without this analysis step. To capture the facial expression appearing in an input face image, we use eigenvectors, which are widely known to best represent the variation in input data. Although optical flow is the most intuitive tool for generating a new image by deforming an existing one, computing optical flow images is time-consuming. We therefore use the eigenvectors of ordinary images and the eigenvectors of optical flow images, transferring weight vectors between the two eigenvector spaces, thereby eliminating the optical flow computation that would otherwise have to be performed for every processed image.
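
One plausible reading of the weight-transfer idea is sketched below: fit eigenspaces for face images and for their precomputed optical-flow fields from paired training data, then map an unseen image's weight vector into the flow eigenspace so no optical flow needs to be computed at run time. The dimensions and the linear least-squares mapping are illustrative simplifications, not the paper's exact method.

```python
# Minimal sketch (assumed dimensions and mapping): eigenspace weight
# transfer from image space to optical-flow space.
import numpy as np

def fit_eigenspace(X, k):
    """X: (n_samples, dim). Returns the mean and top-k eigenvectors."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

# Paired training data: face images and their precomputed flow fields.
imgs = np.random.rand(50, 64 * 64)        # flattened grayscale faces
flows = np.random.rand(50, 64 * 64 * 2)   # flattened (u, v) flow fields

mu_i, basis_i = fit_eigenspace(imgs, k=10)
mu_f, basis_f = fit_eigenspace(flows, k=10)

# Weight vectors of the training set in each eigenspace.
W_i = (imgs - mu_i) @ basis_i.T
W_f = (flows - mu_f) @ basis_f.T
# Linear map from image weights to flow weights (least squares).
M, *_ = np.linalg.lstsq(W_i, W_f, rcond=None)

def estimate_flow(img):
    w = (img - mu_i) @ basis_i.T       # project into the image eigenspace
    return (w @ M) @ basis_f + mu_f    # reconstruct flow, no flow solver
```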

A Study on the Mode of Address and Meaning Creation of Underlight in Broadcasting Lighting (방송조명에서 언더라이트의 표현 양식과 의미 창출에 관한 연구)

  • Kim, Young-Jin;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.21 no.5
    • /
    • pp.749-759
    • /
    • 2016
  • As broadcast content has come to be produced for HDTV and high-definition monitors have become widespread, the facial rendering of subjects has become a very significant task in broadcast lighting. Modeling figures on HDTV requires smoother and cleaner images because light renders the image with greater precision. Lighting methods that illuminate performers in the digital generation therefore require a new approach, and character modeling methods based on the expressive features of underlight are receiving attention for the aesthetic rendering of figures in HD images. Accordingly, the influence of underlight source intensity, distance, and size on character modeling was experimentally measured and comparatively analyzed. The results show that underlight achieves its optimum effect only when its intensity is 17%~25.5% of total brightness, its distance is greater than 40 cm, and its size is at least 20 cm. These data will be highly useful for obtaining smoother and cleaner images of performers in future high-quality productions.

A Study on Xiao Quan's Documentary Portrait: Focused on the Expression Method of <Our Generation> (중국 사진가 샤오취안의 다큐멘터리적 초상사진에 관한 연구 : <우리들 세대>에 나타난 표현방식을 중심으로)

  • Liu, Yuan;Yang, Jong Hoon;Lee, Sang Eun
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.12
    • /
    • pp.108-117
    • /
    • 2018
  • Xiao Quan is a leading documentary portrait photographer in China. He photographed celebrities of the literary and artistic world and, in doing so, represented their era. We explored how Xiao Quan captured the times his subjects lived in by analyzing the portrait photographs included in <Our Generation>. Our research shows that Xiao Quan used images of his subjects' living environments, clothing, and facial expressions, together with the composition of the portraits. These various methods of creation serve as means of symbolically expressing their times. This research not only reveals how Chinese documentary portraits are created but also provides an opportunity to increase the value of documentary portraits as historical documents.

Development of An Interactive System Prototype Using Imitation Learning to Induce Positive Emotion (긍정감정을 유도하기 위한 모방학습을 이용한 상호작용 시스템 프로토타입 개발)

  • Oh, Chanhae;Kang, Changgu
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.4
    • /
    • pp.239-246
    • /
    • 2021
  • In the fields of computer graphics and HCI, there are many studies on systems that create characters and interact naturally. Such studies have focused on the character's response to the user's behavior, and designing character behavior that elicits positive emotions from the user remains a difficult problem. In this paper, we develop a prototype of an interaction system that elicits positive emotions from users through the movement of a virtual character, using artificial intelligence technology. The proposed system is divided into face recognition and motion generation for a virtual character. A depth camera is used for face recognition, and the recognized data is passed to motion generation. We use imitation learning as the learning model. In motion generation, random actions are performed at first in response to the user's facial expression data, and the actions that elicit positive emotions from the user are learned through continuous imitation learning.
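
The learning loop this abstract outlines could look roughly like the sketch below: the character tries motions and reinforces those that coincide with a positive facial expression from the user. The motion repertoire, the expression-reading stub, and the update rule are all assumptions; the paper's actual imitation-learning model is not specified here.

```python
# Minimal sketch (assumed repertoire and update rule): reinforce virtual
# character motions that draw positive user expressions.
import random

MOTIONS = ["wave", "nod", "dance", "bow"]  # assumed motion repertoire
preference = {m: 1.0 for m in MOTIONS}     # learned sampling weights

def read_user_expression():
    """Stub for depth-camera face recognition: returns a valence score
    in [-1, 1]. A real system would classify the captured face here."""
    return random.uniform(-1, 1)

def step(learning_rate=0.2):
    motion = random.choices(MOTIONS, weights=preference.values())[0]
    valence = read_user_expression()       # user's reaction to the motion
    # Reinforce motions that drew a positive expression, keep weights > 0.
    preference[motion] = max(0.05,
                             preference[motion] + learning_rate * valence)
    return motion, valence

for _ in range(100):
    step()
print(max(preference, key=preference.get))  # motion the user liked most
```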

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon;Lee, Duk-Yeon;Lee, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.537-544
    • /
    • 2020
  • This paper proposes the creation of an android robot head based on the facial action coding system (FACS) and the generation of emotional expressions using FACS. The term android robot refers to robots with a human-like appearance; these robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, so it was necessary to anatomically analyze the motions of the human face using FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles, and is equipped with artificial skin in order to produce facial expressions. Spherical joints and springs were used to develop micro-eyeball structures, and the arrangement of the 30 servo motors was based on an efficient design of the wire routing. The developed android head has 30 DOFs and can express 13 basic emotions. The recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
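
To make the AU-to-actuator idea concrete, the sketch below treats an expression as a set of FACS action units with intensities, each driving one or more servo-actuated artificial muscles. The AU numbers follow FACS, but the AU-to-servo wiring table and the emotion recipes are hypothetical, not the paper's design.

```python
# Minimal sketch (hypothetical wiring and recipes): convert an emotion's
# AU recipe into servo angle commands.
AU_TO_SERVOS = {        # AU id -> servo channels (assumed wiring)
    1: [0, 1],          # inner brow raiser
    4: [2, 3],          # brow lowerer
    6: [8, 9],          # cheek raiser
    12: [14, 15],       # lip corner puller
    15: [16, 17],       # lip corner depressor
}
EXPRESSIONS = {         # emotion -> {AU: intensity in [0, 1]}
    "happiness": {6: 0.8, 12: 1.0},
    "sadness":   {1: 0.7, 4: 0.4, 15: 0.9},
}

def servo_targets(emotion, max_angle=90.0):
    """Map an emotion's AU intensities to per-servo angle commands."""
    targets = {}
    for au, intensity in EXPRESSIONS[emotion].items():
        for servo in AU_TO_SERVOS[au]:
            targets[servo] = intensity * max_angle
    return targets

print(servo_targets("happiness"))  # {8: 72.0, 9: 72.0, 14: 90.0, 15: 90.0}
```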

Design of an Intelligent Smart Mirror Application Helping Face Makeup (얼굴 메이크업을 도와주는 지능형 스마트 거울 앱의 설계)

  • Oh, Sun Jin;Lee, Yoon Suk
    • The Journal of the Convergence on Culture Technology
    • /
    • v.8 no.5
    • /
    • pp.497-502
    • /
    • 2022
  • Among the younger generation, information delivery has recently shown a distinct preference for video over text as a means of distributing and sharing information, and distributing information through YouTube or personal Internet broadcasting has become the norm; young people usually obtain their information through these channels. Many of them are also bolder and more proactive in styling themselves distinctively, freely expressing their personal characteristics through adventurous face makeup, hair styling, and fashion coordination regardless of gender. Face makeup in particular has become a major interest for men as well as women, and is a major means of expressing personality. In this study, to meet these demands, we design and implement an intelligent smart mirror application that efficiently retrieves and recommends related videos from YouTube or personal broadcasts produced by famous professional makeup artists, so that users can achieve face makeup suited to their face shape, hair color and style, skin tone, and fashion color and style, and thereby create makeup that expresses their characteristics. We also introduce AI techniques to provide optimal recommendations based on learning the user's search patterns and facial features, and provide detailed makeup face images so that users can learn makeup skills step by step.
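
The abstract does not publish its ranking method, so the sketch below shows only one plausible shape for it: content-based ranking of makeup videos by overlap between a user profile (facial features plus learned search history) and per-video tags. The feature names, weights, and videos are invented for illustration.

```python
# Minimal sketch (invented features and weights): rank makeup videos by
# similarity to a user profile built from facial features and searches.
from collections import Counter

def score(video_tags, profile):
    """Sum of profile weights for the tags a video carries."""
    return sum(profile.get(tag, 0.0) for tag in video_tags)

profile = Counter({"round_face": 1.0, "warm_skin_tone": 0.8})
profile.update({"natural_look": 0.5})  # weight learned from search history

videos = {
    "v1": ["round_face", "natural_look"],
    "v2": ["square_face", "smokey_look"],
}
ranked = sorted(videos, key=lambda v: score(videos[v], profile),
                reverse=True)
print(ranked)  # ['v1', 'v2']
```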