• Title/Summary/Keyword: Face Morphing (얼굴 모핑)


A Study on the Implementation of Realtime Phonetic Recognition and LIP-synchronization (실시간 음성인식 및 립싱크 구현에 관한 연구)

  • Lee, H.H.; Choi, D.I.; Cho, W.Y.
    • Proceedings of the KIEE Conference / 2000.11d / pp.812-814 / 2000
  • This paper concerns a method of providing lip-synchronization animation based on real-time speech recognition: incoming speech is recognized, and the mouth shape of an animated character is changed to match it, so that the speech information is conveyed visually. To express lip movements closer to actual human pronunciation, together with a lively character face, in real time, the system takes input from a microphone, recognizes the speech in real time using a neural network, and morphs a 2D animation according to the recognition result. (A minimal sketch of this kind of recognition-driven mouth morphing follows this entry.)

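The sketch below is an illustrative outline, not the authors' implementation: a recognized phoneme label selects a target mouth-shape keyframe, and intermediate frames are produced by a simple cross-dissolve. The viseme dictionary, image sizes, and the stubbed-out recognizer are assumptions made only for illustration.

```python
# Minimal sketch: cross-dissolve "morph" between pre-drawn mouth-shape frames,
# driven by a recognized phoneme label. The neural-network recognizer that the
# paper uses on microphone input is not shown; phoneme labels arrive from it.
import numpy as np

# Hypothetical viseme keyframes: one grayscale image per mouth shape (assumption).
VISEMES = {
    "a": np.zeros((64, 64), dtype=np.float32),      # placeholder images
    "i": np.ones((64, 64), dtype=np.float32),
    "closed": np.full((64, 64), 0.5, dtype=np.float32),
}

def morph_frames(src: np.ndarray, dst: np.ndarray, steps: int = 5):
    """Return intermediate frames blending src into dst (simple cross-dissolve)."""
    return [(1.0 - t) * src + t * dst for t in np.linspace(0.0, 1.0, steps)]

def lip_sync_step(current: np.ndarray, phoneme: str):
    """Morph the current mouth image toward the viseme of the recognized phoneme."""
    target = VISEMES.get(phoneme, VISEMES["closed"])
    return morph_frames(current, target)

# Usage: feed each recognized phoneme as it arrives from the (not shown) recognizer.
frames = lip_sync_step(VISEMES["closed"], "a")
print(len(frames), frames[-1].mean())
```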

A Real-time Interactive Shadow Avatar with Facial Emotions (감정 표현이 가능한 실시간 반응형 그림자 아바타)

  • Lim, Yang-Mi; Lee, Jae-Won; Hong, Euy-Seok
    • Journal of Korea Multimedia Society / v.10 no.4 / pp.506-515 / 2007
  • In this paper, we propose a Real-time Interactive Shadow Avatar (RISA) which can express facial emotions that change in response to the user's gestures. The avatar's shape is a virtual Shadow constructed from a real-time sampled picture of the user's shape. Several predefined facial animations are overlaid on the face area of the virtual Shadow, according to the type of hand gesture. We use the background subtraction method to separate the virtual Shadow, and a simplified region-based tracking method is adopted for tracking hand positions and detecting hand gestures. In order to express smooth changes of emotion, we use a refined morphing method which uses many more frames than traditional dynamic emoticons. RISA can be directly applied to the area of interface media arts, and we expect that the detection scheme of RISA will, in the near future, be utilized as an alternative media interface for DMB and camera phones, which need simple input devices. (A small sketch of the silhouette-extraction and multi-frame blending steps follows this entry.)

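As a rough illustration of the two building blocks named in the abstract, the sketch below separates a foreground silhouette with background subtraction and blends between two predefined expression images over many frames. OpenCV's MOG2 subtractor and the synthetic test frames are stand-ins chosen for this sketch; they are not the paper's actual components.

```python
# Minimal sketch: (1) background subtraction to obtain a silhouette ("virtual
# Shadow"), and (2) a multi-frame blend between predefined facial-expression
# images for smooth emotion change. MOG2 is an assumed stand-in subtractor.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16)

def shadow_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Foreground mask of the user, i.e. the silhouette used as the shadow avatar."""
    return subtractor.apply(frame_bgr)

def emotion_morph(face_a: np.ndarray, face_b: np.ndarray, n_frames: int = 15):
    """Blend between two predefined facial animations over many frames."""
    return [cv2.addWeighted(face_a, 1.0 - float(t), face_b, float(t), 0.0)
            for t in np.linspace(0.0, 1.0, n_frames)]

# Synthetic usage so the sketch runs without a camera.
background = np.zeros((120, 160, 3), dtype=np.uint8)
person = background.copy()
cv2.rectangle(person, (60, 30), (100, 110), (200, 200, 200), -1)  # fake user

shadow_mask(background)        # let the model learn the empty scene
mask = shadow_mask(person)     # silhouette of the "user"

neutral = np.zeros((32, 32, 3), dtype=np.uint8)
happy = np.full((32, 32, 3), 255, dtype=np.uint8)
frames = emotion_morph(neutral, happy)
print(mask.shape, len(frames))
```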

A New Face Morphing Method using Texture Feature-based Control Point Selection Algorithm and Parallel Deep Convolutional Neural Network (텍스처 특징 기반 제어점 선택 알고리즘과 병렬 심층 컨볼루션 신경망을 이용한 새로운 얼굴 모핑 방법)

  • Park, Jin Hyeok; Khan, Rafiul Hasan; Lim, Seon-Ja; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.176-188 / 2022
  • In this paper, we propose a compact method for anthropomorphism that uses Deep Convolutional Neural Networks (DCNN) to detect the similarities between a human face and an animal face, and we apply texture feature-based morphing between them. We also propose a basic texture feature-based morphing system for morphing between human faces only. The entire anthropomorphism process starts with the creation of an animal face classifier using a parallel DCNN that determines the animal face most similar to a given human face. The significance of our network is that it contains four sets of convolutional functions that run in parallel, allowing it to extract more features than a linear DCNN. Once similarity is established, our texture feature-based automatic morphing system recognizes the facial features of the human face and selects the control points automatically, rather than relying on the traditional, manually assisted morphing process. The simulation results show that our suggested DCNN surpasses its competitors with a 92.0% accuracy rate. It also ensures that the most similar animal classes are found, and the texture-based morphing technology completes the morphing process automatically, ensuring a smooth transition from one image to another. (A minimal sketch of a four-branch parallel classifier of this kind follows this entry.)
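The abstract's key architectural idea, four convolutional branches running in parallel on the same input with their features concatenated before classification, can be sketched as follows. Kernel sizes, channel widths, and the number of animal classes are assumptions; the paper's exact layer configuration and its texture feature-based control point selection are not reproduced here.

```python
# Minimal sketch of a "parallel DCNN": four convolutional branches process the
# same input and their features are fused before classification. All sizes here
# are illustrative assumptions, not the paper's reported architecture.
import torch
import torch.nn as nn

class ParallelDCNN(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        # Four parallel convolutional branches with different kernel sizes.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, k, padding=k // 2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1))
            for k in (1, 3, 5, 7)
        ])
        self.classifier = nn.Linear(16 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x).flatten(1) for branch in self.branches]  # run the branches side by side
        return self.classifier(torch.cat(feats, dim=1))             # fuse and classify

# Usage: score a batch of face images against the assumed animal-face classes.
model = ParallelDCNN(num_classes=5)
logits = model(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 5])
```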