• Title/Summary/Keyword: 립싱크 (lip sync)


A Study on RTP-based Lip Synchronization Control for Very Low Delay in Video Communication (초저지연 비디오 통신을 위한 RTP 기반 립싱크 제어 기술에 관한 연구)

  • Kim, Byoung-Yong;Lee, Dong-Jin;Kwon, Jae-Cheol;Sim, Dong-Gyu
    • Journal of Korea Multimedia Society / v.10 no.8 / pp.1039-1051 / 2007
  • In this paper, a new lip synchronization control method is proposed to achieve very low delay in video communication. Lip synchronization control is as vital to video communication as delay reduction. Conventionally, lip synchronization is controlled using the playtime and capture time calculated from the RTP timestamp: the timestamp is created by the stream sender, sent to the receiver along with the stream, and extracted from each received packet by the receiver to compute the playtime and capture time. In this paper, we propose a method that searches for the frame most closely corresponding to the audio signal, which is assumed to be played at uniform speed. The encoding buffer of the stream sender is removed to reduce buffering delay, and the decoder buffer of the receiver, which is used to correct corrupted packets, is reduced to hold only 3 frames. These mechanisms enable an ultra-low delay of less than 100 ms, which is essential for video communication. Simulations show that the proposed method achieves delay below 100 ms while keeping audio and video lip-synchronized.

  • PDF
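The timestamp-based alignment the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the standard RTP media clock rates (90 kHz for video, 8 kHz for narrowband audio) and a shared zero reference, and simply finds the audio frame whose capture time is nearest to a given video frame's.

```python
# Minimal sketch (not the paper's implementation): align a video frame to the
# nearest audio frame via RTP timestamps. Clock rates follow common RTP usage;
# the base-timestamp-zero assumption is a simplification for illustration.

VIDEO_CLOCK_HZ = 90_000   # typical RTP video clock
AUDIO_CLOCK_HZ = 8_000    # typical narrowband audio clock

def rtp_to_seconds(rtp_ts, base_ts, clock_hz):
    """Convert an RTP timestamp to seconds relative to a base timestamp."""
    return (rtp_ts - base_ts) / clock_hz

def nearest_audio_frame(video_ts, video_base, audio_frames, audio_base):
    """Return the (timestamp, payload) audio frame closest in time to the video frame."""
    v_time = rtp_to_seconds(video_ts, video_base, VIDEO_CLOCK_HZ)
    return min(
        audio_frames,
        key=lambda f: abs(rtp_to_seconds(f[0], audio_base, AUDIO_CLOCK_HZ) - v_time),
    )

# Example: audio frames every 20 ms (160 ticks @ 8 kHz); video frame at 95 ms.
audio = [(i * 160, f"audio#{i}") for i in range(10)]
frame = nearest_audio_frame(8550, 0, audio, 0)   # 8550 ticks = 95 ms @ 90 kHz
```

With uniform-speed audio playback, this nearest-neighbor search is what lets the receiver choose which video frame to present without a deep decoder buffer.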

Natural 3D Lip-Synch Animation Based on Korean Phonemic Data (한국어 음소를 이용한 자연스러운 3D 립싱크 애니메이션)

  • Jung, Il-Hong;Kim, Eun-Ji
    • Journal of Digital Contents Society / v.9 no.2 / pp.331-339 / 2008
  • This paper presents the development of a highly efficient and accurate system for producing animation key data for 3D lip-synch animation. The system automatically extracts Korean phonemes from sound and text data and then computes animation key data from the segmented phonemes. This key data is used both by the 3D lip-synch animation system developed herein and by commercial 3D facial animation systems. Conventional 3D lip-synch animation systems segment sound data into phonemes based on the English phonemic system and produce lip-synch animation key data from the segmented phonemes. One drawback of this method is that it produces unnatural animation for Korean content; another is that it requires supplementary manual work. In this paper, we propose a 3D lip-synch animation system that automatically segments sound and text data into phonemes based on the Korean phonemic system and produces natural lip-synch animation from the segmented phonemes.

  • PDF

Human-like Fuzzy Lip Synchronization of 3D Facial Model Based on Speech Speed (발화속도를 고려한 3차원 얼굴 모형의 퍼지 모델 기반 립싱크 구현)

  • Park Jong-Ryul;Choi Cheol-Wan;Park Min-Yong
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.416-419 / 2006
  • This paper proposes a new lip-sync method that takes speech speed into account. From a database built through experiments, the relationship between speech speed and mouth shape and size was established using a fuzzy algorithm. Because conventional lip-sync methods do not consider speech speed, they show a fixed mouth shape and size regardless of how fast the speaker talks. The proposed method applies the relationship between speech speed and lip shape, enabling lip-sync closer to actual human articulation. Moreover, by using fuzzy theory, it can model ambiguous changes in mouth size and shape that cannot be expressed precisely in numerical terms. To demonstrate this, the proposed lip-sync algorithm is compared with a conventional method, and a 3D graphics platform is built to apply it in a real application.

  • PDF
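The fuzzy speed-to-mouth-shape mapping the abstract outlines can be illustrated with a toy rule base. This is a sketch of the general technique only; the rate breakpoints, triangular membership functions, and output scales below are made-up values, not the paper's experimentally derived rules.

```python
# Illustrative fuzzy mapping from speech rate to a mouth-opening scale.
# All numeric breakpoints and scales here are invented for the example.

def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mouth_scale(syllables_per_sec):
    """Defuzzify (weighted average) over three rate classes: slow/normal/fast."""
    slow   = tri(syllables_per_sec, 0.0, 2.0, 4.0)
    normal = tri(syllables_per_sec, 2.0, 4.0, 6.0)
    fast   = tri(syllables_per_sec, 4.0, 6.0, 8.0)
    # Faster speech -> smaller mouth opening, per the abstract's premise.
    weighted = slow * 1.0 + normal * 0.7 + fast * 0.4
    total = slow + normal + fast
    return weighted / total if total else 0.7
```

A conventional (speed-blind) system would return a constant scale; here the opening shrinks smoothly as the speech rate rises, which is the behavior the paper attributes to fuzzy modeling.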

A Study on Korean Lip-Sync for Animation Characters - Based on Lip-Sync Technique in English-Speaking Animations (애니메이션 캐릭터의 한국어 립싱크 연구 : 영어권 애니메이션의 립싱크 기법을 기반으로)

  • Kim, Tak-Hoon
    • Cartoon and Animation Studies / s.13 / pp.97-114 / 2008
  • This study examines mouth shapes suited to the shapes of Korean consonants and vowels for Korean animation by analyzing the lip-sync process of English-speaking animation based on pre-recording in the United States. The research was conducted to help character animators understand the concept of Korean lip-sync, which is done after recording, and to introduce the minimum basic mouth shapes required for Korean expression that can be applied to various characters. The introduction notes the necessity of Korean lip-sync in local animation and presents research methods for Korean lip-sync data based on English lip-sync data, taking an American production as an example. The main body demonstrates the characteristics and roles of the 8 basic mouth shapes required for English pronunciation, omits mouth shapes required for English but not for Korean, and conversely adds mouth shapes required for Korean but not for English. Based on these results, the study constructs a diagram of mouth shapes for Korean expression with various examples and investigates how mouth shapes vary when used for consonants, vowels, and batchim (final consonants). In addition, the case study proposes a method for transferring lines to the exposure sheet and a method for arranging mouth shapes according to lip-sync in practical animation production. However, lines from a Korean film had to be used as examples, since there is no precedent in Korea for animation production with systematic Korean lip-sync data.

  • PDF

Development of Automatic Lip-sync MAYA Plug-in for 3D Characters (3D 캐릭터에서의 자동 립싱크 MAYA 플러그인 개발)

  • Lee, Sang-Woo;Shin, Sung-Wook;Chung, Sung-Taek
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.3 / pp.127-134 / 2018
  • In this paper, we developed an automatic lip-sync Maya plug-in that extracts Korean phonemes from voice data and Korean text information and produces high-quality 3D lip-sync animation from the segmented phonemes. In the developed system, phoneme separation was classified into the 8 vowels and 13 consonants used in Korean, referring to the 49 phonemes provided by the Microsoft Speech API (SAPI) engine. Although vowels and consonants are pronounced with a variety of mouth shapes, the same viseme can be applied to several of them. Based on this, we developed the automatic lip-sync Maya plug-in in Python so that lip-sync animation can be generated automatically in one pass.
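The many-phonemes-to-one-viseme idea in this abstract can be sketched as a simple lookup. The grouping below is illustrative only (the labial group ㅁ/ㅂ/ㅍ sharing one closed-lips shape is a standard example), not the paper's actual SAPI-derived table.

```python
# Sketch of viseme grouping: several phonemes that share a mouth shape map to
# one viseme. The specific groups here are illustrative, not the paper's table.

VISEME_GROUPS = {
    "lips_closed": ["ㅁ", "ㅂ", "ㅍ"],   # labials: lips pressed together
    "open_a":      ["ㅏ"],
    "spread_i":    ["ㅣ"],
    "round_o":     ["ㅗ"],
    "round_u":     ["ㅜ"],
}

# Invert to a phoneme -> viseme lookup for the animation keyframing step.
PHONEME_TO_VISEME = {p: v for v, ps in VISEME_GROUPS.items() for p in ps}

def visemes_for(phonemes):
    """Map a phoneme sequence to viseme keys, skipping phonemes not in the table."""
    return [PHONEME_TO_VISEME[p] for p in phonemes if p in PHONEME_TO_VISEME]

print(visemes_for(["ㅁ", "ㅏ"]))  # ['lips_closed', 'open_a']
```

Collapsing phonemes into shared visemes is what keeps the keyframe count small enough for a plug-in to place mouth shapes automatically in a single pass.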

DTV Lip-Sync Test Using Embedded Audio-Video Time Indexed Signals (숨겨진 오디오 비디오 시간 인덱스 신호를 사용한 DTV 립싱크 테스트)

  • 한찬호;송규익
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.155-162 / 2004
  • This paper concentrates on a lip synchronization (lip sync) test for DTV with respect to audio and video signals using a finite digital bitstream. We propose a new lip sync test method that does not affect the current program, using transient effect area test signals (TATS) and audio-video time-indexed lip sync test signals (TILS). Experimental results show that the time difference between the audio and video signals can be easily measured at any time from a captured oscilloscope waveform.

A Study on the Implementation of Realtime Phonetic Recognition and LIP-synchronization (실시간 음성인식 및 립싱크 구현에 관한 연구)

  • Lee, H.H.;Choi, D.I.;Cho, W.Y.
    • Proceedings of the KIEE Conference / 2000.11d / pp.812-814 / 2000
  • This paper concerns a method of providing lip-sync animation through real-time speech recognition: given speech information is recognized, and the animation's mouth shape is changed to match it, so that the speech is conveyed visually. To render lip-sync closer to actual human articulation and a lively character face in real time, the system takes input from a microphone, recognizes the speech in real time using a neural network, and morphs a 2D animation according to the recognition result.

  • PDF

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea / v.20 no.3 / pp.63-69 / 2001
  • A corpus-based lip sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, marks were attached to the speaker's face and their positions extracted with image-processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information, with the syllable as the basic unit. Based on this audio-visual corpus, lip information represented by the mark positions is synthesized: the best syllable units are selected from the corpus, and the visual information of the selected units is concatenated. Obtaining the best units involves two steps: selecting the N-best candidates for each syllable, and then selecting the smoothest unit sequence with a Viterbi decoding algorithm. For these steps, two distance measures between syllable units are proposed: a phonetic-environment distance and a prosody distance. Computer simulations showed that the proposed algorithm performs well; in particular, pitch and intensity information proved to be as important as duration information for lip sync.

  • PDF
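The unit-selection step the abstract outlines can be sketched as a standard Viterbi search: pick one candidate unit per syllable so that the total of target cost (how well a unit matches the syllable's phonetic/prosodic context) plus concatenation cost (how smoothly adjacent units join) is minimized. The cost functions and unit values below are toy stand-ins, not the paper's distance measures.

```python
# Minimal Viterbi unit selection over per-syllable candidate lists.
# target_cost and concat_cost are placeholders for the paper's
# phonetic-environment and prosody distances.

def viterbi_select(candidates, target_cost, concat_cost):
    """candidates: one list of candidate units per syllable.
    Returns the minimum-total-cost unit sequence."""
    # best: (accumulated cost, path) for each candidate of the current syllable
    best = [(target_cost(u), [u]) for u in candidates[0]]
    for units in candidates[1:]:
        new_best = []
        for u in units:
            # Cheapest way to reach u from any previous candidate.
            cost, path = min((c + concat_cost(p[-1], u), p) for c, p in best)
            new_best.append((cost + target_cost(u), path + [u]))
        best = new_best
    return min(best)[1]

# Toy usage: units are scalars; target prefers values near 5, concat prefers
# smooth transitions between neighbors.
path = viterbi_select(
    [[4, 9], [5, 1], [6, 2]],
    lambda u: abs(u - 5),
    lambda a, b: abs(a - b),
)
```

In the paper's setting each candidate would carry the syllable's recorded lip-mark trajectory, so the selected path yields the visual information to concatenate.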

A Study on Lip Sync and Facial Expression Development in Low Polygon Character Animation (로우폴리곤 캐릭터 애니메이션에서 립싱크 및 표정 개발 연구)

  • Ji-Won Seo;Hyun-Soo Lee;Min-Ha Kim;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.409-414 / 2023
  • We describe how to implement the character expressions and animations that play an important role in conveying emotion and personality in low-polygon character animation. With the development of the video industry, character expressions and mouth-shape lip-sync in animation can achieve natural movement at a level close to real life; for non-experts, however, such expert-level technology is difficult to use. We therefore aimed to provide a guide for low-budget, low-polygon character animators and non-experts to create mouth-shape lip-sync more naturally using accessible, highly usable features. A total of 8 mouth shapes were developed for lip-sync animation: 'ㅏ', 'ㅔ', 'ㅣ', 'ㅗ', 'ㅜ', 'ㅡ', 'ㅓ', and a mouth shape expressing a labial consonant. For facial expression animation, a total of nine animations were produced by adding the highly useful expressions of interest, boredom, and pain to the six basic human emotions classified by Paul Ekman: surprise, fear, disgust, anger, happiness, and sadness. This study is meaningful in that it makes natural animation easy to produce using features built into the modeling program, without complex technologies or external programs.

Embodiment of Low-cost Real Time Lip-Sync Animation System Using Neural Network (신경회로망을 이용한 저가의 실시간 립싱크 애니메이션 시스템의 구현)

  • 강이철;김철기;김미숙;차의영
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.619-621 / 2000
  • With recent advances in Internet technology, broadcasting businesses such as real-time Internet video services have flourished, and animation is increasingly used to add expressive services. Today this is typically done by capturing coordinates with an expensive motion-capture system, correcting them appropriately, and then driving the character. Such a motion-capture system is costly, and real-time processing additionally requires coordinate correction. Using the coordinate extraction and tracking technique proposed in this paper, video is instead captured with an inexpensive consumer multimedia overlay capture board and a CCD camera, and the captured image coordinates are linked to an experimental GDI object, so that the animation is lip-synced in real time to the movements of the speaker's lips. Going further, this could enable precise on-screen dubbing through image processing when dubbing foreign films into Korean, and even cyber meetings using virtual characters.

  • PDF