• Title/Summary/Keyword: 독순술 (lip reading)


Korean Lip Reading System Using MobileNet (MobileNet을 이용한 한국어 입모양 인식 시스템)

  • Won-Jong Lee;Joo-Ah Kim;Seo-Won Son;Dong Ho Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.211-213 / 2022
  • Lip reading (독순술, 讀脣術) is the technique of determining what a speaker is saying from the movements of their lips. This paper presents a sentence-recognition study based on speakers' lip shapes, using 10 sentences from MBC and SBS news closing videos as data and MobileNet, a CNN (Convolutional Neural Network) architecture designed to run on mobile devices, as the model. The goal of this study is to recognize Korean lip shapes using MobileNet and LSTM. The news closing videos are split into frames to collect the 10 experimental sentences into a dataset; the lips are detected and localized in each input utterance video and then preprocessed. MobileNet and LSTM are then trained on the lip shapes of the spoken news closing sentences, and an experiment measures the recognition accuracy (a minimal model sketch follows below).

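Below is a minimal sketch of the MobileNet-plus-LSTM pipeline the abstract describes, assuming PyTorch and torchvision's pretrained MobileNetV2 as the per-frame encoder; the class name, hidden size, and input shapes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class LipReadingNet(nn.Module):
    """Sketch: MobileNet frame encoder + LSTM sequence model (names hypothetical)."""
    def __init__(self, num_classes=10, hidden_size=256):
        super().__init__()
        # MobileNetV2 backbone with its classifier head removed,
        # leaving a 1280-dim feature map per frame.
        backbone = mobilenet_v2(weights="IMAGENET1K_V1")
        self.encoder = backbone.features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        x = clips.flatten(0, 1)                    # (B*T, 3, H, W): encode frames independently
        x = self.pool(self.encoder(x)).flatten(1)  # (B*T, 1280) per-frame features
        x = x.view(b, t, -1)                       # restore the time axis
        _, (h, _) = self.lstm(x)                   # final hidden state summarizes the clip
        return self.head(h[-1])                    # (B, num_classes) sentence logits

model = LipReadingNet()
logits = model(torch.randn(2, 16, 3, 112, 112))    # 2 clips of 16 lip frames each
```

Each lip frame is encoded independently by the CNN backbone, and the LSTM's final hidden state summarizes the whole clip for 10-way sentence classification, matching the frame-wise preprocessing the abstract outlines.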

Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems / v.25 no.4 / pp.9-16 / 2020
  • Pronunciation recognition systems that rely on visual information alone, without audio, can support a variety of customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels from lip shapes in images. We extract feature vectors from the lip shapes in facial images and feed them to the designed machine learning model. Our experiments show a recognition rate of 94% for the vowel 'A' and an average recognition rate of approximately 84%, higher than that of the CNN tested for comparison. These results show that our Bayesian classification method, using feature values from lip-region landmarks, is effective on a small training set, so it can be used for application development on limited hardware such as mobile devices (a feature-and-classifier sketch follows below).
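As a rough illustration of this approach, here is a sketch of landmark-based feature extraction followed by a Bayesian (Gaussian naive Bayes) classifier, using NumPy and scikit-learn on synthetic stand-in data; the paper's actual landmark layout and feature set are not given here, so the geometric ratios below are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def lip_features(landmarks):
    """Map lip-contour landmarks (12, 2) to a scale-invariant feature vector.
    Index layout (corners at 0 and 6, top/bottom centers at 3 and 9) and the
    ratios themselves are illustrative, not the paper's feature set."""
    pts = np.asarray(landmarks, dtype=float)
    width = np.linalg.norm(pts[0] - pts[6]) + 1e-8   # corner-to-corner mouth width
    height = np.linalg.norm(pts[3] - pts[9])          # top- to bottom-center height
    return np.array([
        height / width,             # mouth openness
        pts[:, 0].std() / width,    # horizontal spread
        pts[:, 1].std() / width,    # vertical spread
    ])

# Synthetic stand-in data: 60 "images" of 12 lip landmarks each, three vowel
# classes that differ in how wide the mouth opens vertically.
rng = np.random.default_rng(0)
labels = np.arange(60) % 3
pts = rng.normal(size=(60, 12, 2))
pts[:, :, 1] *= (1.0 + 0.5 * labels)[:, None]        # taller mouth per class

X = np.stack([lip_features(p) for p in pts])
clf = GaussianNB().fit(X, labels)
print("training accuracy:", (clf.predict(X) == labels).mean())
```

A naive Bayes model estimates only per-class means and variances of each feature, which is why such a classifier can remain competitive on a small training set and cheap hardware, as the abstract argues.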

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.166-173 / 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typical lip-sync detection techniques crop the facial region of a given video and feed the lower half of the cropped box to a visual encoder to extract visual features. To place stronger emphasis on the articulatory lip region for more accurate lip-sync detection, we propose using a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task of predicting the transcript from visual information alone, without audio, is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer trainable parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% with five context frames. Moreover, it exceeds VocaList's lip-sync detection accuracy by approximately 8% even on Acappella, a dataset not seen during training (an attention-pooling sketch follows below).
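The published VTP module is not reproduced here; the PyTorch sketch below only illustrates the underlying idea: attention-weighted pooling over spatial tokens of the lip crop, followed by a cosine-similarity sync score between the pooled visual embedding and an audio embedding. All dimensions, class names, and the scoring head are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Attention-weighted pooling over spatial positions, in the spirit of the
    VTP idea of focusing on the articulatory lip region; an illustrative
    stand-in, not the published module."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                      # feats: (B, N, D) spatial tokens
        w = self.score(feats).softmax(dim=1)       # (B, N, 1) attention weights
        return (w * feats).sum(dim=1)              # (B, D) pooled descriptor

class SyncScorer(nn.Module):
    """Score audio-visual synchrony over a short context window (e.g. five
    frames) as the cosine similarity of pooled visual and audio embeddings."""
    def __init__(self, dim=512):
        super().__init__()
        self.visual_pool = AttentionPool(dim)
        self.proj_v = nn.Linear(dim, dim)
        self.proj_a = nn.Linear(dim, dim)

    def forward(self, vis_tokens, audio_emb):
        v = self.proj_v(self.visual_pool(vis_tokens))   # (B, D) visual embedding
        a = self.proj_a(audio_emb)                      # (B, D) audio embedding
        return nn.functional.cosine_similarity(v, a)   # (B,) in-sync score per clip

scorer = SyncScorer()
# 4 clips, 5 context frames x 49 spatial tokens each, 512-dim features
score = scorer(torch.randn(4, 5 * 49, 512), torch.randn(4, 512))
```

Letting learned attention weights decide which spatial tokens contribute to the clip descriptor is what allows the encoder to emphasize the lips over the rest of the lower-face crop, which is the motivation the abstract gives for adopting a visual attention-based encoder.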