• Title/Summary/Keyword: Lip Reading


The Effects of a Massage and Oro-facial Exercise Program on Spastic Dysarthrics' Lip Muscle Function

  • Hwang, Young-Jin;Jeong, Ok-Ran;Yeom, Ho-Joon
    • Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.55-64
    • /
    • 2004
  • This study examined the effects of a massage and oro-facial exercise program on the lip muscle function of spastic dysarthric patients using electromyography (EMG). Three subjects with spastic dysarthria participated in the study. Surface electrodes were positioned on the levator labii superior muscle (LLSM), the depressor labii inferior muscle (DLIM), and the orbicularis oris muscle (OOM). To examine improvement in lip muscle function, the EMG signals were analyzed in terms of RMS (root mean square) values and median frequency. In addition, diadochokinetic movements and the rate of sentence reading were measured. The results revealed that the RMS values decreased and the median frequency shifted toward the high-frequency range. Diadochokinesis and sentence reading rates improved.
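
The two EMG measures used above are standard and straightforward to reproduce. Below is a minimal Python sketch, not the authors' code, of computing the RMS value and the median frequency of an EMG signal with NumPy/SciPy; the sampling rate and the synthetic signal are assumptions standing in for the recorded data.

```python
import numpy as np
from scipy.signal import welch

fs = 1000                        # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
emg = np.random.randn(t.size)    # stand-in for a recorded EMG trace

# RMS value: overall amplitude of muscle activity
rms = np.sqrt(np.mean(emg ** 2))

# Median frequency: the frequency that splits the power spectrum in half
freqs, psd = welch(emg, fs=fs, nperseg=256)
cumulative = np.cumsum(psd)
median_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

print(f"RMS: {rms:.3f}, median frequency: {median_freq:.1f} Hz")
```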


Extraction of Lip Region using Chromaticity Transformation and Fuzzy Clustering (색도 변환과 퍼지 클러스터링을 이용한 입술영역 추출)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.7
    • /
    • pp.806-817
    • /
    • 2014
  • The extraction of the lip region is essential to lip reading, a field of image processing that derives meaningful information from the analysis of lip movement in human face images. Many conventional methods for extracting the lip region have been proposed. One obtains the lip position from the geometric structure of the face; the other discriminates between lip and skin regions using color information only. The former is more complex than the latter, but it can also handle black-and-white images. The latter is very simple by comparison, but discriminating between lip and skin regions is difficult because the two are closely similar in color, and its accuracy is relatively low. Conventional analyses of color coordinate systems have mostly been based on specific extraction schemes for lip regions rather than on the coordinate system itself. In this paper, a method for selecting an effective color coordinate system and a chromaticity transformation for discriminating between the lip and skin regions are proposed.
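
The two-step idea, a chromaticity transformation followed by fuzzy clustering into lip and skin classes, can be sketched roughly as follows; the paper's actual coordinate-selection procedure is more involved, so this plain fuzzy c-means over (r, g) chromaticity is an illustration under assumed parameters, not the proposed method.

```python
import numpy as np

def chromaticity(rgb):
    """Map RGB pixels to (r, g) chromaticity, discarding brightness."""
    s = rgb.sum(axis=-1, keepdims=True).clip(min=1e-6)
    return (rgb / s)[..., :2]

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Plain NumPy fuzzy c-means; returns memberships and cluster centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=0)
    return u, centers

img = np.random.randint(0, 256, (32, 32, 3)).astype(float)  # face-crop stand-in
X = chromaticity(img).reshape(-1, 2)
u, centers = fuzzy_cmeans(X)
lip_mask = (u[0] > 0.5).reshape(32, 32)   # one cluster taken as "lip"
```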

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.166-173
    • /
    • 2024
  • This study explores lip-sync detection, focusing on the synchronization between lip movements and voices in videos. Typically, lip-sync detection techniques crop the facial area of a given video and use the lower half of the cropped box as input to a visual encoder that extracts visual features. To place greater emphasis on the articulatory region of the lips for more accurate lip-sync detection, we propose utilizing a pre-trained visual attention-based encoder. The Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task of predicting a transcript from visual information alone, without audio, is employed as the visual encoder. Our experimental results demonstrate that, despite having fewer learning parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% based on five context frames. Moreover, our approach outperforms VocaList in lip-sync detection accuracy by approximately 8% even on an untrained dataset, Acappella.
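
The scoring step of audio-visual sync detection, comparing a pooled visual embedding with an audio embedding over a window of context frames, can be sketched as below. The attention pooling here is a simplified stand-in for the VTP module, and every dimension and module name is an assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Pool per-frame features with learned attention weights
    (a simplified stand-in for a VTP-style visual encoder)."""
    def __init__(self, dim=512):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):               # x: (batch, frames, dim)
        w = self.score(x).softmax(dim=1)
        return (w * x).sum(dim=1)       # (batch, dim)

pool = AttentionPool()
visual_feats = torch.randn(8, 5, 512)   # five context frames, assumed dim
audio_embed = torch.randn(8, 512)       # from some audio encoder

v = pool(visual_feats)
# Sync score: cosine similarity between the two modality embeddings
sync_score = F.cosine_similarity(v, audio_embed, dim=-1)
```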

Automatic Lip Reading Experiment by the Analysis of Edge (에지 분석에 의한 자동 독화 실험)

  • Lee, Kyong-Ho;Kum, Jong-Ju;Rhee, Sang-Bum
    • Journal of the Korea Computer Industry Society
    • /
    • v.9 no.1
    • /
    • pp.21-28
    • /
    • 2008
  • In this paper, edge parameters were extracted from images of the region around the lips during speech, and an effective automatic lip-reading system that recognizes the five Korean vowels 'a/e/i/o/u' was constructed using these parameters. The lip-region images were divided into 5×5 panes, and in each pane the number of edge elements detected with the Sobel operator was counted. The observational error between samples was corrected by normalization, and the normalized values were used as parameters. To verify the strength of the parameters, 50 normal subjects were sampled: the images of 10 subjects were analyzed, and the images of the other 40 were used in the recognition experiments. In total, 500 data items were gathered and analyzed. Based on this analysis, a neural network system was constructed, and recognition experiments were performed on 400 data items. The neural network achieved a best recognition rate of 91.1%.
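
The feature extraction described above, Sobel edge counts over a 5×5 grid followed by normalization, might look like the sketch below; the edge threshold and the normalization scheme are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy import ndimage

def edge_grid_features(gray, grid=5, thresh=100.0):
    """Count Sobel edge pixels in each cell of a grid x grid partition."""
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edges = np.hypot(gx, gy) > thresh           # binary edge map
    h, w = edges.shape
    counts = np.array([
        edges[i * h // grid:(i + 1) * h // grid,
              j * w // grid:(j + 1) * w // grid].sum()
        for i in range(grid) for j in range(grid)
    ], dtype=float)
    # Normalize to reduce variation between speakers and samples
    return counts / max(counts.sum(), 1.0)

lip_img = np.random.rand(100, 150) * 255        # stand-in for a lip image
features = edge_grid_features(lip_img)          # 25-dimensional vector
```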


A Lip-reading Algorithm Using Optical Flow and Properties of Articulatory Phonation (광류와 조음 발성 특성을 이용한 립리딩 알고리즘)

  • Lee, Mi Ae
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.7
    • /
    • pp.745-754
    • /
    • 2018
  • Language is an essential tool for verbal and emotional communication among human beings, enabling them to engage in social interactions. Although the majority of hearing-impaired people can speak, they are unable to receive feedback on their pronunciation. This results in impaired communication owing to incorrect pronunciation, which causes difficulties in their social interactions. If hearing-impaired people could receive continuous feedback on their pronunciation and phonation through lip-reading training, they could communicate more effectively with people without hearing disabilities, anytime and anywhere, without the use of sign language. In this study, the mouth area is detected from videos of learners speaking monosyllabic words. The grayscale information of the detected mouth area is used to estimate velocity vectors with optical flow, which are then quantified as feature values to classify vowels. Subsequently, a system is proposed that classifies monosyllables by algebraic computation on the geometric feature values of the lips, using the characteristics of articulatory phonation. Additionally, the system provides feedback by comparing the experimental results with the information obtained from the sample categories.
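
Velocity-vector features of this kind can be reproduced in outline with OpenCV's dense Farnebäck optical flow; in the rough sketch below, the flow parameters and the summary statistics are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def mouth_flow_features(prev_gray, next_gray):
    """Dense optical flow over the mouth ROI, summarized as feature values."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Quantify the velocity field: mean speed plus mean horizontal/vertical motion
    return np.array([mag.mean(), flow[..., 0].mean(), flow[..., 1].mean()])

prev_f = np.random.randint(0, 256, (64, 96), dtype=np.uint8)  # mouth crops
next_f = np.random.randint(0, 256, (64, 96), dtype=np.uint8)
feats = mouth_flow_features(prev_f, next_f)
```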

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam;Ko, Jong-Gook;SeungHo choi;Kim, Jin-Young;Kim, Ki-Jung
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.249-252
    • /
    • 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration for tackling the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel. Face tracking is a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results are shown with multimodal integration and user testing on the prototype system. It is noteworthy that the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable. Our long-term interest is to build a user interface embedded in a commercial car navigation system (CNS).
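
The abstract describes running the recognizers in parallel. One common way to combine such components is a weighted late fusion of per-item confidence scores, down-weighting speech in noise as the abstract suggests; the sketch below is purely illustrative and not necessarily the paper's integration scheme.

```python
import numpy as np

# Confidence of each modality for four candidate menu items (made-up numbers)
speech = np.array([0.70, 0.10, 0.15, 0.05])
lip    = np.array([0.40, 0.30, 0.20, 0.10])
gaze   = np.array([0.20, 0.10, 0.60, 0.10])

# Down-weight speech in noisy conditions, where its recognition rate drops
weights = {"speech": 0.2, "lip": 0.4, "gaze": 0.4}
fused = (weights["speech"] * speech
         + weights["lip"] * lip
         + weights["gaze"] * gaze)
selected_item = int(np.argmax(fused))   # menu item chosen by the fused score
```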


Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석과 SVM을 이용한 입술 위치 검출)

  • 정지년;양현승
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.4
    • /
    • pp.403-410
    • /
    • 2004
  • Bimodal speech recognition systems have been proposed to enhance the recognition rate of ASR in noisy environments. Visual feature extraction is very important in developing these systems; to extract visual features, it is necessary to detect the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned, and the initial lip position is found using it. The exact lip position is then detected by scanning the neighboring area with the SVM. Experiments show that this method detects the lip position accurately and quickly.
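
The coarse-to-fine pipeline above, an initial estimate from a learned color distribution refined by scanning nearby patches with an SVM, can be sketched as follows; the Gaussian color model, patch size, and stand-in training data are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Step 1: model lip color as a Gaussian over training lip pixels (stand-ins)
lip_pixels = np.random.rand(500, 3)
mu, cov_inv = lip_pixels.mean(0), np.linalg.inv(np.cov(lip_pixels.T))

def color_similarity(pixel):
    d = pixel - mu
    return np.exp(-0.5 * d @ cov_inv @ d)    # higher = more lip-like

# Step 2: SVM trained on flattened lip vs. non-lip patches (stand-in data)
patches = np.random.rand(200, 16 * 16)
labels = np.random.randint(0, 2, 200)
svm = SVC(kernel="rbf").fit(patches, labels)

# Refine: classify patches scanned around the initial color-based estimate
candidate = np.random.rand(1, 16 * 16)
is_lip = svm.predict(candidate)[0]
```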

Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Pronunciation recognition systems that use only video information and ignore voice information can be applied to various customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels via lip shapes in images. We extract feature vectors from the lip shapes of facial images and apply them to the designed machine learning model. Our experiments show that the system's recognition rate is 94% for the pronunciation of 'A', and the system's average recognition rate is approximately 84%, which is higher than that of the CNN tested for comparison. Our results show that our Bayesian classification method with feature values from lip region landmarks is efficient on a small training set. Therefore, it can be used for application development on limited hardware such as mobile devices.
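
A Bayesian classifier over lip-shape features, in the spirit of this paper, reduces to a few lines with scikit-learn; the feature construction below (a generic vector per image, e.g. distances between mouth landmarks) is an assumption for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Stand-in features: e.g. distances between lip-region landmarks per image
X_train = rng.random((300, 10))
y_train = rng.integers(0, 5, 300)        # five Korean vowel classes

clf = GaussianNB().fit(X_train, y_train)
pred = clf.predict(rng.random((1, 10)))  # classify a new lip shape
```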

Korean Lip Reading System Using MobileNet (MobileNet을 이용한 한국어 입모양 인식 시스템)

  • Won-Jong Lee;Joo-Ah Kim;Seo-Won Son;Dong Ho Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.11a
    • /
    • pp.211-213
    • /
    • 2022
  • Lip reading (독순술, 讀脣術) is a technique for determining what a speaker is saying by watching the movements of the lips. This paper presents the results of a sentence recognition study based on a speaker's lip shapes, using as data ten sentences from MBC and SBS news closing segments and, as the model, MobileNet, a CNN (Convolutional Neural Network) architecture designed to run on mobile devices. The goal of this work is to recognize Korean lip shapes using MobileNet and LSTM. News closing videos were cut into frames to collect the ten experimental sentences and build a dataset; the lips were then detected and extracted from the input videos of the utterances, followed by a preprocessing step. Finally, MobileNet and LSTM were trained on the lip shapes of the spoken news closing sentences, and an experiment was conducted to measure the accuracy.
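
A minimal PyTorch sketch of a MobileNet-plus-LSTM pipeline of the kind described here is shown below; the input size, pooling, hidden width, and ten-class head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class LipSentenceNet(nn.Module):
    """Per-frame MobileNet features fed to an LSTM sentence classifier."""
    def __init__(self, num_sentences=10, hidden=256):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features  # 1280-channel maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_sentences)

    def forward(self, clips):             # clips: (B, T, 3, 96, 96)
        b, t = clips.shape[:2]
        f = self.backbone(clips.flatten(0, 1))
        f = self.pool(f).flatten(1).view(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])      # logits over the ten sentences

logits = LipSentenceNet()(torch.randn(2, 16, 3, 96, 96))
```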


Real-Time Lip Reading System Implementation Based on Deep Learning (딥러닝 기반의 실시간 입모양 인식 시스템 구현)

  • Cho, Dong-Hun;Kim, Won-Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.267-269
    • /
    • 2020
  • Lip reading is a technique that analyzes speech from lip movements. In this paper, we studied the real-time classification of ten commonly used phrases by analyzing the speaker's facial movements. Considering that video data consists of temporally ordered frames, we initially used a 3D convolutional neural network (CNN), but the computational load had to be reduced to implement a real-time system. To address this, we designed a model combining a 2D CNN operating on frame-difference images with an LSTM (Long Short-Term Memory) recurrent neural network, and successfully implemented a real-time system with this model.
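
The cost-saving idea here, replacing a 3D CNN with a 2D CNN over frame-difference images followed by an LSTM, can be sketched as below; the layer sizes and input resolution are assumptions.

```python
import torch
import torch.nn as nn

class DiffLipNet(nn.Module):
    """2D CNN on frame differences plus an LSTM, avoiding a costly 3D CNN."""
    def __init__(self, num_phrases=10, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phrases)

    def forward(self, frames):                 # frames: (B, T, 1, 64, 64)
        diffs = frames[:, 1:] - frames[:, :-1] # difference images
        b, t = diffs.shape[:2]
        f = self.cnn(diffs.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])

logits = DiffLipNet()(torch.randn(2, 16, 1, 64, 64))
```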
