• Title/Summary/Keyword: Bimodal speech recognition

Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance (음성인식에서 입술 파라미터 열화에 따른 견인성 연구)

  • Kim, Jin-Young;Min, So-Hee;Choi, Seung-Ho
    • Speech Sciences / v.10 no.2 / pp.27-33 / 2003
  • Bimodal speech recognition based on lip reading has been studied as a representative approach to speech recognition in noisy environments. There are three methods for integrating the speech and lip modalities, sketched in code below: direct identification, separate identification, and dominant recoding. In this paper we evaluate the robustness of lip reading methods under the assumption that the lip parameters are estimated with errors. Through lip reading experiments, we show that the dominant recoding approach is more robust than the other methods.

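A minimal sketch of how these three integration schemes differ in where the audio and lip streams meet; the classifiers, the fusion weight, and the visual-to-auditory mapping are placeholders, not the paper's components.

```python
import numpy as np

def direct_identification(audio_feat, visual_feat, joint_model):
    # DI: one classifier sees the concatenated audio+lip feature vector.
    return joint_model(np.concatenate([audio_feat, visual_feat]))

def separate_identification(audio_scores, visual_scores, w=0.7):
    # SI: each modality is classified alone; per-word scores are fused late.
    return w * audio_scores + (1.0 - w) * visual_scores

def dominant_recoding(visual_feat, v2a_map, audio_model):
    # DR: the lip stream is first recoded into the dominant (auditory)
    # representation, then scored by the audio-domain model.
    return audio_model(v2a_map(visual_feat))

# e.g. SI over per-word log-likelihoods from the two recognizers
fused = separate_identification(np.log([0.7, 0.3]), np.log([0.4, 0.6]))
```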

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.3 / pp.164-170 / 2013
  • Recent years have seen higher demands for automatic speech recognition (ASR) systems that can operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built from independently trained audio and visual HMMs, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to automatically adjust the relative weight of the audio and video streams according to the noise environment (see the sketch below). Experimental results show that this approach obtains better recognition performance than other bimodal speech recognition approaches.
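
A rough illustration of the stream-weighted emission score in a product HMM: in the log domain, the audio and visual emission likelihoods of a composite state are combined under an SNR-dependent exponent. The sigmoid SNR-to-weight mapping below is an assumption of this sketch, not the paper's rule.

```python
import numpy as np

def product_hmm_emission(log_b_audio, log_b_video, snr_db):
    # Composite state (i, j): log b(o) = lam * log b_a(o_a|i)
    #                                  + (1 - lam) * log b_v(o_v|j)
    # The audio weight lam rises with SNR (placeholder mapping).
    lam = 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 5.0))
    return lam * log_b_audio + (1.0 - lam) * log_b_video

print(product_hmm_emission(-4.2, -6.1, snr_db=0.0))   # noisy: trust lips more
print(product_hmm_emission(-4.2, -6.1, snr_db=30.0))  # clean: trust audio
```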

Subword-based Lip Reading Using State-tied HMM (상태공유 HMM을 이용한 서브워드 단위 기반 립리딩)

  • Kim, Jin-Young;Shin, Do-Sung
    • Speech Sciences / v.8 no.3 / pp.123-132 / 2001
  • In recent years, research on HCI technology has been very active, and speech recognition is used as its typical method. Recognition performance, however, deteriorates as surrounding noise increases. To address this problem, multimodal HCI is being studied intensively. This paper describes automated lip reading for bimodal speech recognition based on image and speech information. It employs an audio-visual DB containing 1,074 words from 70 speakers, tri-visemes as the recognition unit (illustrated below), and state-tied HMMs as the recognition model. Recognition performance is evaluated on vocabularies of 22 to 1,000 words, achieving a word recognition rate of 60.5% with the 22-word recognizer.

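For readers unfamiliar with the unit, the sketch below builds context-dependent tri-viseme labels (L-C+R, by analogy with triphones) from a phone string; the phone-to-viseme table is a hypothetical toy inventory, not the paper's.

```python
# Hypothetical phone-to-viseme table (toy inventory for illustration).
PHONE2VISEME = {"p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
                "f": "V_labiodental", "v": "V_labiodental",
                "a": "V_open", "i": "V_spread", "u": "V_round"}

def tri_visemes(phones):
    # Map phones to visemes, then emit left-context-right units L-C+R.
    vis = [PHONE2VISEME.get(p, "V_other") for p in phones]
    padded = ["sil"] + vis + ["sil"]
    return [f"{padded[k]}-{padded[k+1]}+{padded[k+2]}" for k in range(len(vis))]

print(tri_visemes(["p", "a", "m"]))
# ['sil-V_bilabial+V_open', 'V_bilabial-V_open+V_bilabial', 'V_open-V_bilabial+sil']
```

State tying would then let tri-visemes with similar contexts share HMM states, keeping the model count tractable.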

A Study on the Robust Bimodal Speech-recognition System in Noisy Environments (잡음 환경에 강인한 이중모드 음성인식 시스템에 관한 연구)

  • 이철우;고인선;계영철
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.28-34 / 2003
  • Recent research has focused on jointly using lip motion (i.e., visual speech) and speech for reliable speech recognition in noisy environments. This paper deals with combining the result of a visual speech recognizer and that of a conventional speech recognizer by putting weights on each result: it proposes a method for determining proper weights, and in particular the weights are determined autonomously, depending on the amount of noise in the speech and on the image quality (a toy version is sketched below). Simulation results show that combining audio and visual recognition by the proposed method provides 84% recognition performance even in severely noisy environments. It is also shown that, in the presence of image blur, the newly proposed weighting method, which takes the blur into account as well, yields better performance than the other methods.
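
A minimal sketch of this style of late fusion, assuming a linear SNR-to-weight ramp and a multiplicative blur penalty; both functional forms are placeholders for the paper's actual rules.

```python
import numpy as np

def fused_scores(audio_ll, visual_ll, snr_db, blur):
    # audio_ll, visual_ll: per-word log-likelihoods from the two recognizers
    # snr_db: estimated speech SNR; blur: image blur measure in [0, 1]
    w_a = np.clip((snr_db + 5.0) / 30.0, 0.0, 1.0)  # more noise -> trust lips
    w_v = (1.0 - w_a) * (1.0 - blur)                # blur discounts the lip score
    return (w_a * audio_ll + w_v * visual_ll) / (w_a + w_v + 1e-9)

words = np.array(["on", "off"])
scores = fused_scores(np.log([0.4, 0.6]), np.log([0.8, 0.2]), snr_db=0.0, blur=0.1)
print(words[np.argmax(scores)])  # at low SNR the clear visual evidence wins
```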

A Study on Combining Bimodal Sensors for Robust Speech Recognition (강인한 음성인식을 위한 이중모드 센서의 결합방식에 관한 연구)

  • 이철우;계영철;고인선
    • The Journal of the Acoustical Society of Korea / v.20 no.6 / pp.51-56 / 2001
  • Recent research has focused on jointly using lip motion and speech for reliable speech recognition in noisy environments. To this end, this paper proposes combining a visual speech recognizer and a conventional speech recognizer with each output properly weighted. In particular, we propose a method for determining the weights autonomously, depending on the amount of noise in the speech. The correlation between adjacent speech samples and the residual error of LPC analysis are used for this determination (see the sketch below). Simulation results show that a speech recognizer combined in this way provides 83% recognition performance even in severely noisy environments.

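The two acoustic noise cues named in the abstract can be computed per frame roughly as follows; the frame length, LPC order, and the use of SciPy's Toeplitz solver for the autocorrelation normal equations are choices of this sketch, not the paper's.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def noise_indicators(frame, order=10):
    # (1) lag-1 autocorrelation: adjacent-sample correlation drops as
    #     broadband noise is added to the speech;
    # (2) normalized LPC residual energy: prediction error rises with noise.
    x = frame - np.mean(frame)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    adj_corr = r[1] / (r[0] + 1e-12)
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # Yule-Walker
    resid_energy = r[0] - np.dot(a, r[1:order + 1])
    return adj_corr, resid_energy / (r[0] + 1e-12)

rng = np.random.default_rng(0)
clean = np.sin(0.3 * np.arange(240))
print(noise_indicators(clean + 0.05 * rng.standard_normal(240)))
print(noise_indicators(clean + 1.00 * rng.standard_normal(240)))  # noisier frame
```

Both cues move monotonically with the noise level, so either (or both) can drive the audio/visual weighting.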

Robustness of Bimodal Speech Recognition on Degradation of Lip Parameter Estimation Performance (음성인식에서 입술 파라미터 열화에 따른 견인성 연구)

  • Kim Jinyoung;Shin Dosung;Choi Seungho
    • Proceedings of the KSPS conference / 2002.11a / pp.205-208 / 2002
  • Bimodal speech recognition based on lip reading has been studied as a representative approach to speech recognition in noisy environments. There are three methods for integrating the speech and lip modalities: direct identification, separate identification, and dominant recoding. In this paper we evaluate the robustness of lip reading methods under the assumption that the lip parameters are estimated with errors. Through lip reading experiments, we show that the dominant recoding approach is more robust than the other methods. A measure of lip parameter degradation is also proposed; this measure can be used to determine the weighting values for the video information.


Comparison of Integration Methods of Speech and Lip Information in the Bi-modal Speech Recognition (바이모달 음성인식의 음성정보와 입술정보 결합방법 비교)

  • 박병구;김진영;최승호
    • The Journal of the Acoustical Society of Korea / v.18 no.4 / pp.31-37 / 1999
  • Bimodal speech recognition using visual and audio information has been proposed and studied to improve the performance of ASR (Automatic Speech Recognition) systems in noisy environments. The integration of the two modalities is usually classified into early integration and late integration. The early integration methods include one using a fixed weight for the lip parameters and one using a variable weight according to the speech SNR. The four late integration methods are: using the audio and visual information independently, using the optimal speech path, using the optimal lip path, and using the speech SNR. Among these six methods, the one using a fixed weight for the lip parameters showed the best recognition rate (sketched below).

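A minimal sketch of early (feature-level) integration with a fixed lip weight, the variant the abstract reports as best; the weight value and feature dimensions here are illustrative assumptions.

```python
import numpy as np

def early_integration(mfcc, lip_params, lip_weight=0.3):
    # Scale and concatenate frame-aligned audio and lip features so a single
    # recognizer sees one audio-visual observation vector per frame.
    assert mfcc.shape[0] == lip_params.shape[0], "streams must be frame-aligned"
    return np.hstack([(1.0 - lip_weight) * mfcc, lip_weight * lip_params])

# e.g. 100 frames of 12 MFCCs + 4 lip parameters -> 100 x 16 fused vectors
fused = early_integration(np.random.randn(100, 12), np.random.randn(100, 4))
print(fused.shape)
```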

Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System (바이모달 음성인식기의 시각 특징 추출을 위한 색상 분석과 SVM을 이용한 입술 위치 검출)

  • 정지년;양현승
    • Journal of KIISE:Software and Applications / v.31 no.4 / pp.403-410 / 2004
  • Bimodal speech recognition systems have been proposed to enhance the recognition rate of ASR in noisy environments. Visual feature extraction is very important in developing these systems, and extracting visual features requires detecting the exact lip position. This paper proposes a method that detects the lip position using a color similarity model and an SVM. The face/lip color distribution is learned and used to find the initial lip position; the exact lip position is then detected by scanning the neighboring area with an SVM (a toy version follows below). Experiments show that this method detects the lip position accurately and quickly.
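
The two-stage idea might look roughly like this: a learned lip-hue likelihood proposes a coarse position, and a lip/non-lip SVM rescores windows around it. The hue statistics, patch size, search radius, and toy training data are all placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def color_likelihood(hue, lip_hue_mean=0.97, lip_hue_std=0.03):
    # Per-pixel similarity of hue to a learned lip color model (hue is circular).
    d = np.abs(hue - lip_hue_mean)
    d = np.minimum(d, 1.0 - d)
    return np.exp(-0.5 * (d / lip_hue_std) ** 2)

def refine_with_svm(img, y0, x0, svm, size=16, search=4):
    # Scan a small neighborhood of the color-based guess; keep the window
    # the lip/non-lip SVM scores highest.
    best, best_score = (y0, x0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            patch = img[y:y + size, x:x + size].ravel()[None, :]
            score = svm.decision_function(patch)[0]
            if score > best_score:
                best, best_score = (y, x), score
    return best

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 16 * 16))      # toy lip / non-lip patches
svm = SVC(kernel="linear").fit(X, np.repeat([1, 0], 20))
print(refine_with_svm(rng.random((64, 64)), 24, 24, svm))
```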

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks (신경망 기반 음성, 영상 및 문맥 통합 음성인식)

  • 김명원;한문성;이순신;류정우
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.67-77 / 2004
  • Recent research has focused on the fusion of audio and visual features for reliable speech recognition in noisy environments. In this paper, we propose a neural network based model of robust speech recognition that integrates audio, visual, and contextual information. The Bimodal Neural Network (BMNN) is a 4-layer multi-layer perceptron, each layer of which performs a certain level of abstraction of the input features (a rough stand-in is sketched below). In the BMNN, the third layer combines the audio and visual features of speech to compensate for the loss of audio information caused by noise. To improve the accuracy of speech recognition in noisy environments, we also propose post-processing based on contextual information, namely the sequential patterns of words spoken by a user. Our experimental results show that our model outperforms all single-modality models. In particular, when the contextual information is used, we obtain over 90% recognition accuracy even in noisy environments, a significant improvement over the state of the art in speech recognition. Our research demonstrates that diverse sources of information need to be integrated to improve the accuracy of speech recognition, particularly in noisy environments.
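
A rough PyTorch stand-in for the 4-layer topology described above: per-modality abstraction layers, a third layer that merges the two streams, and an output layer over words. All layer sizes, activations, and the word-inventory size are assumptions of this sketch; the contextual post-processing (rescoring word hypotheses against a user's typical word sequences) is not shown.

```python
import torch
import torch.nn as nn

class BMNNSketch(nn.Module):
    def __init__(self, audio_dim=26, visual_dim=8, hidden=64, n_words=50):
        super().__init__()
        # Layers 1-2: separate abstraction of each modality.
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.Sigmoid())
        self.visual_net = nn.Sequential(nn.Linear(visual_dim, hidden), nn.Sigmoid())
        # Layer 3: merge the streams so lips can compensate for noisy audio.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        # Layer 4: word classification.
        self.out = nn.Linear(hidden, n_words)

    def forward(self, audio, visual):
        h = torch.cat([self.audio_net(audio), self.visual_net(visual)], dim=-1)
        return self.out(self.fusion(h))  # word logits

logits = BMNNSketch()(torch.randn(5, 26), torch.randn(5, 8))
print(logits.shape)  # torch.Size([5, 50])
```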

Performance Comparison and Verification of Lip Parameter Selection Methods in the Bimodal Speech Recognition System (입술 파라미터 선정에 따른 바이모달 음성인식 성능 비교 및 검증)

  • 박병구;김진영;임재열
    • The Journal of the Acoustical Society of Korea / v.18 no.3 / pp.68-72 / 1999
  • The choice of parameters from the various kinds of lip information, and the robustness with which those parameters are extracted, play important roles in the performance of a bimodal speech recognition system. In this paper, lip parameters are extracted with an automatic extraction algorithm, and the inner lip parameters are found to affect the recognition rate more than the outer lip parameters (the two kinds of parameter are illustrated below). The robustness of the automatic extraction method is also evaluated against a manual extraction algorithm.

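To illustrate the distinction between inner and outer lip parameters, the sketch below computes width and height for both contours from landmark points; the landmark format is an assumption, and the paper's automatic extractor is not reproduced here.

```python
import numpy as np

def lip_parameters(outer_pts, inner_pts):
    # outer_pts, inner_pts: (N, 2) arrays of (x, y) lip-contour landmarks.
    def width_height(pts):
        pts = np.asarray(pts, dtype=float)
        w = pts[:, 0].max() - pts[:, 0].min()  # bounding width
        h = pts[:, 1].max() - pts[:, 1].min()  # bounding height
        return w, h

    ow, oh = width_height(outer_pts)
    iw, ih = width_height(inner_pts)
    return {"outer_w": ow, "outer_h": oh, "inner_w": iw, "inner_h": ih}

print(lip_parameters([[0, 0], [40, 0], [20, 18]], [[10, 4], [30, 4], [20, 12]]))
```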