Design and Implementation of a Real-Time Lipreading System Using PCA & HMM

PCA와 HMM을 이용한 실시간 립리딩 시스템의 설계 및 구현

  • 이지근 (Department of Computer Engineering, Graduate School, Wonkwang University) ;
  • 이은숙 ((주) 뮤콤 Development Team) ;
  • 정성태 (School of Electrical, Electronic and Information Engineering, Wonkwang University) ;
  • 이상설 (School of Electrical, Electronic and Information Engineering, Wonkwang University)
  • Published : 2004.11.01


Many lipreading systems have been proposed to compensate for the drop in speech-recognition rates in noisy environments. Previous lipreading systems work only under specific conditions, such as artificial lighting and a predefined background color. In this paper, we propose a real-time lipreading system that allows speaker motion and relaxes the restrictions on color and lighting. The proposed system extracts the face and lip region from a video sequence captured with a common PC camera, extracts the essential visual information, and recognizes uttered words, all in real time. It uses a hue histogram model to extract the face and lip region, the mean shift algorithm to track the face of a moving speaker, PCA (Principal Component Analysis) to extract visual features for training and testing, and an HMM (Hidden Markov Model) as the recognition algorithm. The experimental results show that our system achieves a recognition rate of 90% for speaker-dependent lipreading and, when combined with audio speech recognition, raises the speech-recognition rate to 40~85% depending on the noise level.
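As a rough sketch of the PCA step described in the abstract (the paper's actual training data, image sizes, and component count are not given, so all shapes and names here are illustrative), flattened lip-region images can be projected onto a small set of principal components to form the low-dimensional feature vectors fed to the HMM:

```python
import numpy as np

def fit_pca(images, n_components):
    """Fit PCA on flattened lip-region images.

    images: (n_samples, height*width) array of lip crops.
    Returns (mean, components), where components has shape
    (n_components, height*width) and its rows are orthonormal.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data; rows of vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(image, mean, components):
    """Project one flattened image onto the principal components,
    yielding a short feature vector suitable for HMM observation input."""
    return components @ (image - mean)

# Toy usage: 20 random 16x16 "lip images" reduced to 5 features each.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 16 * 16))
mean, comps = fit_pca(data, 5)
features = np.array([project(x, mean, comps) for x in data])
print(features.shape)  # (20, 5)
```

In a full pipeline along the lines the abstract describes, the sequence of such per-frame feature vectors for one utterance would serve as the observation sequence scored by each word's HMM.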