• Title/Summary/Keyword: HMM


Sign Language Spotting Based on Semi-Markov Conditional Random Field (세미-마르코프 조건 랜덤 필드 기반의 수화 적출)

  • Cho, Seong-Sik;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.12
    • /
    • pp.1034-1037
    • /
    • 2009
  • Sign language spotting is the task of detecting the start and end points of signs in continuous data and recognizing the detected signs within a predefined vocabulary. The difficulty with sign language spotting is that instances of signs vary in both motion and shape. Moreover, signs vary in both trajectory and length; variable sign lengths in particular cause problems when spotting signs in a video sequence, because short signs carry less information and fewer changes than long signs. In this paper, we propose a method for spotting variable-length signs based on a semi-CRF (semi-Markov Conditional Random Field). We performed experiments on ASL (American Sign Language) and KSL (Korean Sign Language) datasets of continuous sign sentences to demonstrate the efficiency of the proposed method. Experimental results show that the proposed method outperforms both HMM and CRF.
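The abstract above centers on segment-level (semi-Markov) decoding over variable-length sign candidates. As a rough illustration only, not the authors' model, the Python sketch below runs a minimal semi-Markov Viterbi search over variable-length segments; the per-frame label scores are random stand-ins for the semi-CRF feature functions and learned weights.

```python
import numpy as np

def semi_markov_decode(frame_scores, max_seg_len, labels):
    """Minimal semi-Markov (segment-level) Viterbi sketch.

    frame_scores: (T, L) array of per-frame label scores, a placeholder for
    the semi-CRF potentials. Returns (start, end, label) triples.
    """
    T, L = frame_scores.shape
    best = np.full(T + 1, -np.inf)      # best score of a segmentation ending at frame t
    best[0] = 0.0
    back = [None] * (T + 1)             # back-pointers: (segment start, label index)

    for t in range(1, T + 1):
        for d in range(1, min(max_seg_len, t) + 1):    # candidate segment length
            s = t - d
            seg = frame_scores[s:t]                    # score the whole segment at once
            for y in range(L):
                score = best[s] + seg[:, y].sum()
                if score > best[t]:
                    best[t], back[t] = score, (s, y)

    segments, t = [], T                 # trace the best segmentation back
    while t > 0:
        s, y = back[t]
        segments.append((s, t, labels[y]))
        t = s
    return segments[::-1]

# toy usage: two sign classes plus a non-sign "filler" class
scores = np.random.randn(20, 3)
print(semi_markov_decode(scores, max_seg_len=6, labels=["sign_A", "sign_B", "filler"]))
```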

Context-adaptive Phoneme Segmentation for a TTS Database (문자-음성 합성기의 데이터 베이스를 위한 문맥 적응 음소 분할)

  • 이기승;김정수
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.2
    • /
    • pp.135-144
    • /
    • 2003
  • A method for the automatic segmentation of speech signals is described. The method is dedicated to the construction of a large database for a Text-To-Speech (TTS) synthesis system. The main issue of the work is the refinement of initial phone-boundary estimates provided by an alignment based on a Hidden Markov Model (HMM). A multi-layer perceptron (MLP) was used as a phone boundary detector. To increase segmentation performance, a technique that trains a separate MLP for each phonetic transition is proposed. The optimal partitioning of the entire phonetic-transition space is constructed so as to minimize the overall deviation from hand-labelled positions. With single-speaker stimuli, the experimental results showed that more than 95% of all phone boundaries deviate from the reference position by less than 20 ms, and that the refinement of the boundaries reduces the root-mean-square error by about 25%.
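For illustration, the following Python sketch shows the general idea of refining HMM-proposed phone boundaries with an MLP, here using scikit-learn's MLPRegressor on synthetic stand-in features; the paper trains one MLP per phonetic-transition class, which is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: for each HMM-proposed boundary, a window of acoustic
# features around it (flattened), and the hand-labelled correction in ms.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 5 * 13))    # 5 frames x 13 coefficients, stand-in features
y_train = rng.normal(scale=15.0, size=500)  # signed offset to the true boundary (ms)

# The paper uses one detector per phonetic-transition class; a single one here.
refiner = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
refiner.fit(X_train, y_train)

hmm_boundary_ms = 1240.0                    # initial estimate from the HMM alignment
window_features = rng.normal(size=(1, 5 * 13))
refined_ms = hmm_boundary_ms + refiner.predict(window_features)[0]
print(f"refined boundary: {refined_ms:.1f} ms")
```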

Comparison of Male/Female Speech Features and Improvement of Recognition Performance by Gender-Specific Speech Recognition (남성과 여성의 음성 특징 비교 및 성별 음성인식에 의한 인식 성능의 향상)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.6
    • /
    • pp.568-574
    • /
    • 2010
  • In an effort to improve the speech recognition rate, we compared the performance of speaker-independent and gender-specific speech recognition. For this purpose, 20 male and 20 female speakers each pronounced 300 isolated Korean words, and the recordings were divided into four groups: female, male, and two mixed-gender groups. To examine the validity of gender-specific speech recognition, Fourier spectra and MFCC feature vectors averaged separately over male and female speakers were examined. The result showed a clear distinction between the two genders, which supports the motivation for gender-specific speech recognition. In recognition experiments, the error rate for the gender-specific case was less than 50% of that for the speaker-independent case. From these results, it is suggested that hierarchical recognition, in which gender classification precedes word recognition, may yield better performance than the current speaker-independent approach.
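A minimal Python sketch of the hierarchical scheme suggested at the end of the abstract: decide the speaker's gender from averaged feature vectors first, then dispatch to a gender-specific recognizer. All data, templates, and recognizers below are hypothetical stand-ins, not the paper's setup.

```python
import numpy as np

# Hypothetical per-utterance feature matrices (e.g. frame-wise MFCCs).
rng = np.random.default_rng(1)
male_feats = rng.normal(loc=-0.5, size=(200, 13))
female_feats = rng.normal(loc=+0.5, size=(200, 13))

# Gender templates: mean feature vectors, echoing the averaged-MFCC comparison.
male_mean, female_mean = male_feats.mean(axis=0), female_feats.mean(axis=0)

def recognize(utterance_feats, male_recognizer, female_recognizer):
    """Hierarchical recognition: pick the gender by nearest mean, then run
    the matching gender-specific recognizer."""
    v = utterance_feats.mean(axis=0)
    if np.linalg.norm(v - male_mean) < np.linalg.norm(v - female_mean):
        return male_recognizer(utterance_feats)
    return female_recognizer(utterance_feats)

# toy recognizers standing in for gender-specific acoustic model sets
result = recognize(rng.normal(loc=0.6, size=(80, 13)),
                   male_recognizer=lambda f: "word from male models",
                   female_recognizer=lambda f: "word from female models")
print(result)
```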

Sliding Active Camera-based Face Pose Compensation for Enhanced Face Recognition (얼굴 인식률 개선을 위한 선형이동 능동카메라 시스템기반 얼굴포즈 보정 기술)

  • 장승호;김영욱;박창우;박장한;남궁재찬;백준기
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.155-164
    • /
    • 2004
  • Recently, there have been remarkable developments in intelligent robot systems. The notable features of an intelligent robot are that it can track the user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition over other biometrics is that the coerciveness and contact usually required to acquire the characteristics are not needed. However, the accuracy of face recognition is lower than that of other biometrics due to the reduction in dimension during the image acquisition step and the various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, changes in lighting, pose changes, and changes in facial expression. In this paper, we implement a new sliding active camera system to prevent the pose variations that degrade face recognition performance, and we use the acquired frontal face images with PCA and HMM methods to improve face recognition. The proposed face recognition algorithm can be used for intelligent surveillance systems and mobile robot systems.
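As a rough sketch of the PCA stage mentioned in the abstract (the HMM stage and the sliding active camera control are not reproduced), the following Python code projects face images into an eigenface subspace and matches a probe by nearest neighbour; the gallery images, identities, and dimensions are arbitrary stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical gallery of frontal face images captured by the active camera,
# flattened to vectors (random 32x32 stand-ins here).
rng = np.random.default_rng(2)
gallery = rng.random((100, 32 * 32))
gallery_ids = np.arange(100)

pca = PCA(n_components=40)                   # eigenface subspace
gallery_proj = pca.fit_transform(gallery)

def identify(face_image_vec):
    """Nearest-neighbour match in the eigenface subspace."""
    probe = pca.transform(face_image_vec.reshape(1, -1))
    dists = np.linalg.norm(gallery_proj - probe, axis=1)
    return gallery_ids[int(np.argmin(dists))]

print("matched identity:", identify(rng.random(32 * 32)))
```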

On the Development of a Large-Vocabulary Continuous Speech Recognition System for the Korean Language (대용량 한국어 연속음성인식 시스템 개발)

  • Choi, In-Jeong;Kwon, Oh-Wook;Park, Jong-Ryeal;Park, Yong-Kyu;Kim, Do-Yeong;Jeong, Ho-Young;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.5
    • /
    • pp.44-50
    • /
    • 1995
  • This paper describes a large-vocabulary continuous speech recognition system for the Korean language that uses continuous hidden Markov models. To improve the performance of the system, we study the selection of speech modeling units, inter-word modeling, the search algorithm, and grammars. We use triphones as the basic speech modeling units; generalized triphones and function-word-dependent phones are used to improve the trainability of the speech units and to reduce errors in function words. Silence between words is optionally inserted by using a silence model and a null transition. A word-pair grammar and a bigram model based on word classes are used. We also implement a search algorithm to find the N-best candidate sentences. A postprocessor reorders the N-best sentences using a word-triple grammar, selects the most likely sentence as the final recognition result, and finally corrects trivial errors related to postpositions. In recognition tests on a 3,000-word continuous speech database, the system attained 93.1% word recognition accuracy and 73.8% sentence recognition accuracy when the word-triple grammar was used in postprocessing.
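The N-best rescoring step with a word-triple (trigram) grammar can be illustrated with a small Python sketch; the trigram probabilities, back-off value, and hypotheses below are invented for illustration and do not come from the paper.

```python
import math

# Hypothetical trigram log-probabilities standing in for the word-triple grammar.
trigram_logp = {
    ("<s>", "서울", "날씨"): math.log(0.20),
    ("서울", "날씨", "알려줘"): math.log(0.30),
    ("<s>", "서울", "낙시"): math.log(0.01),
    ("서울", "낙시", "알려줘"): math.log(0.05),
}
BACKOFF = math.log(1e-4)    # crude floor for unseen word triples

def trigram_score(words):
    padded = ["<s>", "<s>"] + words
    return sum(trigram_logp.get(tuple(padded[i:i + 3]), BACKOFF)
               for i in range(len(words)))

def rescore_nbest(nbest):
    """nbest: list of (acoustic_log_score, word_list) from the decoder.
    Re-rank by combined acoustic + trigram score, as in the postprocessing step."""
    return max(nbest, key=lambda h: h[0] + trigram_score(h[1]))

nbest = [(-120.0, ["서울", "낙시", "알려줘"]),
         (-121.5, ["서울", "날씨", "알려줘"])]
print(rescore_nbest(nbest))   # the trigram grammar promotes the second hypothesis
```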


A Study on a 3D Design System for Korean Lip Sync (한국어 립씽크를 위한 3D 디자인 시스템 연구)

  • Shin, Dong-Sun;Chung, Jin-Oh
    • 한국HCI학회:학술대회논문집
    • /
    • 2006.02b
    • /
    • pp.362-369
    • /
    • 2006
  • We studied a Korean lip-sync synthesis scheme for 3D graphics and developed a design system that automatically generates natural lip sync corresponding to spoken sounds. Facial animation can be broadly divided into expression animation, which conveys emotion, and dialogue animation, which centers on changes in lip shape during speech. While expression animation consists of nearly universal elements apart from slight cultural differences, dialogue animation must take language-specific differences into account. For this reason, applying the speech-driven lip-sync synthesis methods proposed for English or Japanese directly to Korean can cause perceptual distortion due to the mismatch between auditory and visual information. To address this problem, this study develops a Korean lip-sync synthesis system that generates 3D dialogue animation from text and speech through a process of converting written text into a Korean pronunciation sequence, time-segmenting the input speech with an HMM algorithm, and defining the 3D movements of facial feature points for each Korean phoneme; the system was applied to an actual character design process. This work also serves as preliminary research that can be used not only for immediately applicable 3D character animation but also as a component technology for dynamic avatar-based interfaces. In other words, it has a dual character, applicable both to visual design fields that use 3D graphics technology and to HCI. Human communication consists of verbal dialogue communication and visual facial-expression communication, so the application of facial animation gives communication a more human aspect. Ultimately, the system can be widely applied to avatar-based interface design and virtual reality, fields expected to evolve toward human interfaces that emphasize human interactivity and more comfortable, conversational modes of interaction.
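A minimal Python sketch of the last stage of such a pipeline, turning an HMM time alignment of Korean phonemes into per-frame mouth-shape keyframes, is shown below; the alignment, the viseme table, and the two mouth parameters are hypothetical stand-ins for the paper's facial feature point definitions.

```python
# Hypothetical HMM time alignment of a Korean utterance: (phoneme, start_s, end_s).
alignment = [("ㅏ", 0.00, 0.12), ("ㄴ", 0.12, 0.20), ("ㅕ", 0.20, 0.35), ("ㅇ", 0.35, 0.42)]

# Hypothetical mapping from Korean phonemes to mouth-shape (viseme) parameters:
# (jaw_open, lip_spread), both in [0, 1].
viseme_table = {"ㅏ": (0.9, 0.3), "ㄴ": (0.2, 0.4), "ㅕ": (0.6, 0.5), "ㅇ": (0.3, 0.2)}

def lipsync_keyframes(alignment, fps=30):
    """Turn the phoneme-level time alignment into per-frame mouth-shape
    keyframes for a 3D character rig."""
    frames = []
    for phoneme, start, end in alignment:
        jaw, spread = viseme_table.get(phoneme, (0.1, 0.1))   # neutral fallback
        for f in range(int(start * fps), int(end * fps)):
            frames.append((f, jaw, spread))
    return frames

for frame in lipsync_keyframes(alignment)[:5]:
    print(frame)
```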


Real-Time Place Recognition for Augmented Mobile Information Systems (이동형 정보 증강 시스템을 위한 실시간 장소 인식)

  • Oh, Su-Jin;Nam, Yang-Hee
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.5
    • /
    • pp.477-481
    • /
    • 2008
  • Place recognition is necessary for a mobile user to be provided with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving through a building. For scene feature extraction, existing methods based on global feature analysis have the drawback of being sensitive to partial occlusion and noise, while local-feature-based methods usually attempt object recognition, which is hard to apply in a real-time system because of its high computational cost. Separately, statistical methods such as HMMs (hidden Markov models) or Bayesian networks have been used to derive the place recognition result from the feature data; the former is not practical because it requires a huge amount of effort to gather training data, while the latter usually depends on object recognition alone. This paper proposes a combined approach of global and local feature analysis for feature extraction to complement the drawbacks of both approaches. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
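To illustrate the combined global/local idea in plain Python (numpy only), the sketch below scores a query frame against reference places using a global grayscale histogram plus a crude patch-based local cue; the images, both cues, and the weighting are simplified stand-ins, not the paper's features.

```python
import numpy as np

rng = np.random.default_rng(3)

def global_feature(image):
    """Global cue: a coarse grayscale histogram of the whole frame."""
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0), density=True)
    return hist

def local_feature(image):
    """Local cue stand-in: mean intensity of high-contrast 8x8 patches
    (a placeholder for real keypoint descriptors)."""
    patches = image.reshape(-1, 8, 8)
    contrast = patches.std(axis=(1, 2))
    keep = patches[contrast > np.median(contrast)]
    return keep.mean(axis=(1, 2))[:16]

def combined_score(query, reference, alpha=0.5):
    g = -np.linalg.norm(global_feature(query) - global_feature(reference))
    l = -np.linalg.norm(local_feature(query) - local_feature(reference))
    return alpha * g + (1 - alpha) * l

places = {
    "corridor": 0.3 * rng.random((64, 64)),        # mostly dark reference frames
    "lab":      0.5 + 0.5 * rng.random((64, 64)),  # brighter, cluttered frames
}
frame = np.clip(places["lab"] + 0.05 * rng.normal(size=(64, 64)), 0.0, 1.0)
print(max(places, key=lambda name: combined_score(frame, places[name])))  # "lab"
```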

On the Development of a Continuous Speech Recognition System Using Continuous Hidden Markov Model for Korean Language (연속분포 HMM을 이용한 한국어 연속 음성 인식 시스템 개발)

  • Kim, Do-Yeong;Park, Yong-Kyu;Kwon, Oh-Wook;Un, Chong-Kwan;Park, Seong-Hyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.1
    • /
    • pp.24-31
    • /
    • 1994
  • In this paper, we report on the development of a speaker-independent continuous speech recognition system using continuous hidden Markov models. A continuous hidden Markov model consists of mean vectors and covariance matrices and directly models the speech signal parameters, and therefore has no quantization error. Filter-bank coefficients with their 1st- and 2nd-order derivatives are used as feature vectors to represent the dynamic features of the speech signal. We use the segmental K-means algorithm for training and triphones as the recognition unit to alleviate the performance degradation due to coarticulation, which is critical in continuous speech recognition. We also use a one-pass search algorithm that is advantageous for speeding up recognition. Experimental results show that the system attains a recognition accuracy of 83% without grammar and 94% with finite state networks in speaker-independent speech recognition.
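The 1st- and 2nd-order derivative (delta and delta-delta) features mentioned above can be computed with the standard regression formula over a few neighbouring frames; a small numpy sketch follows, with the window size and feature dimensions chosen arbitrarily.

```python
import numpy as np

def add_deltas(features, window=2):
    """Append delta and delta-delta features to a (T, D) feature matrix using
    the standard regression formula over +/- `window` frames, here applied to
    filter-bank coefficients as in the abstract."""
    T, D = features.shape
    weights = np.arange(-window, window + 1)
    norm = np.sum(weights ** 2)

    def regress(x):
        padded = np.pad(x, ((window, window), (0, 0)), mode="edge")
        return np.stack([(weights[:, None] * padded[t:t + 2 * window + 1]).sum(axis=0) / norm
                         for t in range(T)])

    delta = regress(features)
    delta2 = regress(delta)
    return np.hstack([features, delta, delta2])

fbank = np.random.randn(100, 20)     # 100 frames of 20 filter-bank coefficients
print(add_deltas(fbank).shape)       # (100, 60)
```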


Study on the Improvement of Speech Recognizer by Using Time Scale Modification (시간축 변환을 이용한 음성 인식기의 성능 향상에 관한 연구)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.462-472
    • /
    • 2004
  • In this paper, a method is proposed for compensating for the performance degradation of automatic speech recognition (ASR) that is mainly caused by speaking-rate variation. Before the new method is introduced, a quantitative analysis of the performance of an HMM-based ASR system as a function of speaking rate is performed. This analysis shows that significant performance degradation is often observed for rapidly spoken speech signals. A quantitative measure that represents speaking rate is then introduced. Time scale modification (TSM) is employed to compensate for the speaking-rate difference between the input speech signals and the training speech signals. Finally, a method is proposed for compensating for the degradation caused by speaking-rate variation, in which TSM is applied selectively according to the speaking rate. Results from ASR experiments on 10-digit mobile phone numbers confirm that the error rate was reduced by 15.5% when the proposed method was applied to fast speech signals.
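A rough Python sketch of the selective-TSM idea follows, using librosa's phase-vocoder time stretching as the TSM back end; the speaking-rate estimate, target rate, and threshold are invented stand-ins for the paper's rate measure and decision rule.

```python
import numpy as np
import librosa

def estimated_speaking_rate(y, sr, n_phones):
    """Crude stand-in for a speaking-rate measure: phones per second, assuming
    the phone count comes from a first-pass recognition result."""
    return n_phones / (len(y) / sr)

def normalize_rate(y, sr, n_phones, target_rate=12.0, threshold=1.15):
    """Selective TSM: stretch only clearly fast utterances toward an assumed
    training-data rate and leave normal-rate speech untouched."""
    rate = estimated_speaking_rate(y, sr, n_phones)
    if rate > threshold * target_rate:
        # stretch factor < 1 lengthens the signal, i.e. slows the speech down
        return librosa.effects.time_stretch(y, rate=target_rate / rate)
    return y

sr = 16000
y = 0.1 * np.random.randn(2 * sr)               # 2 s of noise standing in for speech
y_slowed = normalize_rate(y, sr, n_phones=40)   # 20 phones/s -> stretched
print(len(y), len(y_slowed))
```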

The Reduction of Computation in the MLLR Framework Using PCA or ICA for Speaker Adaptation (화자적응에서 PCA 또는 ICA를 이용한 MLLR알고리즘 연산량 감소)

  • 김지운;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.6
    • /
    • pp.452-456
    • /
    • 2003
  • We discuss how to reduce the number and dimension of the inverse matrices required in the MLLR framework for speaker adaptation. To find a smaller set of variables with less redundancy, we adopt PCA (principal component analysis) and ICA (independent component analysis), which give as good a representation as possible. The additional computation required when PCA or ICA is applied is small enough to be disregarded. 10 components for ICA and 12 components for PCA give performance similar to that of 36 components in the ordinary MLLR framework. If the dimension of the SI model parameters is n, the amount of computation for the matrix inversion in MLLR is proportional to O(n⁴). Therefore, compared with ordinary MLLR, the total amount of computation required for speaker adaptation is reduced to about 1/81 for MLLR with PCA and about 1/167 for MLLR with ICA.
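The claimed savings follow directly from the O(n⁴) cost of the matrix inversions; a short Python sketch of that arithmetic, plus the PCA projection itself on stand-in mean vectors, is given below. The ICA variant is analogous and is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

# Cost ratios implied by the O(n^4) inversion cost quoted in the abstract:
print((36 / 12) ** 4)   # 81.0  -> roughly 1/81 of the original cost (PCA, 12 components)
print((36 / 10) ** 4)   # ~168  -> roughly the ~1/167 quoted for ICA with 10 components

# Minimal sketch of the dimension reduction: project the SI model's mean
# vectors (random stand-ins here) onto a 12-dimensional PCA basis, so the
# MLLR transform can be estimated in that reduced space.
means = np.random.randn(2000, 36)                 # hypothetical HMM mean vectors
means_reduced = PCA(n_components=12).fit_transform(means)
print(means_reduced.shape)                        # (2000, 12)
```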