• Title/Summary/Keyword: Speech recognition model

Intra- and Inter-frame Features for Automatic Speech Recognition

  • Lee, Sung Joo; Kang, Byung Ok; Chung, Hoon; Lee, Yunkeun
    • ETRI Journal / Vol. 36, No. 3 / pp. 514-517 / 2014
  • In this paper, alternative dynamic features for speech recognition are proposed. The goal of this work is to improve speech recognition accuracy by deriving a representation of distinctive dynamic characteristics from the speech spectrum. The work is motivated by two temporal dynamics of the speech signal: the highly non-stationary nature of speech, and the inter-frame change of the speech spectrum. A sub-frame spectrum analyzer is adopted to capture very rapid spectral changes within a speech analysis frame. In addition, spectral fluctuations are measured in a more complex manner than with traditional dynamic features such as delta or double-delta. To evaluate the proposed features, speech recognition tests were conducted in smartphone environments. The experimental results show that feature streams simply combined with the proposed features improve the recognition accuracy of a hidden Markov model-based speech recognizer.
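
The traditional delta / double-delta baseline the abstract contrasts with can be sketched as the standard regression formula over a short frame window; the window width and the 13-coefficient input below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def delta(feats, width=2):
    # Regression-based delta over a +/- width frame window: the standard
    # formulation behind delta and double-delta dynamic features.
    denom = 2.0 * sum(k * k for k in range(1, width + 1))
    padded = np.pad(feats, ((width, width), (0, 0)), mode="edge")
    out = np.zeros_like(feats, dtype=float)
    for t in range(len(feats)):
        for k in range(1, width + 1):
            out[t] += k * (padded[t + width + k] - padded[t + width - k])
    return out / denom

# 13 static coefficients -> 39-dimensional static + delta + double-delta stream.
feats = np.random.RandomState(0).randn(100, 13)
stream = np.hstack([feats, delta(feats), delta(delta(feats))])
```

On a linear ramp the delta recovers the slope in interior frames, and on a constant signal it is zero, which is the sanity check usually applied to this formula.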

한국인의 영어 인식을 위한 문맥 종속성 기반 음향모델/발음모델 적응 (Acoustic and Pronunciation Model Adaptation Based on Context dependency for Korean-English Speech Recognition)

  • 오유리; 김홍국; 이연우; 이성로
    • 대한음성학회지:말소리 / Vol. 68 / pp. 33-47 / 2008
  • In this paper, we propose a hybrid acoustic and pronunciation model adaptation method based on context dependency for Korean-English speech recognition. The proposed method proceeds as follows. First, to derive pronunciation variant rules, an n-best phoneme sequence is obtained by phone recognition. Second, each rule is classified as either context independent (CI) or context dependent (CD); it is assumed that the different phoneme structures of Korean and English cause CI pronunciation variability, while coarticulation effects are related to CD pronunciation variability. Finally, acoustic model adaptation and pronunciation model adaptation are performed for CI and CD pronunciation variability, respectively. Korean-English speech recognition experiments show that the average word error rate (WER) is decreased by 36.0% compared to a baseline without any adaptation. In addition, the proposed method achieves a lower average WER than either acoustic model adaptation or pronunciation model adaptation alone.
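
The first step, deriving pronunciation variant rules from phone recognition output, can be illustrated with a minimal sketch: align the canonical phoneme sequence against the recognizer's hypothesis by edit distance and collect substitution pairs as candidate rules. The phoneme symbols and the substitution-only rule format are illustrative assumptions:

```python
def align(ref, hyp):
    # Levenshtein alignment of a canonical phoneme sequence (ref) against a
    # phone-recognizer output (hyp); substitutions along the optimal path
    # become candidate pronunciation-variant rules.
    n, m = len(ref), len(hyp)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = D[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            D[i][j] = min(sub, D[i - 1][j] + 1, D[i][j - 1] + 1)
    # Backtrack, collecting substitutions; leftover insertions/deletions at
    # the sequence edges are ignored in this substitution-only sketch.
    rules, i, j = [], n, m
    while i > 0 and j > 0:
        if D[i][j] == D[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            if ref[i - 1] != hyp[j - 1]:
                rules.append((ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif D[i][j] == D[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return rules[::-1]
```

For example, a canonical /r/ recognized as /l/ yields the variant rule ("r", "l"), the kind of CI variability the paper attributes to the differing phoneme inventories.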

가변어휘 핵심어 검출을 위한 비핵심어 모델링 및 후처리 성능평가 (Performance Evaluation of Nonkeyword Modeling and Postprocessing for Vocabulary-independent Keyword Spotting)

  • 김형순; 김영국; 신영욱
    • 음성과학 / Vol. 10, No. 3 / pp. 225-239 / 2003
  • In this paper, we develop a keyword spotting system using a vocabulary-independent speech recognition technique and investigate several non-keyword modeling and post-processing methods to improve its performance. To model non-keyword speech segments, monophone clustering and Gaussian mixture models (GMMs) are considered. Likelihood ratio scoring is employed in the post-processing stage to verify the recognition results, with filler models, anti-subword models, and N-best decoding results considered as alternative hypotheses. Different methods of constructing anti-subword models are also examined. The system is evaluated on an automatic telephone exchange service task. The results show that GMM-based non-keyword modeling yields better performance than monophone clustering. In the post-processing experiments, the anti-subword model based on the Kullback-Leibler distance and the N-best decoding method perform best, reducing keyword recognition errors by more than 50% at a keyword rejection rate of 5%.
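
The likelihood ratio scoring used for verification can be sketched as a duration-normalized log-likelihood ratio between the keyword model and an alternative (filler or anti-subword) model, accepted when it clears a threshold; the scores and threshold below are illustrative numbers, not values from the paper:

```python
def likelihood_ratio_verify(keyword_loglik, anti_loglik, num_frames, threshold):
    # Accept a keyword hypothesis when the per-frame log-likelihood ratio
    # between the keyword model and the alternative-hypothesis model
    # exceeds a threshold. Inputs are total log-likelihoods over the segment.
    llr = (keyword_loglik - anti_loglik) / num_frames  # duration-normalized
    return llr >= threshold

# A hypothesis scoring much better under the keyword model is accepted;
# one scoring nearly the same under both models is rejected.
accepted = likelihood_ratio_verify(-420.0, -480.0, 60, threshold=0.5)
rejected = likelihood_ratio_verify(-470.0, -480.0, 60, threshold=0.5)
```

Sweeping the threshold trades keyword rejection rate against false-alarm rate, which is how operating points like the 5% rejection rate quoted above are obtained.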

Style-Specific Language Model Adaptation using TF*IDF Similarity for Korean Conversational Speech Recognition

  • Park, Young-Hee; Chung, Min-Hwa
    • The Journal of the Acoustical Society of Korea / Vol. 23, No. 2E / pp. 51-55 / 2004
  • In this paper, we propose a style-specific language model adaptation scheme using n-gram-based tf*idf similarity for Korean spontaneous speech recognition. Korean spontaneous speech shows distinctive style-specific characteristics such as filled pauses, word omission, and contraction, which are related to function words and depend on preceding or following words. To reflect these style-specific characteristics and to overcome insufficient data for training the language model, we estimate an in-domain n-gram model by relevance weighting of out-of-domain text data according to their n-gram-based tf*idf similarity, where the in-domain language model includes a disfluency model. Recognition results show that n-gram-based tf*idf similarity weighting effectively reflects the style difference.
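
The relevance weighting step can be sketched as follows: build tf*idf vectors over word bigrams and weight each out-of-domain document by its cosine similarity to a small in-domain sample. The smoothed idf formula and the toy documents are assumptions for illustration, not the paper's exact setup:

```python
import math
from collections import Counter

def tfidf_vectors(docs, n=2):
    # tf*idf vectors over word n-grams (smoothed idf) for tokenized documents.
    grams = [Counter(tuple(d[i:i + n]) for i in range(len(d) - n + 1))
             for d in docs]
    df = Counter(g for c in grams for g in c)
    N = len(docs)
    return [{g: tf * (math.log((1 + N) / (1 + df[g])) + 1.0)
             for g, tf in c.items()} for c in grams]

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Weight out-of-domain documents by bigram tf*idf similarity to a small
# in-domain (spontaneous-style, disfluent) sample.
in_domain = "uh i mean we should uh go now".split()
ood_docs = ["we should uh go now i mean".split(),
            "the committee approved the annual budget report".split()]
vecs = tfidf_vectors([in_domain] + ood_docs)
weights = [cosine(vecs[0], v) for v in vecs[1:]]
```

The conversational out-of-domain document, which shares bigrams with the in-domain sample, receives a higher relevance weight than the formal one, so its counts contribute more to the adapted n-gram model.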

Bayesian 적응 방식을 이용한 잡음음성 인식에 관한 연구 (A Study on Noisy Speech Recognition Using a Bayesian Adaptation Method)

  • 정용주
    • 한국음향학회지 / Vol. 20, No. 2 / pp. 21-26 / 2001
  • In this paper, we propose a new algorithm for noise-robust speech recognition that estimates the mean of the noise using the expectation-maximization (EM) method. In the proposed algorithm, the online test utterance itself is used directly for Bayesian adaptation, and a prior distribution of the noise mean, obtained from the training data, is exploited during the adaptation. Parallel model combination (PMC) is used to model the noisy speech. Recognition experiments in a car-noise environment show that the proposed method achieves improved recognition performance compared with the conventional PMC method.
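
The PMC step can be sketched with the standard log-add approximation: map clean-speech and noise model means into the linear filter-bank domain, add them, and return to the log domain. Real PMC additionally moves cepstral parameters through an inverse DCT first; this simplified sketch stays in the log filter-bank domain:

```python
import numpy as np

def pmc_log_add(clean_log_mean, noise_log_mean, gain=1.0):
    # Log-add approximation used in PMC: combine clean-speech and noise
    # model means additively in the linear filter-bank domain, then map
    # the result back to the log domain.
    return np.log(np.exp(clean_log_mean) + gain * np.exp(noise_log_mean))

clean = np.array([2.0, 1.0, 0.5])   # log filter-bank means of a clean state
noise = np.array([1.0, 1.0, 1.0])   # estimated log filter-bank noise mean
noisy = pmc_log_add(clean, noise)   # compensated state mean
```

The compensated mean is never below the clean mean (adding noise energy can only raise it), and with zero noise energy it reduces to the clean model, which is the consistency check for this approximation.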

피지에 기초를 둔 HMM을 이용한 음성 인식 (Speech Recognition Using HMM Based on Fuzzy)

  • 안태옥; 김순협
    • 전자공학회논문지B / Vol. 28B, No. 12 / pp. 68-74 / 1991
  • This paper proposes an HMM based on fuzzy theory as a method for speaker-independent speech recognition. In this recognition method, multi-observation sequences are obtained by assigning probabilities via a fuzzy rule according to the rank of the distances from the VQ codebook. An HMM is then trained using these multi-observation sequences, and in recognition the word with the highest probability is selected as the recognized word. The vocabulary for the recognition experiments consists of 146 DDD area names, and the feature parameters are 10th-order LPC cepstrum coefficients. In addition to the recognition experiments with the proposed model, experiments with DP, MSVQ, and a conventional HMM were performed under the same conditions and data for comparison. The results show that the fuzzy-based HMM proposed in this paper is superior to the DP method, MSVQ, and the conventional HMM in recognition rate and computation time.
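
The multi-observation idea, emitting several nearby VQ codewords with membership weights instead of one hard symbol, might be sketched as below. The fuzzy-c-means style weighting with fuzzifier m is an illustrative choice, not necessarily the paper's exact fuzzy rule:

```python
import numpy as np

def fuzzy_vq_observations(x, codebook, top_k=3, m=2.0):
    # Instead of the single nearest VQ codeword, emit the top_k nearest
    # codewords with fuzzy membership weights (fuzzy-c-means style with
    # fuzzifier m), forming a multi-observation symbol for the HMM.
    d = np.linalg.norm(codebook - x, axis=1) + 1e-12   # guard exact hits
    idx = np.argsort(d)[:top_k]
    w = d[idx] ** (-2.0 / (m - 1.0))                   # closer -> heavier
    w /= w.sum()                                       # memberships sum to 1
    return list(zip(idx.tolist(), w.tolist()))

codebook = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
obs = fuzzy_vq_observations(np.array([0.2, 0.0]), codebook, top_k=2)
```

The nearest codeword dominates the membership mass, but the runner-up still contributes, which softens quantization errors near codeword boundaries.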

로봇 시스템에의 적용을 위한 음성 및 화자인식 알고리즘 (Implementation of the Auditory Sense for the Smart Robot: Speaker/Speech Recognition)

  • 조현; 김경호; 박영진
    • 한국소음진동공학회 2007년도 춘계학술대회논문집 (Proceedings of the KSNVE 2007 Spring Conference) / pp. 1074-1079 / 2007
  • We introduce a speech/speaker recognition algorithm for isolated words. In typical speaker verification, a Gaussian mixture model (GMM) is used to model the feature vectors of reference speech signals, while dynamic time warping (DTW)-based template matching was proposed for isolated word recognition some years ago. We combine these two different concepts in a single method and implement it in a real-time speaker/speech recognition system. With the proposed method, a small number of reference utterances (5 or 6 training repetitions) is enough to build a reference model that achieves 90% recognition performance.
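
The DTW half of the combined system is the classic template-matching recurrence; a minimal sketch with Euclidean frame cost:

```python
import numpy as np

def dtw_distance(a, b):
    # Dynamic Time Warping distance between two feature sequences of shape
    # (frames, dims): minimum accumulated frame-to-frame cost over all
    # monotonic alignments.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.arange(4.0)[:, None]                         # reference word
stretched = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 3.0])[:, None]  # slower rendition
different = np.array([9.0, 8.0, 7.0])[:, None]             # another word
```

A time-stretched rendition of the template aligns at zero cost while a different word does not, which is exactly why DTW suits isolated-word matching with few training repetitions.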

DSR 환경에서의 다 모델 음성 인식시스템의 성능 향상 방법에 관한 연구 (A Study on Performance Improvement Method for the Multi-Model Speech Recognition System in the DSR Environment)

  • 장현백;정용주
    • 융합신호처리학회논문지
    • /
    • 제11권2호
    • /
    • pp.137-142
    • /
    • 2010
  • The multi-model speech recognizer is known to perform very well in noisy environments. Until now, however, performance tests of multi-model-based recognizers have mostly used conventional front-ends that do not consider noise adaptation. In this paper, for a more accurate performance evaluation of the multi-model-based recognizer, we adopt a front-end in which robustness against noise is fully considered: the Advanced Front-End (AFE) proposed by ETSI (European Telecommunications Standards Institute) for DSR (Distributed Speech Recognition) noise environments. For performance comparison, Multi-Style Training (MTR), known to perform well in DSR environments, is used. We also improve the structure of the multi-model-based recognizer to raise recognition performance: unlike the conventional approach, the N reference HMMs closest to the noisy speech are used, guarding both against errors in selecting a single reference HMM and against variation of the noise signal, and multiple SNR values are used to train each reference HMM, increasing the robustness of the resulting acoustic models. Recognition experiments on the Aurora 2 database show that the improved multi-model-based recognizer outperforms the conventional approach.
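
Selecting the N reference HMMs closest to the noisy input can be sketched as a nearest-neighbor search over per-model noise prototypes; representing each reference model's noise condition as a feature vector is an illustrative assumption:

```python
import numpy as np

def select_n_nearest(noise_feature, model_noise_protos, n=3):
    # Rank reference acoustic models by the distance between the noise
    # estimated from the input utterance and each model's noise prototype,
    # and return the indices of the N nearest models; decoding can then
    # combine the scores of all N instead of trusting a single selection.
    d = np.linalg.norm(model_noise_protos - noise_feature, axis=1)
    return np.argsort(d)[:n].tolist()

# Noise prototypes of four reference models and an utterance's noise estimate.
protos = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0], [9.0, 9.0]])
chosen = select_n_nearest(np.array([0.8, 1.1]), protos, n=2)
```

Keeping several nearby models hedges against a wrong single pick, which is the failure mode the improved structure above is designed to avoid.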

감정 적응을 이용한 감정 화자 인식 (Emotional Speaker Recognition using Emotional Adaptation)

  • 김원구
    • 전기학회논문지
    • /
    • 제66권7호
    • /
    • pp.1105-1110
    • /
    • 2017
  • Speech with various emotions degrades the performance of a speaker recognition system. In this paper, a speaker recognition method using emotional adaptation is proposed to improve the performance of speaker recognition on affective speech. For emotional adaptation, an emotional speaker model is generated from a neutral speaker model using a small amount of affective training speech and a speaker adaptation method. Since it is not easy to obtain sufficient affective speech from a speaker for training, using only a small amount of affective speech is very practical in real situations. The proposed method was evaluated using a Korean database containing four emotions. Experimental results show that the proposed method outperforms conventional methods in speaker verification and speaker recognition.
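
A common way to realize speaker adaptation with little data is relevance-MAP adaptation of GMM means, as in GMM-UBM systems; this sketch assumes precomputed component posteriors and adapts means only, and the paper's exact adaptation method may differ:

```python
import numpy as np

def map_adapt_means(means, adapt_data, responsibilities, r=16.0):
    # Relevance-MAP adaptation of GMM component means: components with
    # more adaptation data (larger soft count n_k) move further toward
    # the affective data; the relevance factor r limits the shift when
    # data is scarce.
    n_k = responsibilities.sum(axis=0)          # soft frame counts per component
    alpha = n_k / (n_k + r)                     # per-component adaptation weight
    new_means = means.astype(float).copy()
    nz = n_k > 0
    data_means = (responsibilities.T @ adapt_data)[nz] / n_k[nz, None]
    new_means[nz] = (alpha[nz, None] * data_means
                     + (1 - alpha[nz, None]) * means[nz])
    return new_means

# One-component toy model: a neutral mean pulled toward affective frames.
neutral_means = np.zeros((1, 2))
affective = np.ones((100, 2))
resp = np.ones((100, 1))        # every frame assigned to the one component
adapted = map_adapt_means(neutral_means, affective, resp)
```

With only a handful of affective frames the adapted means stay close to the neutral model, which is what makes this style of adaptation practical when emotional training data is scarce.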

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • 대한음성학회:학술대회논문집
    • /
    • 대한음성학회 1996년도 10월 학술대회지
    • /
    • pp.435-447
    • /
    • 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal-processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value, multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform, differentiation after analysis and synthesis, and full-wave rectification followed by integration. To verify the performance of the proposed speech features in a speech recognition task, Korean digit recognition experiments were carried out using both DTW and VQ-HMM. With DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively; with VQ-HMM, the rates were 96.5% and 81.5%. This indicates that the proposed speech feature has potential as a simple and efficient feature for recognition tasks.
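
The normalize, analyze/resynthesize, differentiate, rectify, integrate pipeline can be sketched with a one-level Haar wavelet standing in for the paper's full multi-resolution analysis; the wavelet choice, the single decomposition level, and keeping only the low band are simplifying assumptions:

```python
import numpy as np

def haar_dwt(x):
    # One level of a Haar discrete wavelet transform (analysis).
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    # Inverse of the one-level Haar transform (synthesis).
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

def hearing_model_feature(block):
    # Sketch of the pipeline: normalize the short-time block, analyze and
    # resynthesize one band, differentiate, then full-wave rectify and
    # integrate down to a scalar band energy.
    block = block / (np.max(np.abs(block)) + 1e-12)    # normalization
    approx, detail = haar_dwt(block)
    band = haar_idwt(approx, np.zeros_like(detail))    # keep low band only
    diff = np.diff(band)                               # differentiation
    return np.sum(np.abs(diff))                        # rectify + integrate
```

The Haar pair reconstructs the block exactly when both bands are kept, and a constant block yields a zero feature, which matches the intent of measuring band-limited signal change.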
