• Title/Summary/Keyword: Distributed Speech Recognition (분산 음성 인식)


A Study on the PMC Adaptation for Speech Recognition under Noisy Conditions (잡음 환경에서의 음성인식을 위한 PMC 적응에 관한 연구)

  • 김현기
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.7 no.3
    • /
    • pp.9-14
    • /
    • 2002
  • In this paper we propose a method for enhancing the performance of a speech recognizer under noisy conditions. The parallel model combination (PMC) method, which uses multiple Gaussian-distributed mixtures, is adapted to the variation of each mixture. CDHMMs (continuous observation density HMMs) with multiple Gaussian mixtures are combined by the proposed PMC method. The EM (expectation maximization) algorithm is also used to adapt the model mean parameters in order to reduce the variation of the mixture densities. Simulation results show that the proposed PMC adaptation method performs better than the conventional PMC method. (An illustrative sketch of the standard PMC combination step follows this entry.)

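The PMC combination step referenced above can be illustrated with the standard log-normal approximation. The sketch below is a minimal illustration under simplifying assumptions: features are assumed to live directly in the log-(mel-)spectral domain, the paper's EM-based mean adaptation is not reproduced, and the function names are hypothetical.

```python
import numpy as np

def log_to_lin(mu_log, cov_log):
    """Moments of exp(x) for x ~ N(mu_log, cov_log) (log-normal identities)."""
    mu_lin = np.exp(mu_log + 0.5 * np.diag(cov_log))
    cov_lin = np.outer(mu_lin, mu_lin) * (np.exp(cov_log) - 1.0)
    return mu_lin, cov_lin

def lin_to_log(mu_lin, cov_lin):
    """Inverse mapping under the log-normal assumption."""
    cov_log = np.log(cov_lin / np.outer(mu_lin, mu_lin) + 1.0)
    mu_log = np.log(mu_lin) - 0.5 * np.diag(cov_log)
    return mu_log, cov_log

def pmc_combine(mu_speech, cov_speech, mu_noise, cov_noise, gain=1.0):
    """Combine one clean-speech mixture component with an additive-noise model."""
    ms, cs = log_to_lin(mu_speech, cov_speech)
    mn, cn = log_to_lin(mu_noise, cov_noise)
    return lin_to_log(gain * ms + mn, gain ** 2 * cs + cn)

# Toy usage: one 4-dimensional mixture component.
mu_s, cov_s = np.zeros(4), 0.1 * np.eye(4)
mu_n, cov_n = -1.0 * np.ones(4), 0.05 * np.eye(4)
mu_c, cov_c = pmc_combine(mu_s, cov_s, mu_n, cov_n)
```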

Noise Estimation and Suppression Methods based on Normalized Variance in Time-Frequency for Speech Enhancement (음성강화를 위한 시간 및 주파수 도메인의 분산정규화 기반 잡음예측 및 저감방법)

  • Lee, Soo-Jeong;Kim, Soon-Hyob
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.1
    • /
    • pp.87-94
    • /
    • 2009
  • Noise estimation and suppression are crucial for many speech communication and recognition systems. In this paper, the proposed algorithm is based on the ratio of the variance-normalized noisy power spectrum in the time-frequency domain. The algorithm tracks a threshold and controls the trade-off between residual noise and distortion. It is evaluated with the ITU-T P.835 signal distortion (SIG) measure and the segmental signal-to-noise ratio (SNR), and is superior to conventional methods. (A generic sketch of variance-normalized noise tracking appears below.)
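
As a rough illustration of the idea described above, the sketch below tracks a per-bin noise estimate from frames whose variance-normalized power stays below a threshold and then applies a simple spectral-subtraction gain. The update rule and parameter values are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def estimate_and_suppress(power, threshold=2.5, alpha=0.95, floor=0.1):
    """power: (frames, bins) noisy power spectrogram. Returns enhanced power."""
    noise = power[:5].mean(axis=0)                 # initial noise from leading frames
    var = power[:5].var(axis=0) + 1e-8             # running variance of the noise power
    enhanced = np.empty_like(power)
    for t, p in enumerate(power):
        norm = (p - noise) / np.sqrt(var)          # variance-normalized deviation
        speech_absent = norm < threshold           # treat small deviations as noise-only
        noise = np.where(speech_absent, alpha * noise + (1 - alpha) * p, noise)
        var = np.where(speech_absent, alpha * var + (1 - alpha) * (p - noise) ** 2, var)
        gain = np.maximum(1.0 - noise / (p + 1e-8), floor)  # spectral-subtraction gain
        enhanced[t] = gain * p
    return enhanced

# Toy usage with random data standing in for an STFT power spectrogram.
rng = np.random.default_rng(0)
noisy = rng.gamma(shape=1.0, scale=1.0, size=(100, 64))
enhanced = estimate_and_suppress(noisy)
```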

A Study on Regression Class Generation of MLLR Adaptation Using State Level Sharing (상태레벨 공유를 이용한 MLLR 적응화의 회귀클래스 생성에 관한 연구)

  • 오세진;성우창;김광동;노덕규;송민규;정현열
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.8
    • /
    • pp.727-739
    • /
    • 2003
  • In this paper, we propose a method for generating regression classes for adaptation in the HM-Net (Hidden Markov Network) system. The MLLR (Maximum Likelihood Linear Regression) adaptation approach is applied to the HM-Net speech recognition system to express speaker characteristics effectively and to use HM-Net in various tasks. For state-level sharing, the context-domain state splitting of the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, which performs contextual and time-domain clustering, is adopted. In each state of the contextual domain, the desired phoneme classes are determined by splitting the context information (classes), including the target speaker's speech data. The number of adaptation parameters, such as means and variances, is controlled autonomously by the context-domain state splitting of PDT-SSS, depending on the context information and the amount of adaptation utterances from a new speaker. Experiments are performed to verify the effectiveness of the proposed method on the KLE (The Center for Korean Language Engineering) 452 data and YNU (Yeungnam Univ.) 200 data. The experimental results show that the accuracies of phone, word, and sentence recognition increased by 34∼37%, 9%, and 20%, respectively. With respect to the length of adaptation utterances, the performance is also significantly improved even for short adaptation utterances. Therefore, we can argue that the proposed regression class method is well suited to an HM-Net speech recognition system employing MLLR speaker adaptation. (A simplified sketch of estimating an MLLR mean transform for one regression class appears below.)
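
The sketch below estimates a single MLLR mean transform for one regression class under the simplifying assumption of identical spherical covariances (the standard diagonal-covariance solution is obtained row by row but is analogous). The function name and toy data are hypothetical.

```python
import numpy as np

def estimate_mllr_transform(obs, gammas, means):
    """obs: (T, d) adaptation frames, gammas: (T, M) occupation probabilities,
    means: (M, d) Gaussian means of the regression class. Returns W = [b A]."""
    d = means.shape[1]
    xi = np.hstack([np.ones((means.shape[0], 1)), means])   # extended means [1, mu]
    G = np.zeros((d + 1, d + 1))
    Z = np.zeros((d, d + 1))
    for t in range(obs.shape[0]):
        for m in range(means.shape[0]):
            g = gammas[t, m]
            G += g * np.outer(xi[m], xi[m])
            Z += g * np.outer(obs[t], xi[m])
    return Z @ np.linalg.inv(G)                              # least-squares solution

# Toy usage: 2 Gaussians, 3-dimensional features, 50 adaptation frames.
rng = np.random.default_rng(1)
means = rng.normal(size=(2, 3))
obs = rng.normal(size=(50, 3))
gammas = rng.dirichlet(np.ones(2), size=50)
W = estimate_mllr_transform(obs, gammas, means)
adapted_mean = W @ np.hstack([1.0, means[0]])                # adapted mean = W [1, mu]
```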

Auditory Representations for Robust Speech Recognition in Noisy Environments (잡음 환경에서의 음성 인식을 위한 청각 표현)

  • Kim, Doh-Suk;Lee, Soo-Young;Kil, Rhee-M.
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.5
    • /
    • pp.90-98
    • /
    • 1996
  • An auditory model is proposed for robust speech recognition in noisy environments. The model consists of cochlear bandpass filters and nonlinear stages, and represents frequency and intensity information efficiently even in noisy environments. Frequency information of the signal is obtained from zero-crossing intervals, and intensity information is incorporated by peak detectors and saturating nonlinearities. The robustness of zero crossings in estimating frequency is also verified analytically through the variance of level-crossing interval perturbations as a function of the crossing level. The proposed auditory model is computationally efficient and free from many unknown parameters compared with other auditory models. Speaker-independent speech recognition experiments demonstrate the robustness of the proposed method. (A zero-crossing-based feature sketch follows this entry.)

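A ZCPA-style sketch of the zero-crossing idea is given below: within each band of a small band-pass filter bank, frequency is read from intervals between upward zero crossings and weighted by the peak amplitude between crossings. The filter bank and histogram settings are illustrative assumptions, not the paper's exact auditory model.

```python
import numpy as np
from scipy.signal import butter, lfilter

def zcpa_histogram(x, fs, bands=((200, 600), (600, 1500), (1500, 3500)), n_bins=64):
    """Accumulate peak-weighted zero-crossing frequency estimates into a histogram."""
    edges = np.linspace(50, 4000, n_bins + 1)
    hist = np.zeros(n_bins)
    for lo, hi in bands:
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        y = lfilter(b, a, x)
        up = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]     # upward zero crossings
        for i0, i1 in zip(up[:-1], up[1:]):
            freq = fs / (i1 - i0)                          # interval -> frequency estimate
            peak = np.log1p(np.max(y[i0:i1]))              # saturating intensity weight
            k = np.searchsorted(edges, freq) - 1
            if 0 <= k < n_bins:
                hist[k] += peak
    return hist

# Toy usage: a noisy 440 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
hist = zcpa_histogram(tone, fs)
```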

Speaker-Independent Korean Digit Recognition Using HCNN with Weighted Distance Measure (가중 거리 개념이 도입된 HCNN을 이용한 화자 독립 숫자음 인식에 관한 연구)

  • 김도석;이수영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.10
    • /
    • pp.1422-1432
    • /
    • 1993
  • The nonlinear mapping function of the HCNN (Hidden Control Neural Network) can change over time to model the temporal variability of a speech signal by combining the nonlinear prediction of conventional neural networks with the segmentation capability of HMMs. This paper makes two contributions. First, we show that the performance of the HCNN is better than that of the HMM. Second, we propose an HCNN whose prediction error is measured by a weighted distance, providing a distance measure better suited to the HCNN, and we show the superiority of the proposed system for speaker-independent speech recognition tasks. The weighted distance accounts for the differences between the variances of each component of the feature vector extracted from the speech data. Speaker-independent Korean digit recognition experiments showed a recognition rate of 95% for the HCNN with the Euclidean distance. This result is 1.28% higher than the HMM and shows that the HCNN, which models the dynamical system, is superior to the HMM, which relies on statistical restrictions. We obtained 97.35% for the HCNN with the weighted distance, 2.35% better than the HCNN with the Euclidean distance. The weighted distance performs better because it reduces the variation of the recognition error rate across speakers by increasing the recognition rate for speakers with many misclassified utterances. We therefore conclude that the HCNN with the weighted distance is more suitable for speaker-independent speech recognition tasks. (A minimal sketch of the variance-weighted distance appears below.)

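The weighted distance can be illustrated directly: each dimension's squared prediction error is scaled by the inverse of that dimension's variance estimated from training data. The HCNN predictor itself is omitted in this sketch; any frame predictor could supply the prediction.

```python
import numpy as np

def variance_weights(train_features):
    """train_features: (frames, dims). Returns inverse-variance weights per dimension."""
    return 1.0 / (np.var(train_features, axis=0) + 1e-8)

def weighted_distance(x, x_hat, w):
    """Variance-weighted squared prediction error between a frame and its prediction."""
    return float(np.sum(w * (x - x_hat) ** 2))

# Toy usage: compare Euclidean vs. variance-weighted error for a predicted frame.
rng = np.random.default_rng(3)
train = rng.normal(scale=[0.5, 2.0, 1.0], size=(200, 3))
w = variance_weights(train)
x = train[0]
x_hat = x + rng.normal(scale=0.1, size=3)
euclidean = weighted_distance(x, x_hat, np.ones(3))
weighted = weighted_distance(x, x_hat, w)
```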

Speaker Adaptation Algorithm Based on a Maximization of the Observation Probability (관찰 확률 최대화에 의한 화자 적응 알고리즘)

  • 양태영;신원호;전원석;김지성;김원구;이충용;윤대희;차일환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.6
    • /
    • pp.37-42
    • /
    • 1998
  • This paper proposes a speaker adaptation algorithm for SCHMMs based on maximizing the observation probability. To prevent recognition performance from degrading when the observation probability densities of the SCHMM do not represent a new speaker's speech characteristics well, the proposed algorithm iteratively adapts the mean vector μ and covariance matrix Σ that determine the observation probability densities, using a gradient search algorithm, so that each feature vector of the adaptation data attains the maximum observation probability. After the observation density adaptation, the state transition probabilities A and mixture weights C of the SCHMM are adapted by repeatedly taking a weighted average of the probabilities estimated from the adaptation data and the existing probabilities. Isolated-word recognition experiments with the proposed speaker adaptation algorithm showed average recognition rate improvements of 9.8% for the speaker-independent system, 46.0% for the male speaker-dependent system, and 52.7% for the female speaker-dependent system, compared with no speaker adaptation. (A gradient-ascent sketch of adapting mixture means appears after this entry.)

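A minimal sketch of the gradient-search idea follows: mixture means of a diagonal-covariance model are moved along the gradient of the log observation probability of the adaptation frames. The covariance update and the weighted averaging of transition probabilities and mixture weights described above are omitted, and the learning rate is an arbitrary choice.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log density of diagonal-covariance Gaussians, evaluated per mixture."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

def adapt_means(frames, mus, variances, weights, lr=0.05, n_iter=20):
    """Gradient ascent on the log observation probability with respect to the means."""
    mus = mus.copy()
    for _ in range(n_iter):
        for x in frames:
            logp = np.log(weights) + log_gauss(x, mus, variances)
            resp = np.exp(logp - logp.max())
            resp /= resp.sum()                               # mixture responsibilities
            grad = resp[:, None] * (x - mus) / variances     # d log P(x) / d mu
            mus += lr * grad
    return mus

# Toy usage: 2 mixtures of 3-dimensional Gaussians, 30 adaptation frames.
rng = np.random.default_rng(4)
mus = rng.normal(size=(2, 3))
variances = np.ones((2, 3))
weights = np.array([0.5, 0.5])
frames = rng.normal(loc=1.0, size=(30, 3))
adapted_mus = adapt_means(frames, mus, variances, weights)
```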

Scalable High-quality Speech Reconstruction in Distributed Speech Recognition Environments (분산음성인식 환경에서 서버에서의 스케일러블 고품질 음성복원)

  • Yoon, Jae-Sam;Kim, Hong-Kook;Kang, Byung-Ok
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.423-424
    • /
    • 2007
  • In this paper, we propose a scalable high-quality speech reconstruction method for distributed speech recognition (DSR). It is difficult to reconstruct high-quality speech from MFCCs at the DSR server. Depending on the bit-rate available to the DSR system, we can send additional information associated with speech coding to the DSR server, where the bit-rate varies from 4.8 kbit/s to 11.4 kbit/s. The experimental results show that the speech quality reproduced by the proposed method at 11.4 kbit/s is comparable with that of ITU-T G.729 under both ideal-channel and frame-error channel conditions, while the DSR performance is maintained at the level of wireline speech recognition. (A sketch of one server-side inversion step from MFCCs appears below.)

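One server-side building block of reconstructing speech from DSR features can be sketched as follows: the truncated MFCC vector is zero-padded and passed through an inverse DCT to recover an approximate log mel-spectral envelope. This shows only the cepstrum-inversion step; the additional coding information and the actual waveform synthesis used in the paper are not shown.

```python
import numpy as np
from scipy.fft import dct, idct

def mfcc_to_log_mel(mfcc, n_mel=23):
    """mfcc: truncated cepstrum. Returns an approximate log mel-band spectrum."""
    padded = np.zeros(n_mel)
    padded[:mfcc.shape[0]] = mfcc
    return idct(padded, type=2, norm="ortho")      # inverse of an orthonormal DCT-II

# Toy round trip: log mel energies -> 13 MFCCs -> approximate log mel energies.
rng = np.random.default_rng(5)
log_mel = rng.normal(size=23)
mfcc = dct(log_mel, type=2, norm="ortho")[:13]
approx_log_mel = mfcc_to_log_mel(mfcc)
```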

Robust Feature Normalization Scheme Using Separated Eigenspace in Noisy Environments (분리된 고유공간을 이용한 잡음환경에 강인한 특징 정규화 기법)

  • Lee Yoonjae;Ko Hanseok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4
    • /
    • pp.210-216
    • /
    • 2005
  • We propose a new feature normalization scheme based on eigenspace for achieving robust speech recognition. In general, mean and variance normalization (MVN) is performed in the cepstral domain. However, another MVN approach using eigenspace was recently introduced, in which the normalization is performed in a single eigenspace. This procedure consists of a linear PCA matrix feature transformation followed by mean and variance normalization of the transformed cepstral feature. In this method, the 39-dimensional feature distribution is represented using only a single eigenspace, which is observed to be insufficient to represent the entire data distribution. For a more specific representation, we apply unique and independent eigenspaces to the cepstra, delta cepstra, and delta-delta cepstra, respectively. We also normalize the training data in eigenspace and build the model from the normalized training data. Finally, a feature-space rotation procedure is introduced to reduce the mismatch between the training and test data distributions in noisy conditions. As a result, we obtained a substantial recognition improvement over the basic eigenspace normalization. (A block-wise eigenspace MVN sketch appears below.)
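
The block-wise eigenspace normalization can be sketched as below: static, delta, and delta-delta blocks each get their own PCA basis, and mean/variance normalization is applied to the projected coefficients. The feature-space rotation step mentioned in the abstract is not reproduced, and the block layout (three 13-dimensional blocks) is an assumption.

```python
import numpy as np

def block_eigen_mvn(features, block_size=13):
    """features: (frames, 39). Returns block-normalized features and per-block bases."""
    out, bases = [], []
    for b in range(0, features.shape[1], block_size):
        block = features[:, b:b + block_size]
        centered = block - block.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)       # PCA basis
        proj = centered @ vt.T                                        # eigenspace projection
        proj = (proj - proj.mean(axis=0)) / (proj.std(axis=0) + 1e-8) # MVN per dimension
        out.append(proj)
        bases.append(vt)
    return np.hstack(out), bases

# Toy usage with random frames standing in for 39-dimensional MFCC+delta features.
frames = np.random.default_rng(6).normal(size=(200, 39))
normalized, bases = block_eigen_mvn(frames)
```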

A VQ Codebook Design Based on Phonetic Distribution for Distributed Speech Recognition (분산 음성인식 시스템의 성능향상을 위한 음소 빈도 비율에 기반한 VQ 코드북 설계)

  • Oh Yoo-Rhee;Yoon Jae-Sam;Lee Gil-Ho;Kim Hong-Kook;Ryu Chang-Sun;Koo Myoung-Wa
    • Proceedings of the KSPS conference
    • /
    • 2006.05a
    • /
    • pp.37-40
    • /
    • 2006
  • In this paper, we propose a VQ codebook design for speech recognition feature parameters in order to improve the performance of a distributed speech recognition system. For context-dependent HMMs, a VQ codebook should be correlated with the phonetic distribution in the HMM training data. Thus, we focus on a method for selecting training data based on the phonetic distribution, instead of using all the training data, for an efficient VQ codebook design. Speech recognition experiments on the Aurora 4 database show that the distributed speech recognition system employing a VQ codebook designed by the proposed method reduces the word error rate (WER) by 10% compared with a VQ codebook trained on the whole training data. (A sketch of phonetically balanced selection followed by k-means codebook training appears below.)

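The overall recipe can be sketched as follows: training frames are sampled so that the phone classes follow a target distribution, and a VQ codebook is then trained on the selected frames with k-means. The proportional sampling rule below is an illustrative assumption, not the authors' exact selection criterion.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def select_by_phone_distribution(frames, phone_ids, target_dist, n_select, rng):
    """Sample about n_select frames so phone classes roughly match target_dist."""
    chosen = []
    for phone, prob in target_dist.items():
        idx = np.where(phone_ids == phone)[0]
        take = min(len(idx), int(round(prob * n_select)))
        chosen.append(rng.choice(idx, size=take, replace=False))
    return frames[np.concatenate(chosen)]

# Toy usage: 3 phone classes, a 64-entry codebook over 13-dimensional features.
rng = np.random.default_rng(7)
frames = rng.normal(size=(5000, 13))
phone_ids = rng.integers(0, 3, size=5000)
target = {0: 0.5, 1: 0.3, 2: 0.2}
selected = select_by_phone_distribution(frames, phone_ids, target, 2000, rng)
codebook, labels = kmeans2(selected, 64, minit="++")
```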

Effects of Feedback Types on Users' Subjective Responses in a Voice User Interface (음성 사용자 인터페이스 내 피드백 유형이 사용자의 주관적 반응에 미치는 영향)

  • Lee, Dasom;Lee, Sangwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.219-222
    • /
    • 2017
  • This study aimed to demonstrate the effect of feedback type on users' subjective responses in a voice user interface (VUI). Feedback is classified by the information it conveys into verification feedback and elaboration feedback. Error type is categorized as recognition error or performance error. Users' subjective assessment of the system, feedback acceptance, and intention to use were measured as dependent variables. The experimental results showed that feedback type affects the subjective assessment of the VUI (likeability, habitability, system response accuracy), feedback acceptance, and intention to use. The results also demonstrated an interaction effect of feedback type and error type on feedback acceptance. This leads to the conclusion that a VUI should be designed with elaboration feedback for error situations.
