• Title/Summary/Keyword: Speech recognition model

Search Results: 624

Stereo Vision Neural Networks with Competition and Cooperation for Phoneme Recognition

  • Kim, Sung-Ill;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.1E
    • /
    • pp.3-10
    • /
    • 2003
  • This paper describes two kinds of neural networks for stereoscopic vision, which have been applied to the identification of human speech. In speech recognition based on the stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then fed into a dynamic process in which both competitive and cooperative interactions are conducted among neighboring similarities. Through this dynamic process, only one winner neuron is finally detected. In a comparative study, the average phoneme recognition accuracy of the two-layered SVNN was 7.7% higher than that of a Hidden Markov Model (HMM) recognizer with a single mixture and three states, and that of the three-layered SVNN was 6.6% higher. SVNN therefore outperformed the existing HMM recognizer in phoneme recognition.
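
The competitive-cooperative dynamics described above can be sketched as a simple winner-take-all iteration. The update rule, coefficients, and neighbourhood below are illustrative assumptions, not the paper's exact SVNN equations:

```python
import numpy as np

def winner_take_all(similarities, excite=0.05, inhibit=0.2, steps=100):
    """Sharpen similarity scores until a single winner neuron remains.

    Each unit is excited by its immediate neighbours (cooperation) and
    inhibited by the activity of all other units (competition).
    """
    a = np.asarray(similarities, dtype=float)
    for _ in range(steps):
        neighbours = np.roll(a, 1) + np.roll(a, -1)            # cooperation
        a = a + excite * neighbours - inhibit * (a.sum() - a)  # competition
        a = np.clip(a, 0.0, None)      # losing units are driven to zero
        if np.count_nonzero(a) <= 1:
            break
    return int(np.argmax(a))

# the phoneme model with the highest similarity survives the dynamics
print(winner_take_all([0.2, 0.8, 0.5, 0.1]))  # → 1
```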

Integration of WFST Language Model in Pre-trained Korean E2E ASR Model

  • Junseok Oh;Eunsoo Cho;Ji-Hwan Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.6
    • /
    • pp.1692-1705
    • /
    • 2024
  • In this paper, we present a method that integrates a Grammar Transducer as an external language model to enhance the accuracy of a pre-trained Korean end-to-end (E2E) Automatic Speech Recognition (ASR) model. The E2E ASR model uses the Connectionist Temporal Classification (CTC) loss function to derive hypothesis sentences from input audio. However, this approach has an inherent limitation: CTC fails to capture language information directly from transcript data. To overcome this limitation, we propose a fusion approach that combines a clause-level n-gram language model, transformed into a Weighted Finite-State Transducer (WFST), with the E2E ASR model. This approach improves the model's accuracy and allows domain adaptation using only additional text data, avoiding further intensive training of the large pre-trained ASR model. This is particularly advantageous for Korean, a low-resource language with limited speech data and few available ASR models. We first validate the efficacy of training the n-gram model at the clause level by contrasting its inference accuracy with that of the E2E ASR model merged with language models trained on smaller lexical units. We then demonstrate that our approach achieves better domain adaptation accuracy than Shallow Fusion, a previously devised method for merging an external language model with an E2E ASR model without additional training.
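
Shallow Fusion, the baseline this paper compares against, can be illustrated with a toy rescoring step: the ASR score of each hypothesis is interpolated with a weighted external-LM score. The hypotheses, scores, and weight below are made-up values:

```python
def fused_score(asr_logprob, lm_logprob, lm_weight=0.3):
    """Shallow fusion: add a weighted external-LM score to the E2E ASR score."""
    return asr_logprob + lm_weight * lm_logprob

# hypothetical beam hypotheses: (ASR log-prob, n-gram LM log-prob)
hypotheses = {
    "the cat sat": (-4.0, -2.0),   # slightly worse acoustically, fluent
    "the cat sad": (-3.8, -6.0),   # better acoustically, unlikely per the LM
}
best = max(hypotheses, key=lambda h: fused_score(*hypotheses[h]))
print(best)  # → the cat sat
```

The WFST approach in the paper instead composes the n-gram model, compiled as a transducer, with the decoding graph, but the rescoring intuition is the same.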

Improvement of Confidence Measure Performance in Keyword Spotting using Background Model Set Algorithm (BMS 알고리즘을 이용한 핵심어 검출기 거절기능 성능 향상 실험)

  • Kim Byoung-Don;Kim Jin-Young;Choi Seung-Ho
    • MALSORI
    • /
    • no.46
    • /
    • pp.103-115
    • /
    • 2003
  • In this paper, we propose a Background Model Set (BMS) algorithm, adopted from speaker verification, to improve the calculation of the confidence measure (CM) in speech recognition. The CM expresses the relative likelihood between recognized models and anti-phone models. In the previous method of calculating the CM, the mean and standard deviation were computed using all phonemes composing the anti-phone models. This process degraded the anti-phone CM and hence recognition results, and also increased recognition time. To solve this problem, we study a method that reconstitutes the mean and standard deviation using the BMS algorithm in the CM calculation.
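
A minimal sketch of a BMS-style confidence measure, assuming the CM is a length-normalised log-likelihood ratio between the recognized keyword model and the best of a small background model set; all scores and the threshold are illustrative:

```python
def confidence_measure(target_ll, background_lls, num_frames):
    """Length-normalised log-likelihood ratio of the recognized keyword model
    against the best-scoring model in a small background model set (BMS).

    Scoring only a few background models, rather than all anti-phone models,
    cuts computation and sharpens the ratio.
    """
    return (target_ll - max(background_lls)) / num_frames

cm = confidence_measure(-120.0, [-150.0, -140.0, -160.0], num_frames=50)
accept = cm > 0.1   # accept the keyword when CM exceeds a tuned threshold
print(round(cm, 2))  # → 0.4
```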


A Study on Environment Parameter Compensation Method for Robust Speech Recognition (잡음에 강인한 음성 인식을 위한 환경 파라미터 보상에 관한 연구)

  • Hong, Mi-Jung;Lee, Ho-Woong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.5 no.2 s.10
    • /
    • pp.1-10
    • /
    • 2006
  • In this paper, the VTS (Vector Taylor Series) algorithm, proposed by Moreno at Carnegie Mellon University in 1996, is analyzed and simulated. VTS is considered one of the robust speech recognition techniques in which model parameter conversion is adopted. To evaluate the performance of the VTS algorithm, we used the CMN (Cepstral Mean Normalization) technique, one of the well-known noise processing methods, as a reference. The recognition rate is evaluated with white Gaussian noise and street noise as background noise, and the simulation results are analyzed and compared with Moreno's earlier results.
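
CMN, the reference technique mentioned above, is simple enough to show directly. This is a generic sketch of cepstral mean normalization, not the paper's exact experimental setup:

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the per-coefficient mean over the whole utterance.

    A stationary convolutive channel distortion is additive in the cepstral
    domain, so removing the mean removes that channel component.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0)

frames = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])   # shape: (frames, cepstral coefficients)
normalized = cepstral_mean_normalization(frames)
# every coefficient now averages to zero across the utterance
```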


User Adaptive Post-Processing in Speech Recognition for Mobile Devices (모바일 기기를 위한 음성인식의 사용자 적응형 후처리)

  • Kim, Young-Jin;Kim, Eun-Ju;Kim, Myung-Won
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.338-342
    • /
    • 2007
  • In this paper we propose a user-adaptive post-processing method to improve the accuracy of speaker-dependent, isolated-word speech recognition, particularly for mobile devices. Our method treats the recognition result of the basic recognizer as a high-level speech feature and processes it further to obtain the correct result. It learns the correlation between the output of the basic recognizer and the correct final results, and uses this to correct erroneous outputs. A multi-layer perceptron model is built for each frequently misrecognized word. In experiments, we achieved a significant improvement of 41% in recognition accuracy (a 41% error correction rate).
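
A simplified stand-in for the approach (the paper trains one MLP per frequently misrecognized word): remember, per recognized word, the most frequent correct word from the user's history and substitute it. The words below are hypothetical:

```python
from collections import Counter, defaultdict

class PostProcessor:
    """Remember, for each word the base recognizer emits, which correct word
    it most often turned out to be, and substitute that word."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def observe(self, recognized, correct):
        """Record one (recognizer output, user-confirmed word) pair."""
        self.history[recognized][correct] += 1

    def correct(self, recognized):
        """Replace a known-confusable output with its usual correction."""
        if recognized in self.history:
            return self.history[recognized].most_common(1)[0][0]
        return recognized

pp = PostProcessor()
pp.observe("call", "mall")   # the recognizer keeps hearing "mall" as "call"
pp.observe("call", "mall")
pp.observe("call", "call")
print(pp.correct("call"))  # → mall
```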

Frame Reliability Weighting for Robust Speech Recognition (프레임 신뢰도 가중에 의한 강인한 음성인식)

  • 조훈영;김락용;오영환
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.323-329
    • /
    • 2002
  • This paper proposes a frame reliability weighting method to compensate for time-selective noise that occurs at random positions of the speech signal, contaminating certain parts of it. Speech frames have different degrees of reliability, and the reliability is proportional to the SNR (signal-to-noise ratio). While it is feasible to estimate frame SNR using noise information from non-speech intervals under stationary noise conditions, it is difficult to obtain the noise spectrum for time-selective noise. Therefore, we used statistical models of clean speech to estimate frame reliability. The proposed MFR (model-based frame reliability) approximates frame SNR values using filterbank energy vectors obtained by the inverse transformation of input MFCC (mel-frequency cepstral coefficient) vectors and the mean vectors of a reference model. Experiments on various burst noises revealed that the proposed method represents frame reliability effectively. We improved recognition performance by using MFR values as weighting factors in the likelihood calculation step.
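
The idea of weighting each frame's log-likelihood by an SNR-derived reliability can be sketched as follows; the sigmoid mapping and its slope are illustrative assumptions, not the paper's MFR formula:

```python
import numpy as np

def weighted_log_likelihood(frame_loglikes, frame_snr_db):
    """Weight per-frame log-likelihoods by an SNR-derived reliability.

    Low-SNR (unreliable) frames contribute less to the utterance score.
    """
    snr = np.asarray(frame_snr_db, dtype=float)
    weights = 1.0 / (1.0 + np.exp(-snr / 5.0))        # sigmoid reliability in (0, 1)
    weights = weights / weights.sum() * len(weights)  # keep the overall scale
    return float(np.dot(weights, frame_loglikes))

# corrupting a low-SNR frame barely moves the score; a high-SNR frame dominates
clean = weighted_log_likelihood([-1.0, -1.0, -1.0], [20.0, 0.0, -20.0])
bad_noisy_frame = weighted_log_likelihood([-1.0, -1.0, -5.0], [20.0, 0.0, -20.0])
bad_clean_frame = weighted_log_likelihood([-5.0, -1.0, -1.0], [20.0, 0.0, -20.0])
```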

Model adaptation employing DNN-based estimation of noise corruption function for noise-robust speech recognition (잡음 환경 음성 인식을 위한 심층 신경망 기반의 잡음 오염 함수 예측을 통한 음향 모델 적응 기법)

  • Yoon, Ki-mu;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.1
    • /
    • pp.47-50
    • /
    • 2019
  • This paper proposes an acoustic model adaptation method for effective speech recognition in noisy environments. In the proposed algorithm, the noise corruption function is estimated employing a DNN (Deep Neural Network), and the function is applied to model parameter estimation. Experimental results using the Aurora 2.0 framework and database demonstrate that the proposed model adaptation method is more effective in both known and unknown noisy environments than conventional methods. In particular, the experiments in unknown environments show an average relative improvement of 15.87 % in WER (Word Error Rate).
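
The paper estimates the corruption function with a DNN; a classical counterpart that such a function approximates for additive noise is the log-add combination of clean-model and noise means in the log-spectral domain, shown here as a generic sketch:

```python
import numpy as np

def adapt_mean(clean_mean, noise_mean):
    """Shift a clean acoustic-model mean (log-spectral domain) for additive noise.

    Log-add approximation: exp() back to the linear spectrum, add the noise
    spectrum, and return to the log domain.
    """
    return np.log(np.exp(clean_mean) + np.exp(noise_mean))

clean = np.log(np.array([4.0]))
noise = np.log(np.array([1.0]))
adapted = adapt_mean(clean, noise)   # equals log(4 + 1)
```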

Comparison of ICA Methods for the Recognition of Corrupted Korean Speech (잡음 섞인 한국어 인식을 위한 ICA 비교 연구)

  • Kim, Seon-Il
    • 전자공학회논문지 IE
    • /
    • v.45 no.3
    • /
    • pp.20-26
    • /
    • 2008
  • Two independent component analysis (ICA) algorithms were applied to the recognition of speech signals corrupted by car engine noise. Speech recognition was performed with hidden Markov models (HMM) on the estimated signals, and recognition rates were compared with those of the original, uncorrupted speech signals. Two different ICA methods were applied to estimate the speech signals: the FastICA algorithm, which maximizes negentropy, and the information-maximization approach, which maximizes the mutual information between inputs and outputs to yield maximum independence among outputs. The word recognition rate for Korean news sentences spoken by a male anchor is 87.85%, while across various signal-to-noise ratios (SNR) there is an average performance drop of 1.65% for the speech signals estimated by FastICA and 2.02% for information maximization. There is little difference between the methods.
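
A compact deflationary FastICA with the tanh non-linearity (a negentropy proxy), demonstrated on synthetic signals rather than speech; a generic sketch, not the paper's experimental pipeline:

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-8):
    """Deflationary FastICA with the tanh non-linearity (negentropy proxy)."""
    X = X - X.mean(axis=1, keepdims=True)        # centre each mixed signal
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X         # whiten: unit covariance
    n = Z.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = np.random.default_rng(i).standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            g = np.tanh(w @ Z)
            w_new = (Z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)   # deflation: stay orthogonal
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z                                 # estimated sources

# unmix a sawtooth and a sine from two synthetic observations
t = np.linspace(0.0, 1.0, 2000)
S = np.vstack([2 * (5 * t % 1) - 1,              # sawtooth source
               np.sin(2 * np.pi * 40 * t)])      # sine source
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
recovered = fastica(A @ S)
```

Each recovered component matches one source up to sign, scale, and permutation, which is the usual ICA indeterminacy.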

Speech Verification using Similar Word Information in Isolated Word Recognition (고립단어 인식에 유사단어 정보를 이용한 단어의 검증)

  • 백창흠;이기정;홍재근
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1255-1258
    • /
    • 1998
  • The Hidden Markov Model (HMM) is the most widely used method in speech recognition. In general, HMM parameters are trained for maximum likelihood (ML) on the training data. This method does not take discrimination against other words into account. To address this problem, this paper proposes a word verification method that re-recognizes the recognized word and its similar word using a discriminative function between the two words. The similar word is selected by calculating the probability of the other words against each HMM. A recognizer with discrimination for each word is realized by weighting each state, and the weights are calculated by a genetic algorithm.
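
The similar-word selection and verification steps can be sketched as follows, assuming the similar word is the competitor that scores highest against the target's HMM and that verification compares re-scored likelihoods against a margin; all scores are made up:

```python
def pick_similar_word(target, avg_loglikes):
    """The 'similar word' of a target: the competing word whose utterances
    score highest against the target word's HMM."""
    return max((w for w in avg_loglikes if w != target),
               key=lambda w: avg_loglikes[w])

def verify(recognized_score, similar_score, margin=1.0):
    """Accept the recognized word only if it beats the re-scored likelihood
    of its similar word by a margin; otherwise reject it."""
    return recognized_score - similar_score > margin

# made-up average log-likelihoods of each word's utterances under "open"'s HMM
scores_vs_open_hmm = {"open": -10.0, "opera": -12.0, "hope": -20.0}
similar = pick_similar_word("open", scores_vs_open_hmm)
print(similar)  # → opera
```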


A Study On Text Independent Speaker Recognition Using Eigenspace (고유영역을 이용한 문자독립형 화자인식에 관한 연구)

  • 함철배;이동규;이두수
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.671-674
    • /
    • 1999
  • We report a new method for speaker recognition. Until now, many researchers have used HMMs (Hidden Markov Models) with cepstral coefficients, or neural networks, for speaker recognition. Here, we introduce a speaker recognition method using an eigenspace. This method can reduce the training and recognition time of a speaker recognition system. In the proposed method, we use a low-rank model of the speech eigenspace. In experiments, we obtained good recognition results.
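
A low-rank eigenspace speaker model can be sketched with PCA via SVD: each speaker keeps a mean and a few principal directions, and a test vector is assigned to the speaker whose subspace reconstructs it best. The synthetic data and rank below are illustrative:

```python
import numpy as np

def build_eigenspace(feature_vectors, rank=1):
    """Mean and top principal directions of a speaker's training vectors."""
    X = np.asarray(feature_vectors, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:rank]            # (mean, basis of the low-rank eigenspace)

def reconstruction_error(x, mean, basis):
    """Distance from x to its projection onto a speaker's eigenspace."""
    centred = x - mean
    projected = basis.T @ (basis @ centred)
    return float(np.linalg.norm(centred - projected))

def identify(x, speaker_spaces):
    """Assign x to the speaker whose eigenspace reconstructs it best."""
    return min(speaker_spaces,
               key=lambda s: reconstruction_error(x, *speaker_spaces[s]))

# synthetic speakers living along different directions in feature space
rng = np.random.default_rng(0)
spk_a = np.outer(rng.standard_normal(50), [1.0, 0.1, 0.0])
spk_b = np.outer(rng.standard_normal(50), [0.0, 0.1, 1.0])
spaces = {"A": build_eigenspace(spk_a), "B": build_eigenspace(spk_b)}
print(identify(np.array([2.0, 0.2, 0.0]), spaces))  # → A
```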
