• Title/Summary/Keyword: Phoneme Error


Performance Comparison of Out-Of-Vocabulary Word Rejection Algorithms in Variable Vocabulary Word Recognition (가변어휘 단어 인식에서의 미등록어 거절 알고리즘 성능 비교)

  • 김기태;문광식;김회린;이영직;정재호 / The Journal of the Acoustical Society of Korea / v.20 no.2 / pp.27-34 / 2001
  • In variable vocabulary word recognition, utterance verification rejects words that are out of vocabulary or have been misrecognized, and is therefore an important technology for designing a user-friendly speech recognition system. We propose a new utterance verification algorithm for a training-free verification system based on the minimum verification error criterion. First, using a PBW (Phonetically Balanced Words) DB of 445 words, we create training-free anti-phoneme models composed of many PLUs (Phone-Like Units), so that the anti-phoneme models attain the minimum verification error. Then, for OOV (Out-Of-Vocabulary) rejection, a phoneme-based confidence measure that compares the likelihood of the phoneme model (null hypothesis) with that of the anti-phoneme model (alternative hypothesis) is normalized by the null hypothesis, which makes it more robust for OOV rejection. The word-based confidence measure built from the phoneme-based measure provides improved detection of near-misses in speech recognition as well as better discrimination between in-vocabulary words and OOVs. With the proposed anti-models and confidence measure, we achieve a significant performance improvement: CA (Correct Acceptance of in-vocabulary words) is about 89%, CR (Correct Rejection of OOVs) is about 90%, an improvement of about 15-21% in ERR (Error Reduction Rate).
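
As a rough illustration of the likelihood-ratio confidence measure described above, the sketch below computes a phoneme-level score normalized by the null-hypothesis likelihood and averages it into a word-level score. The function names, the averaging rule, and the acceptance threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def phoneme_confidence(ll_phoneme, ll_anti):
    """Log-likelihood ratio between the phoneme model (null hypothesis)
    and the anti-phoneme model (alternative hypothesis), normalized by
    the null-hypothesis score as described in the abstract."""
    return (ll_phoneme - ll_anti) / abs(ll_phoneme)

def word_confidence(phoneme_scores):
    """Word-level confidence as the mean of its phoneme confidences
    (one simple choice; the paper may combine them differently)."""
    return float(np.mean(phoneme_scores))

def accept_word(phoneme_scores, threshold=0.0):
    """Accept as in-vocabulary if the word confidence exceeds a threshold."""
    return word_confidence(phoneme_scores) > threshold

# Example: per-phoneme log-likelihoods from a recognizer (illustrative values)
ll_null = np.array([-42.0, -37.5, -51.2])   # phoneme (null hypothesis) scores
ll_alt = np.array([-44.1, -36.9, -55.8])    # anti-phoneme (alternative) scores
scores = (ll_null - ll_alt) / np.abs(ll_null)
print(accept_word(scores))
```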


A Phoneme Separation and Learning Using of Neural Network in the On-Line Character Recognition System (신경회로망을 이용한 온라인 문자 인식 시스템의 자소 분리에 관한 연구)

  • Hong, Bong-Hwa / The Journal of Information Technology / v.9 no.1 / pp.55-63 / 2006
  • In this paper, a Hangul recognition system that uses a Kohonen network for phoneme separation and learning is proposed. A Hangul character consists of phonemes (graphemes), which in turn consist of strokes, so phoneme recognition and separation are central to character recognition. Phonemes that were initially mis-segmented are separated correctly through the learning of the neural network. In addition, the learning rate ($\alpha$) is adjusted adaptively according to the error, which reduces the number of iterations and alleviates the local minimum problem.
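
A minimal sketch of a Kohonen (self-organizing map) update loop with an error-dependent learning rate, in the spirit of the adaptive $\alpha$ mentioned above; the specific rate schedule and the feature vectors are assumptions, since the abstract does not give them.

```python
import numpy as np

def kohonen_train(samples, n_units=16, alpha0=0.5, epochs=20, seed=0):
    """Minimal 1-D Kohonen (SOM) training loop in which the learning rate is
    scaled by the quantization error, loosely mirroring the adaptive alpha
    described in the abstract (the exact rule there is not stated)."""
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    weights = rng.normal(size=(n_units, dim))
    for _ in range(epochs):
        for x in samples:
            dists = np.linalg.norm(weights - x, axis=1)
            winner = int(np.argmin(dists))
            error = dists[winner]                   # quantization error
            alpha = alpha0 * error / (1.0 + error)  # error-dependent rate (assumed form)
            weights[winner] += alpha * (x - weights[winner])
    return weights

# Toy stroke-feature vectors (illustrative only)
data = np.random.default_rng(1).normal(size=(200, 8))
codebook = kohonen_train(data)
print(codebook.shape)  # (16, 8)
```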


CRNN-Based Korean Phoneme Recognition Model with CTC Algorithm (CTC를 적용한 CRNN 기반 한국어 음소인식 모델 연구)

  • Hong, Yoonseok;Ki, Kyungseo;Gweon, Gahgene / KIPS Transactions on Software and Data Engineering / v.8 no.3 / pp.115-122 / 2019
  • For Korean phoneme recognition, the Hidden Markov Model-Gaussian Mixture Model (HMM-GMM) or hybrid models that combine an artificial neural network with an HMM have mainly been used. However, these approaches have the limitation that they require force-aligned training corpora manually annotated by experts. Recently, researchers have used neural-network-based phoneme recognition models that combine a recurrent neural network (RNN) structure with the connectionist temporal classification (CTC) algorithm to avoid the need for manually annotated training data. Yet, in terms of implementation, these RNN-based models have another difficulty: the amount of data required grows as the structure becomes more sophisticated. This is particularly problematic for Korean, which lacks refined corpora. In this study, we apply the CTC algorithm, which does not require force alignment, to build a Korean phoneme recognition model. Specifically, the model is based on a convolutional neural network (CNN), which requires relatively little data and can be trained faster than RNN-based models. We present results from two experiments and the best-performing phoneme recognition model, which distinguishes 49 Korean phonemes. The best model combines a CNN with a 3-hop bidirectional LSTM and achieves a Phoneme Error Rate (PER) of 3.26, a considerable improvement over existing Korean phoneme recognition models, which report PERs ranging from 10 to 12.
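
The following PyTorch sketch shows the general shape of a CNN plus bidirectional LSTM model trained with CTC loss over 49 phoneme classes plus a blank; the layer sizes, kernel widths, and feature dimensions are placeholders, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class CRNNCTC(nn.Module):
    """Sketch of a CNN + bidirectional LSTM phoneme recognizer trained with
    CTC; layer sizes are illustrative, not the paper's exact architecture."""
    def __init__(self, n_mels=40, n_phonemes=49, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(128, hidden, num_layers=3,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_phonemes + 1)  # +1 for the CTC blank

    def forward(self, x):            # x: (batch, time, n_mels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.fc(h).log_softmax(dim=-1)

model = CRNNCTC()
ctc = nn.CTCLoss(blank=49)
feats = torch.randn(2, 100, 40)                   # dummy log-mel features
logp = model(feats).transpose(0, 1)               # (time, batch, classes) for CTCLoss
targets = torch.randint(0, 49, (2, 30))           # dummy phoneme label sequences
loss = ctc(logp, targets,
           input_lengths=torch.full((2,), 100),
           target_lengths=torch.full((2,), 30))
loss.backward()
```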

The Usage of Phoneme Duration Information for Rejecting Garbage Sentences (소음문장 제거를 위한 음소지속시간 사용)

  • Koo Myoung-Wan;Kim Ho-Kyoung;Park Sung-Joon;Kim Jae-In / Proceedings of the KSPS conference / 2003.05a / pp.219-222 / 2003
  • In this paper, we study the use of phoneme duration information for rejecting garbage sentences. First, we build phoneme duration models in a speech recognition system based on decision-tree state tying, assuming that phone duration follows a Gamma distribution. Next, we build a verification module that uses a word-level confidence measure. Finally, we carry out a comparative study on phoneme duration with a speech DB obtained from a live system, consisting of OOT (out-of-task) and ING (in-grammar) utterances. Using phone duration information improves the OOT recognition rate by 46%, and a further 8.4% of the error rate is reduced when it is combined with the utterance verification module.
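
A small sketch of the duration-modeling idea: fit a Gamma distribution to observed phone durations and use its log-density as a duration score that can be folded into utterance verification. The method-of-moments fit and the example durations are assumptions for illustration.

```python
import numpy as np
from scipy.stats import gamma

def fit_gamma(durations_sec):
    """Method-of-moments Gamma fit to observed phone durations
    (the paper assumes a Gamma distribution; its exact estimator
    is not stated in the abstract)."""
    d = np.asarray(durations_sec)
    mean, var = d.mean(), d.var()
    shape = mean ** 2 / var
    scale = var / mean
    return shape, scale

def duration_score(duration_sec, shape, scale):
    """Log-density of an observed duration under the fitted Gamma model;
    low values flag implausible durations (e.g. in garbage sentences)."""
    return gamma.logpdf(duration_sec, a=shape, scale=scale)

# Illustrative durations (seconds) for one phone, from training alignments
train = np.array([0.06, 0.08, 0.07, 0.09, 0.05, 0.10, 0.08])
a, s = fit_gamma(train)
print(duration_score(0.08, a, s), duration_score(0.40, a, s))
```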


A Study on Automatic Phoneme Segmentation of Continuous Speech Using Acoustic and Phonetic Information (음향 및 음소 정보를 이용한 연속제의 자동 음소 분할에 대한 연구)

  • 박은영;김상훈;정재호 / The Journal of the Acoustical Society of Korea / v.19 no.1 / pp.4-10 / 2000
  • This paper presents a postprocessor that improves the performance of an automatic speech segmentation system by correcting phoneme boundary errors. The proposed postprocessor reduces the error range of automatically labeled results so that they can be used directly as synthesis units. Starting from a baseline automatic segmentation system, the postprocessor is trained on features of hand-labeled results using a multi-layer perceptron (MLP); the automatically labeled result, combined with the MLP postprocessor, then determines the new phoneme boundary. In detail, we first select speech feature sets based on acoustic-phonetic knowledge. We adopt the MLP as the pattern classifier because of its excellent nonlinear discrimination capability and because it can readily reflect the various types of acoustic features that appear near phoneme boundaries within a short time window. Finally, a feature set selected for each phonetic event is fed to the postprocessor to compensate for the phoneme boundary error. On phonetically rich sentence data, we achieve a 19.9% improvement in frame accuracy over the plain automatic labeling system and reduce the absolute error rate by about 28.6%.
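
The sketch below illustrates the postprocessing idea with scikit-learn: an MLP trained on hand-labeled boundary frames rescores the frames within a small window around each automatic boundary. The features, window size, and network size are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Frames near an automatic boundary are classified as "boundary" / "not
# boundary" from acoustic features, and the best-scoring frame replaces the
# automatic boundary.  Feature choice and the +/-5 frame window are assumed.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 13))      # e.g. MFCC-like frame features
y_train = rng.integers(0, 2, size=1000)    # 1 = hand-labeled boundary frame

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
mlp.fit(X_train, y_train)

def refine_boundary(frames, auto_idx, window=5):
    """Move the automatic boundary to the frame, within +/-window,
    that the MLP scores as most boundary-like."""
    lo, hi = max(0, auto_idx - window), min(len(frames), auto_idx + window + 1)
    probs = mlp.predict_proba(frames[lo:hi])[:, 1]
    return lo + int(np.argmax(probs))

utterance = rng.normal(size=(200, 13))
print(refine_boundary(utterance, auto_idx=100))
```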


The Effect of the Number of Phoneme Clusters on Speech Recognition (음성 인식에서 음소 클러스터 수의 효과)

  • Lee, Chang-Young / The Journal of the Korea institute of electronic communication sciences / v.9 no.11 / pp.1221-1226 / 2014
  • In an effort to improve the efficiency of speech recognition, we investigate the effect of the number of phoneme clusters. For this purpose, codebooks with varying numbers of phoneme clusters are prepared by a modified k-means clustering algorithm and then passed through fuzzy vector quantization (FVQ) and a hidden Markov model (HMM) for the speech recognition test. The results show two distinct regimes: for large numbers of phoneme clusters, recognition performance is roughly independent of the cluster count, whereas for small numbers the recognition error rate increases nonlinearly as the count decreases. Numerical calculation suggests that this nonlinear regime can be modeled by a power-law function. The results also show that about 166 phoneme clusters would be optimal for recognizing 300 isolated words, which amounts to roughly 3 variants per phoneme.
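
The reported power-law behaviour in the small-codebook regime can be checked with a simple log-log least-squares fit, as sketched below; the error-rate values are made-up placeholders, not the paper's measurements.

```python
import numpy as np

# Illustrative error rates (%) for small codebook sizes; the paper reports
# that this regime is roughly power-law, error ~ a * N**(-b).
n_clusters = np.array([20, 40, 60, 80, 100, 120])
error_rate = np.array([18.0, 9.5, 6.8, 5.2, 4.4, 3.9])   # placeholder values

# Fit log(error) = intercept + slope * log(N) by least squares
slope, intercept = np.polyfit(np.log(n_clusters), np.log(error_rate), 1)
a, b = np.exp(intercept), -slope
print(f"error(N) ~= {a:.1f} * N^(-{b:.2f})")
```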

Consonant Inventories of the Better Cochlear Implant Children in Korea (말지각 능력이 우수한 인공와우 착용 아동들의 조음 특성 : 정밀전사 분석 방법을 중심으로)

  • Chang, Son-A;Kim, Soo-Jin;Shin, Ji-Young / MALSORI / no.62 / pp.33-49 / 2007
  • The purpose of this study is 1) to investigate the phoneme inventories and phonological processes of children with cochlear implants (CI) and 2) to describe their utterances using a narrow phonetic transcription method. All ten subjects had more than two years of experience with a CI and showed open-set sentence perception scores above 85%. Average consonant accuracy was 81.36%, improving to 87.41% when distortion errors were not counted. The children showed phonological processing patterns that were in some ways similar to, and in other ways different from, those of hearing-aid (HA) users and normal-hearing children. The prominent distortion error pattern was weakening of consonants, and every subject had an idiosyncratic error pattern that called for an individualized therapy program.


Implementation of Korean TTS System based on Natural Language Processing (자연어 처리 기반 한국어 TTS 시스템 구현)

  • Kim Byeongchang;Lee Gary Geunbae / MALSORI / no.46 / pp.51-64 / 2003
  • To produce high-quality synthesized speech, it is very important to obtain accurate grapheme-to-phoneme conversion and a prosody model from text using natural language processing; robust preprocessing of non-Korean characters is also required. In this paper, we analyze Korean text with a morphological analyzer, a part-of-speech tagger, and a syntactic chunker. We present a new grapheme-to-phoneme conversion method for Korean, a hybrid of a phonetic pattern dictionary and CCV (consonant vowel) LTS (letter-to-sound) rules, for unlimited-vocabulary Korean TTS. We construct a prosody model using a probabilistic method and a decision-tree-based method. The probabilistic method alone usually suffers performance degradation due to inherent data sparseness, so we adopt tree-based error correction to overcome these training data limitations.
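
A compact sketch of the hybrid grapheme-to-phoneme strategy: consult a phonetic pattern dictionary first and fall back to letter-to-sound rules for out-of-dictionary items. The dictionary entries and the identity rule pass below are placeholders, not the paper's actual resources.

```python
# Hybrid G2P sketch: dictionary lookup with a rule-based fallback.
pattern_dict = {
    "값이": "갑씨",     # dictionary handles irregular, lexicalized patterns
    "닭을": "달글",
}

def lts_rules(eojeol: str) -> str:
    """Placeholder letter-to-sound pass (identity here); the paper uses
    CCV-based LTS rules for out-of-dictionary items."""
    return eojeol

def g2p(eojeol: str) -> str:
    return pattern_dict.get(eojeol, lts_rules(eojeol))

print(g2p("값이"))   # dictionary hit
print(g2p("학교"))   # falls back to the rule pass
```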


Phoneme Segmentation based on Volatility and Bulk Indicators in Korean Speech Recognition (한국어 음성 인식에서 변동성과 벌크 지표에 기반한 음소 경계 검출)

  • Lee, Jae Won / KIISE Transactions on Computing Practices / v.21 no.10 / pp.631-638 / 2015
  • Today, demand for speech recognition systems in mobile environments is increasing rapidly. This paper proposes a novel method for Korean phoneme segmentation that is applicable to a phoneme-based Korean speech recognition system. First, the input signal is divided into blocks of equal size. The proposed method is based on a volatility indicator computed for each block of the input speech signal and on bulk indicators computed for each bulk within a block, where a bulk is a set of adjacent samples with the same sign; these serve as the primitive indicators for phoneme segmentation. Vowels, voiced consonants, and voiceless consonants in the input signal are recognized sequentially, and the boundaries between phonemes are found using three dedicated recognition algorithms that combine the two types of primitive indicators. Experimental results show that the proposed method markedly reduces the error rate compared with an existing phoneme segmentation method.
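
Since the abstract does not define the indicators precisely, the sketch below uses one plausible reading: per-block volatility as the standard deviation of successive sample differences, and bulks as maximal runs of same-signed samples; both definitions are assumptions for illustration.

```python
import numpy as np

def block_volatility(block):
    """One plausible volatility indicator: standard deviation of successive
    sample differences within the block (assumed definition)."""
    return float(np.std(np.diff(block)))

def bulk_indicators(block):
    """Split the block into bulks (maximal runs of samples with the same sign)
    and return the summed magnitude of each bulk."""
    signs = np.sign(block)
    bulks, start = [], 0
    for i in range(1, len(block) + 1):
        if i == len(block) or signs[i] != signs[start]:
            bulks.append(float(np.abs(block[start:i]).sum()))
            start = i
    return bulks

signal = np.random.default_rng(0).normal(size=1600)
blocks = signal.reshape(-1, 160)   # fixed-size blocks, e.g. 10 ms at 16 kHz
print(block_volatility(blocks[0]), len(bulk_indicators(blocks[0])))
```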

Speech Recognition Performance Improvement using a convergence of GMM Phoneme Unit Parameter and Vocabulary Clustering (GMM 음소 단위 파라미터와 어휘 클러스터링을 융합한 음성 인식 성능 향상)

  • Oh, SangYeob / Journal of Convergence for Information Technology / v.10 no.8 / pp.35-39 / 2020
  • DNN-based systems yield lower error rates than conventional speech recognition systems, but DNNs are difficult to train in parallel, are computationally heavy, and require large amounts of data. In this paper, to address this problem efficiently, we estimate GMM parameters at the phoneme-unit level, deriving model parameters for each phoneme from the GMM. We also suggest ways to improve performance through clustering of a specific vocabulary so that these parameters can be applied effectively. To this end, a vocabulary model was built from three types of word-speech databases, and features were extracted after noise processing with Wiener filters for the speech recognition experiments. The proposed method achieved a 97.9% recognition rate. Further studies are needed to address the remaining overfitting problem.
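
As a loose illustration of the two ingredients named above, the sketch below fits one small Gaussian mixture per phoneme unit and clusters vocabulary entries by phoneme composition with k-means; all data, dimensions, and cluster counts are invented for illustration and are not the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# One small GMM per phoneme unit, fitted on that unit's pooled frame features
# (feature dimension, mixture count, and the synthetic data are illustrative).
phoneme_frames = {p: rng.normal(loc=i, size=(300, 13))
                  for i, p in enumerate(["a", "n", "k"])}
phoneme_gmms = {p: GaussianMixture(n_components=4, random_state=0).fit(x)
                for p, x in phoneme_frames.items()}

# Cluster vocabulary entries by phoneme composition so that recognition can be
# restricted to the most relevant vocabulary cluster.
vocab = ["nak", "kan", "ana", "naa"]
phones = sorted(phoneme_gmms)
X = np.array([[w.count(p) for p in phones] for w in vocab])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(vocab, clusters)))
```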