• Title/Summary/Keyword: Phoneme Error


The Error Pattern Analysis of the HMM-Based Automatic Phoneme Segmentation (HMM기반 자동음소분할기의 음소분할 오류 유형 분석)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.5
    • /
    • pp.213-221
    • /
    • 2006
  • Phone segmentation of speech waveforms is especially important for concatenative text-to-speech synthesis, which builds its synthesis units from segmented corpora, because the quality of the synthesized speech depends critically on the accuracy of the segmentation. Initially, phone segmentation was performed manually, but this requires enormous effort and causes long delays. HMM-based approaches adopted from automatic speech recognition are the most widely used for automatic segmentation in speech synthesis, providing a consistent and accurate phone labeling scheme. Although the HMM-based approach has been successful, it may locate a phone boundary at a different position than expected. In this paper, we categorize adjacent phoneme pairs and analyze the mismatches between hand-labeled transcriptions and HMM-based labels. We then describe the dominant error patterns that must be improved for speech synthesis. For the experiment, the hand-labeled standard Korean speech DB from ETRI was used as the reference DB. A time difference larger than 20 ms between a hand-labeled phoneme boundary and an auto-aligned boundary is treated as an automatic segmentation error. Our experimental results for a female speaker revealed that plosive-vowel, affricate-vowel, and vowel-liquid pairs showed high accuracies of 99%, 99.5%, and 99%, respectively, while stop-nasal, stop-liquid, and nasal-liquid pairs showed very low accuracies of 45%, 50%, and 55%. Results for a male speaker revealed a similar tendency.
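
As a concrete illustration of the 20 ms criterion used in the abstract above, the evaluation can be sketched as follows. The function names, the per-pair grouping, and the sample data are illustrative, not taken from the paper:

```python
# Sketch of the evaluation criterion: an auto-aligned boundary counts as an
# error when it deviates from the hand-labeled boundary by more than 20 ms.
TOLERANCE_MS = 20.0

def boundary_accuracy(hand_boundaries, auto_boundaries, tolerance_ms=TOLERANCE_MS):
    """Fraction of auto-aligned boundaries within tolerance of the hand labels.

    Both inputs are lists of boundary times in milliseconds, paired by index
    (one entry per phoneme boundary of the same utterance).
    """
    assert len(hand_boundaries) == len(auto_boundaries)
    hits = sum(
        1 for h, a in zip(hand_boundaries, auto_boundaries)
        if abs(h - a) <= tolerance_ms
    )
    return hits / len(hand_boundaries)

def accuracy_by_pair(samples, tolerance_ms=TOLERANCE_MS):
    """Group boundary errors by adjacent phoneme-class pair.

    `samples` is a list of (pair_label, hand_ms, auto_ms) tuples, e.g.
    ("plosive-vowel", 100.0, 110.0); returns per-pair accuracy.
    """
    buckets = {}
    for pair, hand_ms, auto_ms in samples:
        buckets.setdefault(pair, []).append(abs(hand_ms - auto_ms) <= tolerance_ms)
    return {pair: sum(v) / len(v) for pair, v in buckets.items()}
```

Grouping by phoneme-class pair is what surfaces the pattern reported above: near-perfect accuracy for plosive-vowel pairs versus very low accuracy for stop-nasal pairs.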

A Statistical Approach to Phoneme Segmentation through Multi-step Compensation (다단계 보상 기능을 갖는 통계적 방법에 의한 음소 분할)

  • 김홍국;이황수;은종관
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.5
    • /
    • pp.69-76
    • /
    • 1991
  • In this paper, we propose an algorithm for the automatic segmentation of phonemes based on a statistical method. The speech signal is first modeled as an AR process; then, from test statistics that take into account the likelihood ratio and the mutual information between the models before and after a spectral change, the points where the model coefficients change are detected and judged to be phoneme boundaries. Most of the phonemes missed in this step are short consonants, and their detection is improved using the signed front-to-back maximum area ratio. In addition, to reduce false alarm errors, smoothing is applied based on the distortion between two segments. Experiments with three speakers showed a non-detection error of about 10% and a false alarm error of about 20%, with almost no variation in performance across speakers; in particular, more than 90% of the detected boundaries were located within 30 ms of the true phoneme boundaries.
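
The change-detection idea above can be sketched with a simpler stand-in: a generalized likelihood ratio (GLR) over a Gaussian variance-only model instead of the paper's full AR models with mutual information. The window sizes, threshold, and function names are assumptions for illustration only:

```python
import math

def glr_statistic(window, split):
    """GLR score for a single change point at index `split`: how much better
    two separate variance estimates explain the window than one."""
    def log_var(seg):
        m = sum(seg) / len(seg)
        v = sum((x - m) ** 2 for x in seg) / len(seg)
        return math.log(max(v, 1e-12))  # guard against log(0)
    n, n1, n2 = len(window), split, len(window) - split
    return n * log_var(window) - n1 * log_var(window[:split]) - n2 * log_var(window[split:])

def detect_boundary(window, threshold=20.0):
    """Return the split index maximizing the GLR if it exceeds the threshold,
    else None (no boundary detected in this window)."""
    best = max(range(2, len(window) - 2), key=lambda s: glr_statistic(window, s))
    return best if glr_statistic(window, best) > threshold else None
```

A window whose second half has much larger variance (as at a silence-to-fricative transition) yields a sharp GLR peak near the true change point; short consonants produce weaker peaks, which is why the paper adds compensation steps.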

Phonological Error Patterns: Clinical Aspects on Coronal Feature (음운 오류 패턴: 설정성 자질의 임상적 고찰)

  • Kim, Min-Jung;Lee, Sung-Eun
    • Phonetics and Speech Sciences
    • /
    • v.2 no.4
    • /
    • pp.239-244
    • /
    • 2010
  • The purpose of this study is to investigate two phonological error patterns involving the coronal feature in children with functional articulation disorders and to compare them with those of typically developing children. We tested 120 children with functional articulation disorders and 100 typically developing children, aged 2 to 4 years, with the 'Assessment of Phonology & Articulation for Children (APAC)'. The results were as follows: (1) 37 of the disordered children substituted [+coronal] consonants for [-coronal] consonants (fronting of velars), and 9 substituted [-coronal] consonants for [+coronal] consonants (backing to velars). (2) These two phonological patterns were affected by the place of articulation of the following phoneme. (3) The fronting pattern of the children with articulation disorders was similar to that of the typically developing children, but their backing pattern differed. These results show the clinical usefulness of the coronal feature in phonological pattern analysis, the need for articulatory assessment in various phonetic contexts, and the importance of error contexts in clinical judgment.

Pronunciation Dictionary for English Pronunciation Tutoring System (영어 발음교정시스템을 위한 발음사전 구축)

  • Kim Hyosook;Kim Sunju
    • Proceedings of the KSPS conference
    • /
    • 2003.05a
    • /
    • pp.168-171
    • /
    • 2003
  • This study concerns the modeling of the pronunciation dictionary necessary for PLU (phoneme-like unit) level word recognition. Recognizing nonnative speakers' pronunciation enables the automatic diagnosis and error detection that are the core of an English pronunciation tutoring system. Such a system needs two pronunciation dictionaries: one representing standard English pronunciation, and the other representing Korean speakers' English pronunciation. Both dictionaries are integrated to generate pronunciation networks for variants.

Speech Recognition of Korean Phonemes 'ㅅ', 'ㅈ', 'ㅊ' based on Volatility and Turning Points (변동성과 전환점에 기반한 한국어 음소 'ㅅ', 'ㅈ', 'ㅊ' 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.20 no.11
    • /
    • pp.579-585
    • /
    • 2014
  • A phoneme is the minimal unit of speech, and it plays a very important role in speech recognition. This paper proposes a novel method for recognizing 'ㅅ', 'ㅈ', and 'ㅊ' among the Korean phonemes. The proposed method is based on a volatility indicator and a turning point indicator calculated for each constituent block of the input speech signal. The volatility indicator is the sum of the differences between the values of adjacent samples in a block, and the turning point indicator is the number of extremal points at which the direction of increase or decrease of the sample values is inverted within a block. The phoneme recognition algorithm combines the two indicators, using optimized thresholds for each, to determine the positions at which the three target phonemes are recognized. The experimental results show that the proposed method markedly reduces the error rate of existing methods in terms of both the false reject rate and the false accept rate.
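
The two block-level indicators described in the abstract above can be sketched as follows. The use of absolute differences, the block size, and the thresholds are illustrative assumptions; the paper optimizes its own thresholds:

```python
def volatility(block):
    """Sum of absolute differences between adjacent samples in a block."""
    return sum(abs(block[i + 1] - block[i]) for i in range(len(block) - 1))

def turning_points(block):
    """Number of local extrema: points where the sample values switch from
    increasing to decreasing or vice versa."""
    count = 0
    for i in range(1, len(block) - 1):
        left = block[i] - block[i - 1]
        right = block[i + 1] - block[i]
        if left * right < 0:  # sign change => direction inverted
            count += 1
    return count

def detect_fricative_like(block, vol_threshold, tp_threshold):
    """Combine the indicators: noisy regions such as 'ㅅ', 'ㅈ', 'ㅊ' show
    high volatility and many turning points relative to voiced speech."""
    return volatility(block) >= vol_threshold and turning_points(block) >= tp_threshold
```

Both indicators are cheap per-block statistics, which makes scanning an entire utterance for candidate regions straightforward.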

Phoneme Recognition and Error in Postlingually Deafened Adults with Cochlear Implantation (언어습득 이후 난청 성인 인공와우이식자의 음소 지각과 오류)

  • Choi, A.H.;Heo, S.D.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.8 no.3
    • /
    • pp.227-232
    • /
    • 2014
  • The aim of this study was to investigate phoneme recognition in postlingually deafened adults with cochlear implants. Twenty-one cochlear implantees who had used their implants for more than one year participated. To measure consonant perception ability, subjects were tested on 18 Korean consonants in an 'aCa' context by audition alone. The scores ranged from 11% to 86% ($60{\pm}17$%). Consonant perception ability correlated significantly with the implanted-ear hearing threshold level (p<.046). These results suggest that the consonant perception ability of postlingually deafened adult cochlear implantees depends on the implanted-ear hearing. Subjects had higher correct rates for fricatives and affricates, which have distinctive frequency bands, than for plosives, liquids, and nasals, which share the same or adjacent frequency bands. All subjects showed confusion patterns among consonants with the same manner of articulation. These consonant confusions arose because the subjects could not distinguish the differing intensities and durations of consonants occupying the same or adjacent frequency bands.

Phonetic Transcription based Speech Recognition using Stochastic Matching Method (확률적 매칭 방법을 사용한 음소열 기반 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.5
    • /
    • pp.696-700
    • /
    • 2007
  • A new method that improves the performance of a phonetic-transcription-based speech recognition system with a speaker-independent (SI) phonetic recognizer is presented. Since an SI phoneme-HMM-based speech recognition system stores only the phoneme transcription of the input sentence, the storage space can be reduced greatly. However, the performance of such a system is worse than that of a speaker-dependent system because of the phoneme recognition errors introduced by using SI models. A new training method that iteratively estimates the phonetic transcription and the transformation vectors is presented to reduce the mismatch between the training utterances and the set of SI models using speaker adaptation techniques. For speaker adaptation, stochastic matching methods are used to estimate the transformation vectors. Experiments performed over actual telephone lines show that a reduction of about 45% in the error rate can be achieved compared with the conventional method.
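
The iterative idea above can be sketched in a deliberately simplified form: alternately (1) decode with the current feature bias removed and (2) re-estimate a global bias vector that best matches the SI models. The single global bias, the nearest-mean "decoder", and all names here are simplifications for illustration, not the paper's actual stochastic matching formulation:

```python
def nearest_state(frame, bias, model_means):
    """Assign a frame to the closest SI model mean after removing the bias."""
    def dist(m):
        return sum((f - b - mu) ** 2 for f, b, mu in zip(frame, bias, m))
    return min(range(len(model_means)), key=lambda s: dist(model_means[s]))

def estimate_bias(features, model_means, assignments):
    """ML estimate of a global bias given frame-to-state assignments."""
    dims = len(features[0])
    totals = [0.0] * dims
    for f, a in zip(features, assignments):
        for d in range(dims):
            totals[d] += f[d] - model_means[a][d]
    return [t / len(features) for t in totals]

def stochastic_match(features, model_means, iters=5):
    """Alternate decoding and bias re-estimation until (roughly) converged."""
    bias = [0.0] * len(features[0])
    for _ in range(iters):
        assignments = [nearest_state(f, bias, model_means) for f in features]
        bias = estimate_bias(features, model_means, assignments)
    return bias, assignments
```

The alternation mirrors the paper's joint estimation of transcription and transformation vectors: better bias estimates yield better decodings, which in turn refine the bias.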

Acoustic and Pronunciation Model Adaptation Based on Context dependency for Korean-English Speech Recognition (한국인의 영어 인식을 위한 문맥 종속성 기반 음향모델/발음모델 적응)

  • Oh, Yoo-Rhee;Kim, Hong-Kook;Lee, Yeon-Woo;Lee, Seong-Ro
    • MALSORI
    • /
    • v.68
    • /
    • pp.33-47
    • /
    • 2008
  • In this paper, we propose a hybrid acoustic and pronunciation model adaptation method based on context dependency for Korean-English speech recognition. The proposed method proceeds as follows. First, in order to derive pronunciation variant rules, an n-best phoneme sequence is obtained by phone recognition. Second, each rule is decomposed into a context-independent (CI) or a context-dependent (CD) one; it is assumed that the different phoneme structures of Korean and English give rise to CI pronunciation variability, while coarticulation effects are related to CD pronunciation variability. Finally, we perform acoustic model adaptation for the CI pronunciation variability and pronunciation model adaptation for the CD pronunciation variability. The Korean-English speech recognition experiments show that the average word error rate (WER) is decreased by 36.0% compared with a baseline that does not include any adaptation. In addition, the proposed method achieves a lower average WER than either the acoustic model adaptation or the pronunciation model adaptation alone.

Improvement of an Automatic Segmentation for TTS Using Voiced/Unvoiced/Silence Information (유/무성/묵음 정보를 이용한 TTS용 자동음소분할기 성능향상)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • MALSORI
    • /
    • no.58
    • /
    • pp.67-81
    • /
    • 2006
  • For a large corpus of time-aligned data, HMM-based approaches are the most widely used for automatic segmentation, providing a consistent and accurate phone labeling scheme. There are two training methods for HMMs: the flat-start method minimizes human interference but has low accuracy, while the bootstrap method has high accuracy but requires manual segmentation. In this paper, a new algorithm is proposed to minimize manual work and to improve the performance of automatic segmentation. In the first phase, each frame of speech data is classified as voiced, unvoiced, or silence. In the second phase, the phoneme sequence is dynamically aligned to the voiced/unvoiced/silence sequence according to acoustic-phonetic rules. Finally, using these segmented speech data as a bootstrap, HMM-based phoneme model parameters are trained. For the performance test, the hand-labeled ETRI speech DB was used. The experimental results showed that our algorithm achieved a 10% improvement in segmentation accuracy within a 20 ms tolerable error range; for the unvoiced consonants in particular, it showed a 30% improvement.
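
The first phase above can be sketched with standard short-time features: energy separates silence from speech, and the zero-crossing rate separates unvoiced from voiced frames. The thresholds and the exact feature choices here are assumptions for illustration, not the paper's:

```python
def classify_frame(frame, energy_silence=1e-4, zcr_unvoiced=0.3):
    """Classify one frame of samples as 'V' (voiced), 'U' (unvoiced), or
    'S' (silence) from short-time energy and zero-crossing rate."""
    n = len(frame)
    energy = sum(x * x for x in frame) / n
    zcr = sum(
        1 for i in range(1, n) if (frame[i - 1] < 0) != (frame[i] < 0)
    ) / n
    if energy < energy_silence:
        return "S"   # near-zero energy: silence
    if zcr > zcr_unvoiced:
        return "U"   # noisy, frequent sign changes: unvoiced
    return "V"       # periodic, low zero-crossing rate: voiced
```

The resulting V/U/S sequence is what the second phase aligns against the phoneme sequence, since each Korean phoneme class has a predictable V/U/S pattern.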

Korean Phoneme Recognition Model with Deep CNN (Deep CNN 기반의 한국어 음소 인식 모델 연구)

  • Hong, Yoon Seok;Ki, Kyung Seo;Gweon, Gahgene
    • Annual Conference of KIPS
    • /
    • 2018.05a
    • /
    • pp.398-401
    • /
    • 2018
  • In this study, we propose a phoneme recognition model using a deep convolutional neural network (CNN) and the Connectionist Temporal Classification (CTC) algorithm that can be trained without a force-aligned corpus. Deep-learning phoneme recognition models combining recurrent neural networks (RNNs) with the CTC algorithm have recently been studied actively abroad. For Korean phoneme recognition, however, HMM-GMM systems or hybrid systems combining an artificial neural network with HMMs have mainly been used; these approaches leave less room for performance improvement than recent work elsewhere and cannot be trained without an expert-made force-aligned corpus. RNNs also require large amounts of training data and are difficult to train, which is a constraint for Korean, where corpora are scarce and foundational research has been less active. We therefore adopt the CTC algorithm, which does not require a force-aligned corpus, and build a deep learning model using a CNN, which trains faster and needs less data than an RNN, to perform Korean phoneme recognition. With this model we built three phoneme recognizers that extract the 49 phonemes of Korean, and the finally selected recognition model achieved a phoneme error rate (PER) of 9.44. Compared indirectly with prior work, this result shows that the proposed model performs comparably to or slightly better than existing approaches.
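
The PER figure quoted above is an edit-distance measure. A minimal sketch of how a phoneme error rate is typically computed (Levenshtein distance over phoneme sequences, normalized by reference length) follows; the example sequences are illustrative:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences (1-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free if phonemes match)
            )
    return dp[-1]

def phoneme_error_rate(ref, hyp):
    """PER (%) = edit distance / reference length * 100."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)
```

Because insertions, deletions, and substitutions are all counted, a PER of 9.44 means roughly one edit per eleven reference phonemes.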