• Title/Summary/Keyword: Phoneme Error

Error Correction Method Improvement System Using Out-of-Vocabulary Rejection (미등록어 거절을 이용한 오류 보정 방법 개선 시스템)

  • Ahn, Chan-Shik; Oh, Sang-Yeob
    • Journal of Digital Convergence / v.10 no.8 / pp.173-178 / 2012
  • When a recognition-vocabulary model is generated, tri-phones that were not prepared in advance are produced; the model therefore cannot provide initial parameter estimates for those words and cannot be configured properly, and the resulting loss of precision in the Gaussian models degrades recognition. To address this, we propose an error correction system that uses an out-of-vocabulary rejection algorithm. When the vocabulary recognition model is created, recognition rates are improved by rejecting vocabulary that is not registered. In addition, the system captures lexical and semantic information using probability distributions and deactivates the string before phoneme changes are applied. The error correction rate is determined from phoneme similarity and reliability; in a performance comparison, the proposed system improves the error correction rate by 2.8% over a method based on error patterns, fault patterns, and meaning patterns.
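The abstract gives no implementation detail, so the following is only a minimal sketch of the general idea of out-of-vocabulary rejection: a hypothesis is accepted only if the word is registered and a confidence score (here a frame-normalized log-likelihood ratio against a filler model, an assumption of this sketch) clears a threshold. The vocabulary, threshold, and scoring scheme are illustrative, not the authors' system.

```python
# Minimal sketch of out-of-vocabulary (OOV) rejection -- not the authors' system.
REGISTERED_VOCAB = {"open", "close", "start", "stop"}   # hypothetical vocabulary
REJECT_THRESHOLD = 0.5                                  # hypothetical threshold

def confidence_score(target_loglik: float, filler_loglik: float, n_frames: int) -> float:
    """Frame-normalized log-likelihood ratio used as a rough confidence measure."""
    return (target_loglik - filler_loglik) / max(n_frames, 1)

def accept_hypothesis(word: str, target_loglik: float,
                      filler_loglik: float, n_frames: int) -> bool:
    """Reject out-of-vocabulary words and low-confidence hypotheses."""
    if word not in REGISTERED_VOCAB:
        return False                      # unregistered word: reject outright
    score = confidence_score(target_loglik, filler_loglik, n_frames)
    return score >= REJECT_THRESHOLD      # low confidence: treat as a likely error

if __name__ == "__main__":
    print(accept_hypothesis("open", -120.0, -180.0, 90))   # True  (registered, confident)
    print(accept_hypothesis("opera", -120.0, -125.0, 90))  # False (not registered)
```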

A Study on Measuring the Speaking Rate of Speaking Signal by Using Line Spectrum Pair Coefficients

  • Jang, Kyung-A; Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.20 no.3E / pp.18-24 / 2001
  • Speaking rate expresses how many phonemes a speech signal contains in a given time; it varies with the speaker and with the characteristics of each phoneme. Current speech recognition systems require preprocessing to remove the effect of speaking-rate variation before recognition, so if the speaking rate can be estimated in advance, recognition performance can be improved. Conventional speech vocoders also decide the transmission rate by analyzing fixed-length periods regardless of how quickly phonemes change; an advance estimate of the speaking rate is therefore valuable information for speech coding as well, since it allows a variable transmission rate and improves the quality of the decoded sound. In this paper, we propose a method for representing the speaking rate as a parameter in a speech vocoder. To estimate the speaking rate, the rate of phoneme variation is estimated using Line Spectrum Pair (LSP) coefficients. Compared with a manual method based on visual inspection, the error of the proposed algorithm is 5.38% for fast utterances and 1.78% for slow utterances, and the accuracy is 98% for slow utterances and 94% for fast utterances at 30 dB and 10 dB SNR, respectively.
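As a rough sketch of the underlying idea (not the authors' algorithm), the snippet below computes frame-wise line spectral frequencies from LPC coefficients and uses their average frame-to-frame change as a proxy for how quickly the phoneme content varies. The LPC order, frame sizes, and window are assumptions of this sketch.

```python
import numpy as np
import librosa

def lpc_to_lsf(a: np.ndarray) -> np.ndarray:
    """Convert LPC coefficients a = [1, a1, ..., ap] to sorted line spectral
    frequencies (radians) via the roots of the sum/difference polynomials."""
    a_ext = np.concatenate([a, [0.0]])
    P = a_ext + a_ext[::-1]          # P(z) = A(z) + z^-(p+1) A(1/z)
    Q = a_ext - a_ext[::-1]          # Q(z) = A(z) - z^-(p+1) A(1/z)
    angles = []
    for poly in (P, Q):
        ang = np.angle(np.roots(poly))
        # keep upper-half-plane roots, drop the trivial ones near 0 and pi
        angles.append(ang[(ang > 1e-3) & (ang < np.pi - 1e-3)])
    return np.sort(np.concatenate(angles))

def lsf_change_rate(y: np.ndarray, sr: int, order: int = 10,
                    frame_len: int = 400, hop: int = 160) -> float:
    """Mean frame-to-frame Euclidean change of the LSF vector per second --
    a crude proxy for speaking rate (faster speech -> larger value)."""
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop).T
    window = np.hamming(frame_len)
    lsfs = []
    for frame in frames:
        if np.allclose(frame, 0.0):
            continue                              # skip silent frames
        lsf = lpc_to_lsf(librosa.lpc(frame * window, order=order))
        if len(lsf) == order:                     # guard against degenerate roots
            lsfs.append(lsf)
    lsfs = np.array(lsfs)
    deltas = np.linalg.norm(np.diff(lsfs, axis=0), axis=1)
    return float(np.mean(deltas) * sr / hop)
```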

Speech Recognition on Korean Monosyllable using Phoneme Discriminant Filters (음소판별필터를 이용한 한국어 단음절 음성인식)

  • Hur, Sung-Phil; Chung, Hyun-Yeol; Kim, Kyung-Tae
    • The Journal of the Acoustical Society of Korea / v.14 no.1 / pp.31-39 / 1995
  • In this paper, we construct phoneme discriminant filters (PDFs) based on linear discriminant functions. These filters are not derived from heuristic rules supplied by experts but are learned mathematically through iterative training. The proposed system is based on a piecewise linear classifier and an error-correction learning method. Speech segmentation and phoneme classification are carried out simultaneously by the PDFs. Because each filter operates independently, some speech intervals may produce multiple outputs, so we introduce unified coefficients through an output unification process. Some regions, however, show no response or are insensitive, and we apply time windows and median filters to remove these problems. The system was trained on 549 monosyllables uttered three times each by three male speakers. After the endpoints of the speech signal are detected using a threshold and the zero-crossing rate, vowels and consonants are separated by the PDFs, and the selected phoneme is passed to the following PDF. Finally, the system unifies the outputs for competitive or insensitive regions using the time window and median filter.
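As a generic illustration of a linear discriminant trained with error-correction learning (essentially a multiclass perceptron), rather than the PDF system of the paper, the sketch below trains one linear filter per class and classifies a frame by the maximum discriminant output. The feature dimensions, learning rate, and toy data are assumptions.

```python
import numpy as np

class LinearDiscriminantFilters:
    """One linear discriminant g_c(x) = w_c . x + b_c per class, trained with
    simple error-correction updates. A generic sketch, not the paper's PDFs."""

    def __init__(self, n_classes: int, n_features: int, lr: float = 0.1):
        self.W = np.zeros((n_classes, n_features))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict(self, x: np.ndarray) -> int:
        return int(np.argmax(self.W @ x + self.b))

    def fit(self, X: np.ndarray, y: np.ndarray, epochs: int = 20) -> None:
        for _ in range(epochs):
            for x, target in zip(X, y):
                pred = self.predict(x)
                if pred != target:                 # error-correction update
                    self.W[target] += self.lr * x
                    self.b[target] += self.lr
                    self.W[pred] -= self.lr * x
                    self.b[pred] -= self.lr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # two toy "phoneme" classes of 2-D features
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    clf = LinearDiscriminantFilters(n_classes=2, n_features=2)
    clf.fit(X, y)
    print(clf.predict(np.array([3.0, 3.0])))       # expected: 1
```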

Effects of auditory and visual presentation on phonemic awareness in 5- to 6-year-old children (청각적 말소리 자극과 시각적 글자 자극 제시방법에 따른 5, 6세 일반아동의 음소인식 수행력 비교)

  • Kim, Myung-Heon; Ha, Ji-Wan
    • Phonetics and Speech Sciences / v.8 no.1 / pp.71-80 / 2016
  • Phonemic awareness tasks (phoneme synthesis, phoneme elision, and phoneme segmentation) were administered under auditory and visual presentation to 40 children aged 5 and 6. Scores and error types on the sub-tasks were compared across the two presentation conditions, and the correlations between sub-task performance in the two conditions were examined. The 6-year-old group scored significantly higher on phonemic awareness than the 5-year-old group, and both groups scored significantly higher with visual presentation than with auditory presentation. Under visual presentation, performance on segmentation was significantly lower than on the other two tasks, whereas there was no significant difference among sub-tasks under auditory presentation. The 5-year-old group produced significantly more 'no response' errors than the 6-year-old group, while the 6-year-old group produced significantly more 'phoneme substitution' and 'phoneme omission' errors than the 5-year-old group. Significantly more 'phoneme omission' errors were observed in segmentation than in elision, and significantly more 'phoneme addition' errors were observed in elision than in synthesis. Finally, positive correlations were found between the auditory and visual synthesis tasks, between the auditory and visual elision tasks, and between the auditory and visual segmentation tasks. Taken together, the results suggest that children tend to rely on orthographic knowledge when first acquiring phonemic awareness, supporting the position that orthographic knowledge contributes to the development of phonemic awareness.

Feature Extraction Based on Speech Attractors in the Reconstructed Phase Space for Automatic Speech Recognition Systems

  • Shekofteh, Yasser; Almasganj, Farshad
    • ETRI Journal / v.35 no.1 / pp.100-108 / 2013
  • In this paper, a feature extraction (FE) method is proposed that is comparable to the traditional FE methods used in automatic speech recognition systems. Unlike the conventional spectral-based FE methods, the proposed method evaluates the similarities between an embedded speech signal and a set of predefined speech attractor models in the reconstructed phase space (RPS) domain. In the first step, a set of Gaussian mixture models is trained to represent the speech attractors in the RPS. Next, for a new input speech frame, a posterior-probability-based feature vector is evaluated, which represents the similarity between the embedded frame and the learned speech attractors. We conduct experiments for a speech recognition task utilizing a toolkit based on hidden Markov models, over FARSDAT, a well-known Persian speech corpus. Through the proposed FE method, we gain 3.11% absolute phoneme error rate improvement in comparison to the baseline system, which exploits the mel-frequency cepstral coefficient FE method.
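A rough sketch of this pipeline, under assumed settings (embedding dimension, delay, mixture count) rather than the paper's configuration: embed the signal by time-delay embedding, fit one GMM per attractor class, and use the normalized per-class likelihoods of a frame's embedded points as its feature vector.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(x: np.ndarray, dim: int = 3, tau: int = 5) -> np.ndarray:
    """Time-delay embedding: rows are points [x[n], x[n+tau], ..., x[n+(dim-1)tau]]."""
    n_points = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n_points] for i in range(dim)], axis=1)

def train_attractor_models(class_signals: dict, n_components: int = 8) -> dict:
    """Fit one GMM per attractor class on its embedded points (assumed setup)."""
    models = {}
    for label, signals in class_signals.items():
        points = np.vstack([delay_embed(s) for s in signals])
        models[label] = GaussianMixture(n_components=n_components,
                                        covariance_type="diag",
                                        random_state=0).fit(points)
    return models

def posterior_feature(frame: np.ndarray, models: dict) -> np.ndarray:
    """Posterior-style feature vector for one frame: normalized per-class
    likelihood of the frame's embedded points under each attractor GMM."""
    points = delay_embed(frame)
    logliks = np.array([models[k].score(points) for k in sorted(models)])
    logliks -= logliks.max()                      # numerical stability
    probs = np.exp(logliks)
    return probs / probs.sum()
```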

A knowledge-based pronunciation generation system for French (지식 기반 프랑스어 발음열 생성 시스템)

  • Kim, Sunhee
    • Phonetics and Speech Sciences / v.10 no.1 / pp.49-55 / 2018
  • This paper aims to describe a knowledge-based pronunciation generation system for French. It has been reported that a rule-based pronunciation generation system outperforms most of the data-driven ones for French; however, only a few related studies are available due to existing language barriers. We provide basic information about the French language from the point of view of the relationship between orthography and pronunciation, and then describe our knowledge-based pronunciation generation system, which consists of morphological analysis, Part-of-Speech (POS) tagging, grapheme-to-phoneme generation, and phone-to-phone generation. The evaluation results show that the word error rate of POS tagging, based on a sample of 1,000 sentences, is 10.70% and that of phoneme generation, using 130,883 entries, is 2.70%. This study is expected to contribute to the development and evaluation of speech synthesis or speech recognition systems for French.
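The paper's rule set is not reproduced here; purely as a toy illustration of how a knowledge-based grapheme-to-phoneme component can be organized (an exception lexicon consulted first, then ordered context-sensitive rewrite rules), the sketch below uses a handful of made-up, highly simplified rules and transcriptions.

```python
import re

# Hypothetical exception lexicon: irregular words looked up before any rule applies.
EXCEPTIONS = {
    "femme": "fam",          # illustrative transcriptions, not a standard phone set
    "monsieur": "məsjø",
}

# Ordered rewrite rules (toy examples only; real French G2P needs far more context).
RULES = [
    (re.compile(r"eau"), "o"),
    (re.compile(r"ch"), "ʃ"),
    (re.compile(r"qu"), "k"),
    (re.compile(r"ou"), "u"),
    (re.compile(r"oi"), "wa"),
    (re.compile(r"[sxztp]$"), ""),   # drop some common silent final consonants
    (re.compile(r"e$"), ""),         # drop final mute e
]

def g2p(word: str) -> str:
    """Dictionary lookup first, then apply the ordered rewrite rules."""
    word = word.lower()
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    phones = word
    for pattern, replacement in RULES:
        phones = pattern.sub(replacement, phones)
    return phones

if __name__ == "__main__":
    for w in ["chateau", "beaucoup", "trois", "femme"]:
        print(w, "->", g2p(w))       # ʃato, bocu, trwa, fam
```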

Performance Evaluation of English Word Pronunciation Correction System (한국인을 위한 외국어 발음 교정 시스템의 개발 및 성능 평가)

  • Kim Mu Jung; Kim Hyo Sook; Kim Sun Ju; Kim Byoung Gi; Ha Jin-Young; Kwon Chul Hong
    • MALSORI / no.46 / pp.87-102 / 2003
  • In this paper, we present an English pronunciation correction system for Korean speakers and report experimental results. The aim of the system is to detect mispronounced phonemes in spoken words and give appropriate correction comments to users. Several English pronunciation correction systems adopt speech recognition technology; however, most of them use conventional speech recognition engines and for this reason cannot give phoneme-based correction comments. In our system, we build two kinds of phoneme models, standard native-speaker models and Korean error models, and design a phoneme-based recognition network to detect Koreans' common mispronunciations. We obtain a 90% detection rate for phoneme insertions, deletions, and replacements, but detection rates for diphthong splitting and accent errors remain low.
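As a generic sketch of how a recognized phoneme sequence can be compared against the reference pronunciation to flag insertions, deletions, and replacements (the error categories reported above), the snippet below uses standard edit-distance alignment with backtracking. It is not the authors' recognition network; the example phoneme symbols are hypothetical.

```python
def align_phonemes(reference, recognized):
    """Levenshtein alignment of two phoneme lists. Returns (op, ref, rec) tuples,
    where op is 'match', 'replace', 'delete' (missing) or 'insert' (extra)."""
    n, m = len(reference), len(recognized)
    dp = [[0] * (m + 1) for _ in range(n + 1)]        # edit-distance table
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == recognized[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / replacement
    ops, i, j = [], n, m                              # backtrack the operations
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (
                0 if reference[i - 1] == recognized[j - 1] else 1):
            op = "match" if reference[i - 1] == recognized[j - 1] else "replace"
            ops.append((op, reference[i - 1], recognized[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            ops.append(("delete", reference[i - 1], None))
            i -= 1
        else:
            ops.append(("insert", None, recognized[j - 1]))
            j -= 1
    return list(reversed(ops))

if __name__ == "__main__":
    # hypothetical example: reference /r ay t/ recognized as /l a i t/
    for op in align_phonemes(["r", "ay", "t"], ["l", "a", "i", "t"]):
        print(op)
```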

Corpus-Based Unrestricted Vocabulary Mandarin TTS (코퍼스 기반 무제한 단어 중국어 TTS)

  • Yu Zheng; Ha Ju-Hong; Kim Byeongchang; Lee Gary Geunbae
    • Proceedings of the KSPS conference / 2003.10a / pp.175-179 / 2003
  • To produce high-quality synthesized speech, in terms of both intelligibility and naturalness, accurate grapheme-to-phoneme conversion and an accurate prosody model are very important. In this paper, we analyze Chinese text using segmentation, POS tagging, and unknown-word recognition. We present grapheme-to-phoneme conversion using dictionary-based and rule-based methods, and construct a prosody model using a probabilistic method together with a decision-tree-based error correction method. Based on the results of this analysis, we can select and concatenate the appropriate syllable synthesis units from the Chinese synthesis DB.
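The syllable-selection step can be illustrated schematically as a Viterbi search over candidate units that minimizes the sum of target costs (match to the required syllable, tone, and pitch specification) and concatenation costs (smoothness of adjacent joins). The cost functions and data layout below are assumptions of this sketch, not the paper's implementation.

```python
# Schematic syllable unit selection by Viterbi search (illustrative only).
def target_cost(spec, unit):
    """Penalty for a candidate unit mismatching the required syllable/tone spec."""
    cost = 0.0
    if unit["syllable"] != spec["syllable"]:
        cost += 10.0                     # wrong syllable: effectively forbidden
    if unit["tone"] != spec["tone"]:
        cost += 2.0                      # wrong tone: heavily penalized
    return cost + abs(unit["pitch"] - spec["target_pitch"]) / 50.0

def concat_cost(prev_unit, unit):
    """Penalty for a pitch discontinuity across the join (toy smoothness measure)."""
    return abs(prev_unit["pitch"] - unit["pitch"]) / 100.0

def select_units(specs, candidates_per_spec):
    """Pick one candidate unit per syllable position minimizing the total cost."""
    # best[i][k] = (accumulated cost, backpointer) for candidate k at position i
    best = [[(target_cost(specs[0], u), None) for u in candidates_per_spec[0]]]
    for i in range(1, len(specs)):
        row = []
        for u in candidates_per_spec[i]:
            tc = target_cost(specs[i], u)
            cost, back = min(
                (best[i - 1][k][0] + concat_cost(prev, u) + tc, k)
                for k, prev in enumerate(candidates_per_spec[i - 1]))
            row.append((cost, back))
        best.append(row)
    k = min(range(len(best[-1])), key=lambda j: best[-1][j][0])
    path = [k]                                       # backtrack the cheapest path
    for i in range(len(specs) - 1, 0, -1):
        k = best[i][k][1]
        path.append(k)
    path.reverse()
    return [candidates_per_spec[i][k] for i, k in enumerate(path)]
```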

Korean speech recognition based on grapheme (문자소 기반의 한국어 음성인식)

  • Lee, Mun-hak; Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.38 no.5 / pp.601-606 / 2019
  • This paper studies Korean speech recognition using grapheme units (Cho-sung [onset], Jung-sung [nucleus], Jong-sung [coda]). We build an ASR (Automatic Speech Recognition) system without a G2P (Grapheme-to-Phoneme) step and show that deep-learning-based ASR systems can learn Korean pronunciation rules without G2P. The proposed model is shown to reduce the word error rate when sufficient training data is available.
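The grapheme units mentioned above can be obtained directly from Unicode: each precomposed Hangul syllable (U+AC00..U+D7A3) decomposes arithmetically into its onset, nucleus, and optional coda. The sketch below shows this standard decomposition; it only illustrates the tokenization, not the paper's ASR model.

```python
# Decompose precomposed Hangul syllables into Cho-sung / Jung-sung / Jong-sung.
CHOSUNG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNGSUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONGSUNG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_graphemes(text: str) -> list:
    """Split Korean text into onset/nucleus/coda grapheme tokens."""
    tokens = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                  # precomposed Hangul syllable
            index = code - 0xAC00
            cho, jung, jong = index // 588, (index % 588) // 28, index % 28
            tokens.append(CHOSUNG[cho])
            tokens.append(JUNGSUNG[jung])
            if jong:                                  # the coda is optional
                tokens.append(JONGSUNG[jong])
        else:
            tokens.append(ch)                         # keep non-Hangul characters
    return tokens

if __name__ == "__main__":
    print(to_graphemes("한국어"))  # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ', 'ㅇ', 'ㅓ']
```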

A comparison of phonological error patterns in the single word and spontaneous speech of children with speech sound disorders (말소리장애 아동의 단어와 자발화 문맥의 음운오류패턴 비교)

  • Park, Kayeon; Kim, Soo-Jin
    • Phonetics and Speech Sciences / v.7 no.3 / pp.165-173 / 2015
  • This study compared the phonological error patterns and the PCC (Percentage of Correct Consonants) obtained from single-word and spontaneous-speech contexts in children with speech sound disorders (SSD) of unknown origin, examining both developmental and non-developmental error patterns according to speech context. The subjects were 15 children with SSD aged 3 to 5 years. Thirty-seven words from the APAC (Assessment of Phonology & Articulation for Children) were used for the single-word context and 100 eojeol for the spontaneous-speech context. There was no difference in PCC between the single-word and spontaneous-speech contexts. Developmental phonological error patterns that differed significantly between the two contexts were syllable deletion, word-medial onset deletion, liquid deletion, gliding, affrication, other fricative errors, tensing, and regressive assimilation; non-developmental error patterns that differed significantly were backing, phoneme addition, and aspirating. The results show that while PCC did not differ between elicited single words and spontaneous conversation, some phonological error patterns did differ between the two contexts. For immediate generalization and improvement of overall intelligibility, the more important intervention targets are the error patterns observed in spontaneous speech.
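For reference, PCC is simply the proportion of target consonants produced correctly. A minimal, hypothetical calculation is sketched below; the alignment, consonant inventory, and scoring conventions are simplifications, not those of the APAC protocol.

```python
# Minimal sketch of a Percentage of Correct Consonants (PCC) computation.
# Assumes target and produced phones are already aligned one-to-one; clinical
# scoring handles distortions, omissions, and alignment far more carefully.
KOREAN_CONSONANTS = set("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")

def pcc(aligned_pairs) -> float:
    """aligned_pairs: iterable of (target_phone, produced_phone) pairs."""
    targets = [(t, p) for t, p in aligned_pairs if t in KOREAN_CONSONANTS]
    if not targets:
        return 0.0
    correct = sum(1 for t, p in targets if t == p)
    return 100.0 * correct / len(targets)

if __name__ == "__main__":
    # e.g., target 사람 /ㅅ ㅏ ㄹ ㅏ ㅁ/ produced as /ㄷ ㅏ ㄹ ㅏ ㅁ/ (stopping of ㅅ)
    pairs = [("ㅅ", "ㄷ"), ("ㅏ", "ㅏ"), ("ㄹ", "ㄹ"), ("ㅏ", "ㅏ"), ("ㅁ", "ㅁ")]
    print(pcc(pairs))   # ≈ 66.7 (2 of 3 target consonants correct)
```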