• Title/Summary/Keyword: Phonetic Complexity

Search Results: 14

Effects of Phonetic Complexity and Articulatory Severity on Percentage of Correct Consonant and Speech Intelligibility in Adults with Dysarthria (조음복잡성 및 조음중증도에 따른 마비말장애인의 자음정확도와 말명료도)

  • Song, HanNae; Lee, Youngmee; Sim, HyunSub; Sung, JeeEun
    • Phonetics and Speech Sciences / v.5 no.1 / pp.39-46 / 2013
  • This study examined the effects of phonetic complexity and articulatory severity on the Percentage of Correct Consonants (PCC) and speech intelligibility in adults with dysarthria. Speech samples of thirty-two words from the APAC (Assessment of Phonology and Articulation of Children) were collected from 38 dysarthric speakers at one of two levels of articulatory severity (mild or mild-moderate). PCC and speech intelligibility scores were calculated at each of the four levels of phonetic complexity. A two-way mixed ANOVA revealed that (1) the mild severity group showed significantly higher PCC and speech intelligibility scores than the mild-moderate group, (2) PCC at phonetic complexity level 4 was significantly lower than at the other levels, and (3) an interaction effect of articulatory severity and phonetic complexity was observed only for PCC. Pearson correlation analysis demonstrated that the degree of correlation between PCC and speech intelligibility varied with the level of articulatory severity and phonetic complexity. The clinical implications of the findings are discussed.
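
PCC itself is a simple ratio: correctly produced target consonants over all target consonants, times 100. A minimal Python sketch of scoring it per phonetic-complexity level (the word items and transcriptions below are hypothetical, not the study's data):

```python
# Minimal sketch (not the study's scoring code): Percentage of Correct Consonants (PCC)
# computed separately for each phonetic-complexity level. Items are hypothetical.
from collections import defaultdict

def pcc_by_complexity(items):
    """items: dicts with 'level' (1-4), plus 'target' and 'produced' consonant lists per word."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        level = item["level"]
        for tgt, prod in zip(item["target"], item["produced"]):
            total[level] += 1
            if tgt == prod:                 # consonant produced correctly
                correct[level] += 1
    return {lvl: round(100.0 * correct[lvl] / total[lvl], 1) for lvl in total}

# Hypothetical example: two words scored against their target consonants.
items = [
    {"level": 1, "target": ["m", "n"],      "produced": ["m", "n"]},
    {"level": 4, "target": ["k", "s", "l"], "produced": ["k", "t", "l"]},
]
print(pcc_by_complexity(items))   # {1: 100.0, 4: 66.7}
```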

Acoustic Duration of Consonants and Words by Phonetic Complexity in Children with Functional Articulation and Phonological Disorders (기능적 조음음운장애 아동의 조음복잡성에 따른 자음과 단어의 음향학적 길이)

  • Kang, Eun-Yeong
    • Journal of The Korean Society of Integrative Medicine / v.9 no.4 / pp.167-181 / 2021
  • Purpose: This study investigated whether phonetic complexity affected the type and frequency of articulation errors and the acoustic duration of consonants and words produced by children with functional articulation and phonological disorders. Methods: The participants were 20 children with functional articulation and phonological disorders and 20 children without such disorders, aged between 3 years 7 months and 4 years 11 months. The participants were asked to name what they saw in pictures, and their responses were recorded. The types and frequencies of articulation errors and the acoustic durations of words were analyzed, with words categorized as having either high or low phonetic complexity. The acoustic durations of initial and final consonants, and of vowels following initial consonants, were compared between the groups according to articulatory complexity. Results: Children with functional articulation and phonological disorders produced articulation errors more frequently when saying high-complexity words and had longer word durations when saying low-complexity words than children without such disorders. There was no major difference in initial and final consonant duration between the groups, but the main effect of articulatory complexity on the duration of both consonants was significant. Conclusion: These results suggest that the articulatory-phonetic structure of words influences the speech motor control ability of children with functional articulation and phonological disorders. When articulating consonants, children with functional articulation and phonological disorders showed speech motor control ability similar to that of children without such disorders.
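
The analysis described here is essentially a group-by-complexity comparison of measured durations. A hedged sketch of that kind of comparison with invented numbers (not the study's data or statistical procedure):

```python
# Hypothetical sketch of a group x complexity comparison on word durations (seconds).
# All values below are invented; a real analysis would use measured durations per child.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "group":      ["disorder", "disorder", "typical", "typical"] * 3,
    "complexity": ["high", "low", "high", "low"] * 3,
    "duration":   [0.74, 0.71, 0.62, 0.58, 0.78, 0.69, 0.61, 0.57, 0.73, 0.70, 0.63, 0.59],
})

# Mean duration per cell of the 2 x 2 design.
print(df.groupby(["group", "complexity"])["duration"].mean())

# Simple pairwise check: do the groups differ on low-complexity words?
low = df[df.complexity == "low"]
res = stats.ttest_ind(low[low.group == "disorder"]["duration"],
                      low[low.group == "typical"]["duration"])
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```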

Speaker Identification using Phonetic GMM (음소별 GMM을 이용한 화자식별)

  • Kwon Sukbong; Kim Hoi-Rin
    • Proceedings of the KSPS conference / 2003.10a / pp.185-188 / 2003
  • In this paper, we construct phonetic GMMs for a text-independent speaker identification system. The basic idea is to combine the advantages of the baseline GMM and the HMM: GMMs are better suited to text-independent speaker identification, whereas HMMs work better in text-dependent systems. A phonetic GMM represents a more sophisticated, text-dependent speaker model built on top of a text-independent speaker model. In speaker identification, phonetic GMMs using HMM-based speaker-independent phoneme recognition yield better performance than the baseline GMM. In addition, an N-best recognition algorithm is used to reduce the computational complexity and to make the system applicable to new speakers.
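
As a rough illustration of the idea, one GMM can be trained per phoneme from a speaker's enrollment frames, and a test utterance scored by routing each frame to the GMM of its recognized phoneme. The sketch below assumes frame-level phoneme labels already exist (from some external recognizer) and is not the authors' implementation:

```python
# Minimal sketch of per-phoneme GMM speaker scoring (not the paper's system).
# Assumes an external phoneme recognizer has already labeled each feature frame.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_phonetic_gmms(frames_by_phoneme, n_components=4):
    """frames_by_phoneme: dict phoneme -> (n_frames, n_dims) feature array for one speaker."""
    return {ph: GaussianMixture(n_components, covariance_type="diag",
                                random_state=0).fit(feats)
            for ph, feats in frames_by_phoneme.items()}

def score_speaker(gmms, frames, phoneme_labels):
    """Sum per-frame log-likelihoods, each scored by the GMM of its recognized phoneme."""
    score = 0.0
    for x, ph in zip(frames, phoneme_labels):
        if ph in gmms:                            # skip phonemes unseen in enrollment
            score += gmms[ph].score(x[None, :])   # log-likelihood of this single frame
    return score

# Identification: pick the enrolled speaker whose phonetic GMMs give the highest score.
# speakers = {name: train_phonetic_gmms(enrollment_frames[name]) for name in enrollment_frames}
# best = max(speakers, key=lambda s: score_speaker(speakers[s], test_frames, test_phonemes))
```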

Approximated Posterior Probability for Scoring Speech Recognition Confidence

  • Kim Kyuhong; Kim Hoirin
    • MALSORI / no.52 / pp.101-110 / 2004
  • This paper proposes a new confidence measure for utterance verification based on posterior probability approximation. The proposed method approximates probabilistic likelihoods by using Viterbi search characteristics and a clustered phoneme confusion matrix. Our measure is a weighted linear combination of acoustic and phonetic confidence scores. The proposed algorithm shows better performance than conventional confidence measures, even with reduced computational complexity.
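
The combination step itself is a single weighted sum. A schematic sketch (the weight and score values are placeholders, not definitions from the paper):

```python
# Schematic only: combine an acoustic confidence score with a phonetic confidence score.
# The weight, the threshold, and both scores are placeholders, not the paper's values.

def combined_confidence(acoustic_score: float, phonetic_score: float, w: float = 0.7) -> float:
    """Weighted linear combination of acoustic and phonetic confidence scores."""
    return w * acoustic_score + (1.0 - w) * phonetic_score

# A hypothesized word would be accepted when its combined confidence clears a threshold.
accept = combined_confidence(0.82, 0.64) > 0.7
```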

A Study of Establishing Standard for Selecting Musical Materials for Teaching English Pronunciation in Elementary School (초등학교 영어 발음교육을 위한 음악자료 선정 기준 설정 연구)

  • Hong Kyungsuk
    • Proceedings of the KSPS conference / 2002.11a / pp.60-63 / 2002
  • The purpose of this paper is to establish standards for selecting musical materials for teaching English pronunciation in elementary school. For this purpose, 110 songs from Wee Sing and the elementary school curricula for English and Music were analyzed, and the results became the basis for establishing the standards. Four standards were established, each divided into several steps according to degree of complexity. The degree of complexity of two songs was then determined to assess the possibility of application.

Fast Time-Scale Modification of Speech Using Nonlinear Clipping Methods

  • Jung, Ho-Young; Kim, Hyung-Soon; Lee, Sung-Joo
    • MALSORI / no.59 / pp.69-87 / 2006
  • Among conventional time-scale modification (TSM) methods, the synchronized overlap-and-add (SOLA) method is widely used due to its good performance relative to its computational complexity. However, the SOLA method remains complex because of its synchronization procedure, which uses the normalized cross-correlation function. In this paper, we introduce a computationally efficient SOLA method utilizing a 3-level center clipping method as well as zero-crossing and level-crossing information. The results of a subjective preference test indicate that the proposed method can reduce the computational complexity by over 80% compared with the conventional SOLA method, without serious degradation of the synthesized speech quality.
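
The efficiency gain comes largely from replacing full-precision cross-correlation with correlation over center-clipped samples. A rough sketch of the 3-level clipping idea (the clipping ratio and window lengths are assumptions; the paper's additional zero-crossing and level-crossing heuristics are not shown):

```python
# Rough sketch of 3-level center clipping used to cheapen the SOLA synchronization
# search; the threshold ratio and shift range are illustrative, not the paper's settings.
import numpy as np

def center_clip_3level(x, ratio=0.3):
    """Map samples to {-1, 0, +1} relative to a threshold set from the peak amplitude."""
    threshold = ratio * np.max(np.abs(x))
    return np.where(x > threshold, 1, np.where(x < -threshold, -1, 0)).astype(np.int8)

def best_overlap_shift(prev_tail, next_head, max_shift=80):
    """Return the shift (0..max_shift-1) maximizing correlation between clipped buffers.
    With ternary samples the products reduce to sign agreements, so the inner sum
    needs no true multiplications. Both buffers are assumed equally long (> max_shift)."""
    a = center_clip_3level(prev_tail)
    b = center_clip_3level(next_head)
    n = len(b) - max_shift                       # length of the compared overlap window
    scores = [int(np.sum(a[k:k + n] * b[:n])) for k in range(max_shift)]
    return int(np.argmax(scores))
```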

A study on extraction of the frames representing each phoneme in continuous speech (연속음에서의 각 음소의 대표구간 추출에 관한 연구)

  • 박찬응; 이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.4 / pp.174-182 / 1996
  • In a continuous speech recognition system, it is possible to handle an unlimited number of words by using a limited number of phonetic units such as phonemes. Dividing continuous speech into a string of phonemes prior to the recognition process can lower the complexity of the system, but because of coarticulation between neighboring phonemes it is very difficult to extract their boundaries exactly. In this paper, we propose an algorithm that extracts short segments representing each phoneme instead of extracting phoneme boundaries. Segments of lower and higher spectral change are detected; phoneme changes are then detected by applying a distance measure to the low-spectral-change segments, while high-spectral-change segments are regarded as transition segments or short phonemes. Finally, the low-spectral-change segments and the midpoints of the high-spectral-change segments are taken to represent each phoneme. Cepstral coefficients and a weighted cepstral distance are used as the speech feature and distance measure because of their low computational cost, and the speech data used in the experiment were recorded in quiet and ordinary indoor environments. Experimental results show that the proposed algorithm achieves higher performance with less computation than conventional segmentation algorithms, and it can be applied usefully in phoneme-based continuous speech recognition.
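
A hedged sketch of the kind of frame-to-frame spectral-change measure the abstract describes, using a weighted cepstral distance (the weighting scheme and threshold are illustrative assumptions, not the paper's exact definitions):

```python
# Illustrative sketch: frame-to-frame weighted cepstral distance used to flag regions
# of low spectral change as candidate "representative" segments for each phoneme.
# The liftering weights and threshold are assumptions, not the paper's settings.
import numpy as np

def weighted_cepstral_distance(c1, c2):
    """Weighted Euclidean distance between two cepstral vectors (index-weighted liftering)."""
    weights = np.arange(1, len(c1) + 1)          # emphasize higher-order coefficients
    return np.sqrt(np.sum((weights * (c1 - c2)) ** 2))

def low_change_frames(cepstra, threshold=1.0):
    """Return indices of frames whose distance to the next frame is below the threshold."""
    dists = np.array([weighted_cepstral_distance(cepstra[i], cepstra[i + 1])
                      for i in range(len(cepstra) - 1)])
    return np.where(dists < threshold)[0]
```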

Implementation of Wideband Waveform Interpolation Coder for TTS DB Compression (TTS DB 압축을 위한 광대역 파형보간 부호기 구현)

  • Yang, Hee-Sik; Hahn, Min-Soo
    • MALSORI / v.55 / pp.143-158 / 2005
  • An adequate compression algorithm is essential to achieve a high-quality embedded TTS system. In this paper, we propose a waveform interpolation coder for TTS corpus compression, chosen after investigating many speech coders. Unlike speech coders in communication systems, compression rate and quality are more important factors in TTS DB compression than other performance criteria. We therefore select the waveform interpolation algorithm because it provides good speech quality at a high compression rate, at the cost of complexity. The implemented coder has a bit rate of 6 kbps with a quality degradation of 0.47. This performance indicates that waveform interpolation is adequate for TTS DB compression, with some further study.
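
For a sense of scale, assuming the source corpus is stored as 16 kHz, 16-bit linear PCM (an assumption; the abstract does not state the source format), 6 kbps corresponds to roughly a 43:1 reduction:

```python
# Hypothetical back-of-the-envelope: compression ratio of a 6 kbps coder versus
# 16 kHz, 16-bit PCM source material (the source format is an assumption here).
pcm_bps = 16_000 * 16          # 256,000 bits per second
coded_bps = 6_000
print(pcm_bps / coded_bps)     # ≈ 42.7
```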

A Study on the Voice Conversion with HMM-based Korean Speech Synthesis (HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구)

  • Kim, Il-Hwan; Bae, Keun-Sung
    • MALSORI / v.68 / pp.65-74 / 2008
  • Statistical parametric speech synthesis based on hidden Markov models (HMMs) has grown in popularity over the last few years because it requires less memory and lower computational complexity, and is better suited to embedded systems, than a corpus-based unit-concatenation text-to-speech (TTS) system. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristic conversion using an HMM-based Korean speech synthesis system. The results show that conversion of voice characteristics can be achieved using only a few sentences uttered by a target speaker. Synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.
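
The "transforming HMM parameters" step can be pictured as a global linear transform of the model's Gaussian mean vectors, estimated from a small amount of target-speaker data. The sketch below is a deliberately simplified least-squares stand-in for that idea, not the adaptation algorithm actually used in the paper:

```python
# Very simplified illustration of speaker adaptation as a global linear transform of
# Gaussian mean vectors (MLLR-style), estimated here by plain least squares rather
# than the EM-based procedure a real HMM synthesis system would use.
import numpy as np

def estimate_global_transform(source_means, target_means):
    """Fit mu_adapted ≈ [mu, 1] @ W from paired source/target mean vectors."""
    X = np.hstack([source_means, np.ones((len(source_means), 1))])   # append bias term
    W, *_ = np.linalg.lstsq(X, target_means, rcond=None)             # shape (dim+1, dim)
    return W

def adapt_means(source_means, W):
    """Apply the fitted transform to every mean vector in the model."""
    X = np.hstack([source_means, np.ones((len(source_means), 1))])
    return X @ W
```

In a real system the same transform would be shared across many Gaussians, which is what lets a handful of adaptation sentences move the whole model toward the target speaker.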

Performance Improvement of Korean Connected Digit Recognition Using Various Discriminant Analyses (다양한 변별분석을 통한 한국어 연결숫자 인식 성능향상에 관한 연구)

  • Song Hwa Jeon; Kim Hyung Soon
    • MALSORI / no.44 / pp.105-113 / 2002
  • In Korean, each digit is monosyllabic and some pairs are known to have high confusability, causing performance degradation in connected digit recognition systems. To improve performance, in this paper we employ various discriminant analyses (DA), including Linear DA (LDA), Weighted Pairwise Scatter LDA (WPS-LDA), Heteroscedastic Discriminant Analysis (HDA), and Maximum Likelihood Linear Transformation (MLLT). We also examine several combinations of these DA methods for additional performance improvement. Experimental results show that applying any of the above DA methods improves string accuracy, but the amount of improvement varies with the model complexity, i.e., the number of mixtures per state. In particular, more than 20% string error reduction over the baseline system is achieved by applying MLLT after WPS-LDA when the class level of DA is defined as a tied state and one mixture per state is used.
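
Of the transforms listed, plain LDA is the easiest to illustrate: project frame-level features onto a discriminative subspace using class labels such as tied HMM states. A hedged sketch using scikit-learn on made-up frame data (the dimensions are illustrative; WPS-LDA, HDA, and MLLT are not shown):

```python
# Illustrative sketch of the plain-LDA step on hypothetical frame-level features.
# scikit-learn's LDA is used as a stand-in; all dimensions and labels are made up.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
frames = rng.normal(size=(1000, 39))             # e.g. 39-dim MFCC + delta frames
states = rng.integers(0, 20, size=1000)          # 20 tied-state class labels (made up)

lda = LinearDiscriminantAnalysis(n_components=19)    # at most n_classes - 1 components
projected = lda.fit_transform(frames, states)        # discriminatively transformed features
print(projected.shape)                               # (1000, 19)
```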