• Title/Summary/Keyword: Speech


Acoustic-Phonetic Phenotypes in Pediatric Speech Disorders: An Interdisciplinary Approach

  • Bunnell, H. Timothy
    • Proceedings of the KSPS conference / 2006.11a / pp.31-36 / 2006
  • Research in the Center for Pediatric Auditory and Speech Sciences (CPASS) attempts to characterize, or phenotype, children with speech delays on the basis of acoustic-phonetic evidence and to relate those phenotypes to chromosome loci believed to be involved in language and speech. To achieve this goal we have adopted a highly interdisciplinary approach that merges fields as diverse as automatic speech recognition, human genetics, neuroscience, epidemiology, and speech-language pathology. In this presentation I trace the background of the project and the rationale for our approach. Analyses of a large amount of speech recorded from 18 children with speech delays illustrate the approach we will take to characterize the acoustic-phonetic properties of disordered speech in young children. The ultimate goal of this work is to develop non-invasive, objective measures of speech development that can better identify which children with apparent speech delays are most in need of, or would benefit most from, therapeutic services.


Rhythmic Differences between Spontaneous and Read Speech of English

  • Kim, Sul-Ki; Jang, Tae-Yeoub
    • Phonetics and Speech Sciences / v.1 no.3 / pp.49-55 / 2009
  • This study investigates whether rhythm metrics can capture the rhythmic differences between spontaneous and read English speech. Transcriptions of spontaneous speech tokens extracted from a corpus are read aloud by three native English speakers to generate corresponding read-speech tokens, and the two data sets are compared in terms of seven rhythm measures suggested by previous studies. Results show a significant difference in the vowel-based metrics (VarcoV and nPVI-V) between spontaneous and read speech, reflecting greater variability in vocalic intervals in spontaneous speech than in read speech. The study is especially meaningful in that it demonstrates a way to differentiate and parameterize speech styles in numerical terms (both metrics are sketched after this entry).

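Both vowel-based metrics reduce to simple formulas over vocalic interval durations. The sketch below is a hypothetical illustration assuming the standard definitions (VarcoV as the coefficient of variation of vocalic intervals, nPVI-V as the mean normalized pairwise difference); the durations are invented and do not reproduce the paper's measurements.

```python
# Minimal sketch of the two vowel-based rhythm metrics named in the abstract,
# assuming their standard definitions; the interval durations are invented.
import statistics

def varco_v(durs):
    """VarcoV: standard deviation of vocalic intervals over their mean, x100."""
    return 100 * statistics.stdev(durs) / statistics.mean(durs)

def npvi_v(durs):
    """nPVI-V: mean normalized difference of successive vocalic intervals, x100."""
    diffs = [abs(a - b) / ((a + b) / 2) for a, b in zip(durs, durs[1:])]
    return 100 * statistics.mean(diffs)

# Invented vocalic interval durations (seconds) for one utterance.
durs = [0.08, 0.14, 0.06, 0.11, 0.09]
print(f"VarcoV = {varco_v(durs):.1f}, nPVI-V = {npvi_v(durs):.1f}")
```

Higher values of either metric correspond to the greater vocalic variability the study reports for spontaneous speech.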

Noise Robust Speech Recognition Based on Noisy Speech Acoustic Model Adaptation

  • Chung, Yongjoo
    • Phonetics and Speech Sciences / v.6 no.2 / pp.29-34 / 2014
  • In Vector Taylor Series (VTS)-based noisy speech recognition, Hidden Markov Models (HMMs) are usually trained on clean speech, but better performance can be expected by training the HMMs on noisy speech. In a previous study, we found that Minimum Mean Square Error (MMSE) estimation of the training noisy speech in the log-spectrum domain produces improved recognition results; however, because that algorithm operated in the log-spectrum domain, it could not be used for HMM adaptation. In this paper, we modify the previous algorithm to derive a novel mathematical relation between test and training noisy speech in the cepstrum domain, and we adapt the means and covariances of the noisy-speech HMM trained with Multi-condition TRaining (MTR). In noisy speech recognition experiments on the Aurora 2 database, the proposed method produced a 10.6% relative improvement in Word Error Rate (WER) over the MTR method, while the previous MMSE estimation of the training noisy speech produced a 4.3% relative improvement, demonstrating the superiority of the proposed method (a generic VTS adaptation step is sketched after this entry).
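
For context, the zeroth-order VTS relation that such adaptation methods build on can be sketched as follows. This is a generic illustration with invented 13-dimensional cepstral vectors; it does not reproduce the paper's novel cepstrum-domain relation between test and training noisy speech.

```python
# Generic zeroth-order VTS mean adaptation:
#   mu_y = mu_x + C.log(1 + exp(C^-1 (mu_n - mu_x)))
# Dimensions, DCT handling, and the noise estimate are illustrative assumptions;
# real front ends use a truncated (pseudo-inverse) DCT between cepstra and log-spectra.
import numpy as np
from scipy.fftpack import dct, idct

def vts_adapt_mean(mu_x_cep, mu_n_cep):
    """Shift a trained cepstral mean toward the noisy environment."""
    diff_log = idct(mu_n_cep - mu_x_cep, norm="ortho")  # to the log-spectral domain
    g = np.log1p(np.exp(diff_log))                      # mismatch function g(.)
    return mu_x_cep + dct(g, norm="ortho")              # back to the cepstral domain

mu_x = np.random.randn(13)  # HMM Gaussian mean (13-dim cepstra), invented
mu_n = np.random.randn(13)  # estimated noise mean in the same domain, invented
print(vts_adapt_mean(mu_x, mu_n))
```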

Korean speakers hyperarticulate vowels in polite speech

  • Oh, Eunhae; Winter, Bodo; Idemaru, Kaori
    • Phonetics and Speech Sciences / v.13 no.3 / pp.15-20 / 2021
  • In line with recent attention to the multimodal expression of politeness, the present study examined the association between polite speech and acoustic features through an analysis of vowels produced in casual and polite speech contexts in Korean. Fourteen adult native speakers of Seoul Korean produced utterances in two social conditions eliciting polite (professor) and casual (friend) speech. Vowel duration and the first (F1) and second (F2) formants of seven sentence- and phrase-initial monophthongs were measured. The results showed that polite speech shares acoustic similarities with vowel production in clear speech: speakers showed greater vowel space expansion in polite than in casual speech, in an effort to enhance perceptual intelligibility (one common way to quantify such expansion is sketched after this entry). In particular, female speakers hyperarticulated (front) vowels in polite speech, independent of speech rate. The implications for the acoustic encoding of social stance in polite speech are further discussed.
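
One common way to quantify vowel space expansion, though not necessarily the paper's exact measure, is the area of the convex hull spanned by each vowel's mean (F1, F2) point. The formant values below are invented for illustration.

```python
# Hypothetical sketch: vowel space area as the convex-hull area of per-vowel
# mean (F1, F2) points in Hz. Values are invented, not the study's data.
import numpy as np
from scipy.spatial import ConvexHull

casual = np.array([[750, 1300], [320, 2200], [350, 800], [500, 1700]])
polite = np.array([[800, 1250], [290, 2350], [330, 750], [520, 1750]])

def vowel_space_area(formants):
    """Area (Hz^2) of the convex hull spanned by the vowel means."""
    return ConvexHull(formants).volume  # for 2-D points, .volume is the area

print(f"casual: {vowel_space_area(casual):,.0f} Hz^2")
print(f"polite: {vowel_space_area(polite):,.0f} Hz^2")  # larger = more expanded
```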

Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis

  • Kumar, Sandeep
    • ETRI Journal / v.43 no.1 / pp.82-94 / 2021
  • In this work, six voiced/unvoiced speech classifiers based on the autocorrelation function (ACF), average magnitude difference function (AMDF), cepstrum, weighted ACF (WACF), zero-crossing rate and signal energy (ZCR-E), and neural networks (NNs) have been simulated and implemented in real time using the TMS320C6713 DSP starter kit. The classifiers have been integrated into a linear-predictive-coding-based speech analysis-synthesis system and compared in terms of voiced/unvoiced classification accuracy, speech quality, and computation time. The accuracy and speech-quality results show that the NN-based classifier performs better than the ACF-, AMDF-, cepstrum-, WACF-, and ZCR-E-based classifiers in both clean and noisy environments. The computation-time results show that the AMDF-based classifier is computationally the simplest and therefore the fastest, while the NN-based classifier requires the most computation (a minimal ZCR-E decision rule is sketched after this entry).
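
Of the six classifiers, the ZCR-E rule is the simplest to illustrate: voiced frames tend to have low zero-crossing rates and high energy. The sketch below is a generic frame-level decision with invented thresholds, not the paper's tuned implementation.

```python
# Minimal ZCR-E voiced/unvoiced decision for one frame; thresholds are
# illustrative assumptions, not values from the paper.
import numpy as np

def zcr_energy_vuv(frame, zcr_thresh=0.15, energy_thresh=1e-3):
    """Return True for a voiced frame: low zero-crossing rate, high energy."""
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    energy = np.mean(frame ** 2)
    return zcr < zcr_thresh and energy > energy_thresh

# Example: a 30 ms frame of a 120 Hz sine (voiced-like) sampled at 8 kHz.
t = np.arange(240) / 8000.0
print(zcr_energy_vuv(np.sin(2 * np.pi * 120 * t)))  # True
```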

Robust Distributed Speech Recognition in Noisy Environments Using MESS and EH-VAD

  • Choi, Gab-Keun; Kim, Soon-Hyob
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.1 / pp.101-107 / 2011
  • Background noise and channel distortion are major obstacles to the practical use of speech recognition. Noise degrades the performance of speech recognition systems, and Distributed Speech Recognition (DSR)-based recognition has difficulty improving performance for the same reason. To improve DSR-based speech recognition in noisy environments, this paper therefore proposes a method that detects the speech region accurately so that accurate features can be extracted. The proposed method distinguishes speech from noise using spectral entropy and detection of the spectral energy of speech. Detection based on spectral energy alone performs well at relatively high SNR (15 dB), but as the noise environment varies, the threshold between speech and noise also varies, and detection performance degrades at low SNR (0 dB). The proposed method therefore combines spectral entropy with the harmonics of speech for more reliable detection, and the more precise speech detection in turn improves the performance of the Advanced Front-End (AFE). Experimental results show that the proposed method achieves better recognition performance in noisy environments (the spectral-entropy cue is sketched after this entry).
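
The spectral-entropy cue exploits the fact that voiced speech concentrates energy at harmonics while noise spreads it across the band. Below is a minimal sketch with an assumed frame length; it makes no claim to reproduce the paper's multiband processing.

```python
# Shannon entropy of the normalized power spectrum: low for peaky (speech-like)
# frames, high for flat (noise-like) frames. Frame size is an assumption.
import numpy as np

def spectral_entropy(frame):
    """Entropy (bits) of the power spectrum treated as a probability mass."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    p = power / (np.sum(power) + 1e-12)
    return -np.sum(p * np.log2(p + 1e-12))

fs = 8000
t = np.arange(240) / fs
voiced = np.sin(2 * np.pi * 150 * t)   # harmonic, peaky spectrum -> low entropy
noise = np.random.randn(240)           # broadband, flat spectrum -> high entropy
print(spectral_entropy(voiced), spectral_entropy(noise))
```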

Speech Outcomes in 5-Year-Old Korean Children with Bilateral Cleft Lip and Palate

  • Kyung S. Koh; Seungeun Jung; Bo Ra Park; Tae-Suk Oh; Young Chul Kim; Seunghee Ha
    • Archives of Plastic Surgery / v.51 no.1 / pp.80-86 / 2024
  • Background: Among the cleft types, bilateral cleft lip and palate (BCLP) generally requires multiple surgical procedures and extended speech therapy to achieve normal speech development. This study aimed to describe speech outcomes in 5-year-old Korean children with BCLP and to examine whether normal speech can be achieved before starting school. Methods: This retrospective study analyzed 52 children with complete BCLP who underwent primary palatal surgery at a tertiary medical center. Three speech-language pathologists made perceptual judgments on recordings from a follow-up speech assessment at age five, rating articulation, speech intelligibility, resonance, and voice with the Cleft Audit Protocol for Speech-Augmented-Korean Modification. Results: At the age of five, 65 to 70% of children with BCLP had articulation and resonance within normal or acceptable ranges, while seven children (13.5%) still needed both additional speech therapy and palatal surgery for persistent velopharyngeal insufficiency and speech problems. Conclusion: This study confirms that routine follow-up speech assessments are essential, as a substantial number of children with BCLP require secondary surgical procedures and extended speech therapy to achieve normal speech development.

A Comparative Study on Speech Rate Variation between Japanese/Chinese Learners of Korean and Native Korean Speakers

  • Kim, Miran; Gang, Hyeon-Ju; Ro, Juhyoun
    • Korean Linguistics / v.63 / pp.103-132 / 2014
  • This study compares various speech rates of Korean learners with those of native Korean speakers. Speech data were collected from 34 native Koreans and 33 learners of Korean (19 Chinese and 14 Japanese). Each participant recorded a nine-syllable Korean sentence at three speech rate types, and a total of 603 speech samples were analyzed by rate type (normal, slow, and fast), native language (Korean, Chinese, Japanese), and learner proficiency level (beginner, intermediate, and advanced). We found that learners' L1 background plays a role in how different speech rates are categorized in the L2 (Korean), and that learner proficiency correlates with an increase in speaking rate regardless of rate category. More importantly, the faster speech rates found at the advanced level do not necessarily match the native speakers' rate categories (the basic rate measure is sketched after this entry). This means that learning speech rate categories can be more complex than proficiency or fluency alone would suggest: rate categories may not be acquired automatically in the course of second language learning, and implicit or explicit exposure to various rate types is necessary for learners to acquire a high level of communicative skill, including speech rate variation. The paper discusses several pedagogical implications for teaching pronunciation to second language learners.
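
For reference, the basic quantity compared across rate types is straightforward to compute; the durations below are invented for the nine-syllable sentence.

```python
# Toy speech-rate computation (syllables per second) per rate type; the
# utterance durations are invented, not the study's data.
durations = {"slow": 3.2, "normal": 2.1, "fast": 1.4}  # utterance length (s)
n_syllables = 9

for rate_type, dur in durations.items():
    print(f"{rate_type}: {n_syllables / dur:.1f} syllables/s")
```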

A Preliminary Study on the Correlation between Voice Characteristics and Speech Features

  • Han, Sung-Man; Kim, Sang-Beom; Kim, Jong-Yeol; Kwon, Chul-Hong
    • Phonetics and Speech Sciences / v.3 no.4 / pp.85-91 / 2011
  • Sasang constitutional medicine uses voice characteristics to diagnose a person's constitution. To classify Sasang constitutional groups using speech information technology, this study aims to establish the relationship between Sasang constitutional groups and their corresponding voice characteristics by investigating various speech feature variables, including features related to the speech source and the vocal tract filter (two representative source features are sketched after this entry). Experimental results show statistically significant correlations between voice characteristics and some of the speech feature variables.

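As an illustration of source-related features of the kind such studies examine, the sketch below estimates mean F0 and local jitter with librosa's pYIN tracker. The file name and pitch range are assumptions, and the paper's actual feature set is not reproduced here.

```python
# Hedged sketch: mean F0 and a frame-level approximation of local jitter.
# "voice_sample.wav" is a hypothetical file; fmin/fmax are assumed bounds.
import numpy as np
import librosa

y, sr = librosa.load("voice_sample.wav", sr=None)
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]  # keep voiced frames only

periods = 1.0 / f0      # period estimates from the frame-level F0 track
jitter_local = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

print(f"mean F0: {np.mean(f0):.1f} Hz, local jitter: {100 * jitter_local:.2f}%")
```

True cycle-to-cycle jitter is measured on glottal periods; the frame-level track above is a coarse stand-in for illustration.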

Speech Enhancement Using Lip Information and SFM

  • Baek, Seong-Joon; Kim, Jin-Young
    • Speech Sciences / v.10 no.2 / pp.77-84 / 2003
  • In this research, we locate the beginning of speech and detect the stationary speech region using lip information. By taking a running average of the estimated speech signal over the stationary region, we reduce the musical noise inherent in the conventional MMSE (Minimum Mean Square Error) speech enhancement algorithm. In addition, the SFM (Spectral Flatness Measure; sketched after this entry) is incorporated to reduce speech signal estimation errors caused by speaking habits and missing lip information. The proposed algorithm with Wiener filtering outperforms conventional methods in MOS (Mean Opinion Score) tests.

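The SFM itself is the ratio of the geometric to the arithmetic mean of the power spectrum: near 0 for tonal (speech-like) frames, markedly higher for noise-like frames. A minimal sketch with an assumed frame size follows.

```python
# Spectral flatness measure for one frame; frame length is an assumption.
import numpy as np

def sfm(frame):
    """Spectral flatness: geometric mean / arithmetic mean of the power spectrum."""
    power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12
    geometric = np.exp(np.mean(np.log(power)))
    return geometric / np.mean(power)

t = np.arange(256) / 8000.0
print(f"tone:  {sfm(np.sin(2 * np.pi * 200 * t)):.3f}")  # near 0 (tonal)
print(f"noise: {sfm(np.random.randn(256)):.3f}")          # markedly higher
```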