• Title/Summary/Keyword: speech features

Search Result 647

Analysis of the Timing of Spoken Korean Using a Classification and Regression Tree (CART) Model

  • Chung, Hyun-Song;Huckvale, Mark
    • Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.77-91
    • /
    • 2001
  • This paper investigates the timing of Korean spoken in a news-reading speech style in order to improve the naturalness of durations used in Korean speech synthesis. Each segment in a corpus of 671 read sentences was annotated with 69 segmental and prosodic features so that the measured duration could be correlated with the context in which it occurred. A CART model based on the features showed a correlation coefficient of 0.79 with an RMSE (root mean squared prediction error) of 23 ms between actual and predicted durations on held-out test data. These results are comparable with recently published results for Korean and similar to results found in other languages. An analysis of the classification tree shows that phrasal structure has the greatest effect on segment duration, followed by syllable structure and the manner features of surrounding segments. The place features of surrounding segments have only small effects. The model has application in Korean speech synthesis systems.

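The duration-prediction setup described above can be sketched with scikit-learn's `DecisionTreeRegressor` standing in for the paper's CART model. The three binary context features and the duration effects below are illustrative only, not the paper's 69-feature annotation scheme.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy contextual features: [phrase_final, open_syllable, next_is_fricative]
X = rng.integers(0, 2, size=(500, 3))
# Toy durations (ms): base duration plus effects loosely mimicking
# phrase-final lengthening, syllable structure, and neighboring manner
y = 60 + 40 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 5, 500)

# Fit the tree on the first 400 segments, evaluate on the held-out rest
tree = DecisionTreeRegressor(max_depth=3).fit(X[:400], y[:400])
pred = tree.predict(X[400:])

rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))
corr = float(np.corrcoef(pred, y[400:])[0, 1])
print(f"RMSE = {rmse:.1f} ms, r = {corr:.2f}")
```

Inspecting the fitted tree's top split would show which context feature carries the largest durational effect, mirroring the paper's analysis of phrasal structure.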

An Optimality Theoretic Approach to the Feature Model for Speech Understanding

  • Kim, Kee-Ho
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.109-124
    • /
    • 1997
  • This paper shows how a distinctive feature model can be effectively implemented in speech understanding within the framework of Optimality Theory (OT): that is, how distinctive features can be optimally extracted from a given speech signal, and how segments can be chosen as optimal among plausible candidates. The paper also shows how the resulting sequence of segments can be matched with optimal words in a lexicon.

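The core OT mechanism mentioned above, choosing an optimal candidate under a strict constraint ranking, can be sketched in a few lines: comparing violation profiles lexicographically realizes strict domination. The constraint names and candidate segments here are hypothetical, not taken from the paper.

```python
# Constraints listed from highest- to lowest-ranked (hypothetical names)
RANKED_CONSTRAINTS = ["Faith(voice)", "Faith(place)", "*ComplexOnset"]

def optimal_candidate(tableau):
    """tableau: dict mapping candidate -> violation counts per ranked constraint.

    min() over tuples realizes strict domination: a single violation of a
    higher-ranked constraint outweighs any number of lower-ranked ones.
    """
    return min(tableau, key=lambda c: tuple(tableau[c]))

# Toy tableau: segment hypotheses for a noisy stretch of speech signal
tableau = {
    "/pa/": [0, 1, 0],   # one place-faithfulness violation
    "/ba/": [1, 0, 0],   # one (higher-ranked) voicing violation
    "/pra/": [0, 0, 2],  # only low-ranked violations
}
print(optimal_candidate(tableau))
```

Here `/pra/` wins because its violations all fall on the lowest-ranked constraint, even though it has more of them.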

Speech Developmental Link between Intelligibility and Phonemic Contrasts, and Acoustic Features in Putonghua-Speaking Children (표준 중국어의 구어 명료도와 음소 대조 및 음향 자질의 발달적 상관관계)

  • Han, Ji-Yeon
    • MALSORI
    • /
    • no.59
    • /
    • pp.1-11
    • /
    • 2006
  • This study was designed to investigate the relationship between intelligibility, phonemic contrasts, and acoustic features in terms of speech development. A total of 212 Putonghua-speaking children participated in the experiment. Three phonemic contrasts were significantly related to speech intelligibility: aspirated vs. fricative, retroflex vs. non-retroflex, and front vs. back nasal vowel. A regression analysis showed that 88% of speech intelligibility could be predicted by these phonemic contrasts. Two acoustic values were significantly related to the intelligibility of the Putonghua-speaking children's speech: the voice onset time of unaspirated stops and the duration of frication noise in fricatives.

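The regression step in the abstract, predicting intelligibility from contrast accuracy, can be sketched with ordinary least squares in scikit-learn. The per-child contrast scores and the coefficients below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 212  # matching the number of children in the study

# Toy accuracy scores (0-1) on three contrasts: aspirated/fricative,
# retroflex/non-retroflex, front/back nasal vowel
X = rng.uniform(0, 1, size=(n, 3))
# Toy intelligibility driven mostly by the three contrast scores
intelligibility = (0.2 + 0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
                   + rng.normal(0, 0.03, n))

model = LinearRegression().fit(X, intelligibility)
r2 = model.score(X, intelligibility)  # proportion of variance explained
print(f"R^2 = {r2:.2f}")
```

An R-squared near the paper's 88% would arise when, as simulated here, the contrast scores account for most of the intelligibility variance.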

An SVM-based physical fatigue diagnostic model using speech features (음성 특징 파라미터를 이용한 SVM 기반 육체피로도 진단모델)

  • Kim, Tae Hun;Kwon, Chul Hong
    • Phonetics and Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.17-22
    • /
    • 2016
  • This paper devises a model for diagnosing physical fatigue from speech features, presenting a machine learning method based on an SVM algorithm. The parameters used include significant speech parameters, questionnaire responses, and bio-signal parameters obtained before and after a fatigue-inducing experiment. Performance rates of 95%, 100%, and 90%, respectively, were observed from the proposed model using the three types of fatigue-related parameters. These results suggest that the proposed method can serve as a physical fatigue diagnostic model and that fatigue can be readily diagnosed by speech technology.
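The SVM-based diagnostic setup can be sketched with scikit-learn's `SVC` on synthetic pre-/post-fatigue feature vectors. The paper fuses speech, questionnaire, and bio-signal parameters; only a generic speech-feature block is mocked up here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy features per recording, e.g. [F0 shift, jitter, shimmer] (illustrative)
rested = rng.normal(0.0, 1.0, size=(60, 3))    # label 0: before the task
fatigued = rng.normal(1.5, 1.0, size=(60, 3))  # label 1: after the task
X = np.vstack([rested, fatigued])
y = np.array([0] * 60 + [1] * 60)

# RBF-kernel SVM, evaluated with 5-fold cross-validation
clf = SVC(kernel="rbf")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy = {acc:.2f}")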

Measuring Correlation between Mental Fatigues and Speech Features (정신피로와 음성특징과의 상관관계 측정)

  • Kim, Jungin;Kwon, Chulhong
    • Phonetics and Speech Sciences
    • /
    • v.6 no.2
    • /
    • pp.3-8
    • /
    • 2014
  • This paper examines how mental fatigue affects the human voice. A monotonous task was designed to increase the feeling of fatigue, along with a set of subjective questionnaires for rating it. The questionnaire responses confirmed that the designed task was indeed monotonous. To investigate the statistical relationship between fatigue and the speech features extracted from the collected speech data, a paired-samples t-test was used. The statistical analysis shows that the speech parameters most closely related to fatigue are the first formant bandwidth, jitter, H1-H2, cepstral peak prominence, and harmonics-to-noise ratio. The experimental results indicate that the voice becomes breathier as mental fatigue increases.
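The paired-samples t-test used to relate fatigue to speech features can be sketched with SciPy. The jitter values below are synthetic; in the study each feature would be measured on the same speakers before and after the monotonous task.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy jitter (%) measured on the same 30 speakers before and after the task
jitter_before = rng.normal(0.50, 0.05, 30)
jitter_after = jitter_before + rng.normal(0.10, 0.05, 30)  # systematic rise

# Paired-samples t-test: are the within-speaker differences nonzero?
t_stat, p_value = stats.ttest_rel(jitter_after, jitter_before)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

A small p-value, as simulated here, would indicate that the feature shifts systematically with fatigue rather than varying at random across speakers.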

Investigating an Automatic Method for Summarizing and Presenting a Video Speech Using Acoustic Features (음향학적 자질을 활용한 비디오 스피치 요약의 자동 추출과 표현에 관한 연구)

  • Kim, Hyun-Hee
    • Journal of the Korean Society for information Management
    • /
    • v.29 no.4
    • /
    • pp.191-208
    • /
    • 2012
  • Two fundamental aspects of speech summary generation are the extraction of key speech content and the style of presentation of the extracted synopses. We first investigated whether acoustic features (speaking rate, pitch pattern, and intensity) are equally important and, if not, which can be modeled most effectively to compute the significance of segments for lecture summarization. We found that intensity (the difference between maximum and minimum dB) is the most effective factor for speech summarization. We evaluated this intensity-based method against a keyword-based method, comparing both the quality of the resulting speech summaries and the similarity of the weight values the two methods assign to segments. We then investigated how to present speech summaries to viewers. In sum, for speech summarization, we suggest how to efficiently extract key segments from a speech video using acoustic features and then present the extracted segments to viewers.
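The intensity-based selection criterion described above can be sketched directly: score each segment by its dynamic range (maximum dB minus minimum dB) and keep the top-scoring segments as the summary. The segment boundaries and dB values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy per-frame intensity (dB) for 10 segments of 50 frames each,
# with varying dynamic range per segment
segments = [rng.normal(60, spread, 50) for spread in rng.uniform(1, 8, 10)]

def intensity_score(seg_db):
    """Dynamic range of a segment: max dB minus min dB."""
    return float(seg_db.max() - seg_db.min())

scores = [intensity_score(s) for s in segments]
# Keep the three segments with the widest dynamic range as the summary
top3 = sorted(range(len(segments)), key=lambda i: scores[i], reverse=True)[:3]
print("segments selected for the summary:", top3)
```

In a real pipeline the segments would come from a lecture video's audio track, and the selected indices would map back to time spans for playback.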

A Simple Speech/Non-speech Classifier Using Adaptive Boosting

  • Kwon, Oh-Wook;Lee, Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.3E
    • /
    • pp.124-132
    • /
    • 2003
  • We propose a new method for speech/non-speech classification based on the adaptive boosting (AdaBoost) algorithm, in order to detect speech for robust speech recognition. The method combines simple base classifiers through AdaBoost with a set of optimized speech features and spectral subtraction. Its key benefits are simple implementation, low computational complexity, and avoidance of the over-fitting problem. We validated the method by comparing its performance with the speech/non-speech classifier used in a standard voice activity detector. For speech recognition purposes, additional performance improvements were achieved by adopting new features, including speech band energies and MFCC-based spectral distortion. At the same false alarm rate, the method reduced miss errors by 20-50%.

Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study focuses on the automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure affected by features from multiple speech dimensions, yet most previous studies are restricted to features from a single dimension. To effectively capture the characteristics of the speech disorder, we extracted features from three speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, harmonics-to-noise ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th/75th percentiles), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains the percentage of correct phonemes (percentage of correct consonants/vowels/total phonemes) and the degree of vowel distortion (Vowel Space Area, Formant Centralized Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The results indicate that using features from all three speech dimensions gives the best result, with an F1-score of 80.15, compared to using features from just one or two dimensions. This implies that voice quality, prosody, and pronunciation features should all be considered in the automatic severity classification of dysarthria.
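The multi-dimension feature fusion at the heart of the abstract can be sketched by concatenating feature blocks before classification. The three synthetic blocks below stand in for the voice-quality, prosody, and pronunciation feature sets; the classifier and data are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 300
severity = rng.integers(0, 3, n)  # toy 3-level severity labels

# Each dimension carries partial, noisy information about severity
voice_quality = severity[:, None] + rng.normal(0, 1.5, (n, 5))
prosody = severity[:, None] + rng.normal(0, 1.5, (n, 7))
pronunciation = severity[:, None] + rng.normal(0, 1.5, (n, 4))

# Fuse all three dimensions into one feature vector per speaker
X = np.hstack([voice_quality, prosody, pronunciation])
X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)

pred = SVC().fit(X_tr, y_tr).predict(X_te)
f1 = f1_score(y_te, pred, average="macro")
print(f"macro F1 = {f1:.2f}")
```

Dropping one of the three blocks from `np.hstack` and re-running would reproduce the paper's ablation-style comparison of feature combinations.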

Perceptual Structure of Korean Consonants in High Vowel Contexts (고설 모음 환경에서 한국어 자음의 지각적 구조)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences
    • /
    • v.1 no.2
    • /
    • pp.95-103
    • /
    • 2009
  • We investigated the perceptual structure of Korean consonants by analyzing confusions among consonants in various vowel contexts. The 36 CV syllable types, combining 18 consonants and 2 vowels (/i/ and /u/), were presented with masking noise or at degraded intensity. The confusion data were analyzed by INDSCAL (Individual Difference Scaling), ADCLUS (Additive Clustering), and the probability of transmitted information, and the results were compared with those of a previous study using the /a/ vowel context (Bae and Kim, 2002). The overall results showed that the laryngeal features aspiration, lax, and tense are the most salient features in the perception of Korean consonants regardless of vowel context, but the perceptual saliency of place features varies across vowel conditions. In high-vowel (front and back) contexts, sibilant consonants were perceptually more salient than in low-vowel contexts. In back-vowel contexts, grave (labial and velar) consonants were perceptually salient. These findings imply that place features and vowel features strongly interact in speech perception as well as in speech production. All statistical measures from our confusion data confirmed that the perceptual structure of Korean consonants corresponds to the hierarchical structure suggested in feature geometry (Clements, 1991). We discuss the link between speech perception and production as a basis of phonology.

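The transmitted-information measure used in the confusion analysis can be sketched as the mutual information between presented and reported consonants, computed from a count matrix. The 3x3 matrix below is synthetic, not the study's 18-consonant data.

```python
import numpy as np

def transmitted_information(confusion):
    """Mutual information I(stimulus; response) in bits from a count matrix."""
    p = confusion / confusion.sum()                 # joint probabilities
    p_stim = p.sum(axis=1, keepdims=True)           # row marginals
    p_resp = p.sum(axis=0, keepdims=True)           # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (p_stim * p_resp))
    return float(np.nansum(terms))                  # nansum skips 0*log(0)

# Toy confusion counts: rows = presented consonant, cols = reported consonant
confusion = np.array([
    [80, 15, 5],
    [10, 85, 5],
    [5, 10, 85],
])
ti = transmitted_information(confusion)
print(f"transmitted information = {ti:.2f} bits")
```

Recomputing this measure on sub-matrices grouped by a feature (e.g. collapsing consonants that share aspiration) is how one quantifies how well that feature survives masking.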

Statistical Extraction of Speech Features Using Independent Component Analysis and Its Application to Speaker Identification

  • Jang, Gil-Jin;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4E
    • /
    • pp.156-163
    • /
    • 2002
  • We apply independent component analysis (ICA) for extracting an optimal basis to the problem of finding efficient features for representing speech signals of a given speaker The speech segments are assumed to be generated by a linear combination of the basis functions, thus the distribution of speech segments of a speaker is modeled by adapting the basis functions so that each source component is statistically independent. The learned basis functions are oriented and localized in both space and frequency, bearing a resemblance to Gabor wavelets. These features are speaker dependent characteristics and to assess their efficiency we performed speaker identification experiments and compared our results with the conventional Fourier-basis. Our results show that the proposed method is more efficient than the conventional Fourier-based features in that they can obtain a higher speaker identification rate.