Title/Summary/Keyword: Non-speech


Lexical Status and the Degree of /l/-darkening

  • Ahn, Miyeon
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.73-78
    • /
    • 2015
  • This study explores the degree of velarization of English word-final /l/ (i.e., /l/-darkness) according to lexical status, defined as whether a speech stimulus is treated as a word or a non-word. We examined the temporal and spectral properties of word-final /l/ in terms of duration and the F2-F1 frequency difference, varying the immediate pre-liquid vowels. The results showed that both temporal and spectral properties were contrastive across all vowel contexts: real words had shorter [l] durations and lower F2-F1 values than non-words. That is, /l/ is more heavily velarized in words than in non-words, which suggests that lexical status, i.e., whether language users encode the speech signal as a word or not, is deeply involved in their speech production.
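The darkness measure used here is the duration of the word-final [l] and its F2-F1 distance, with a smaller F2-F1 indicating a darker (more velarized) /l/. A minimal sketch of that measurement is given below, assuming the praat-parselmouth package and pre-labelled [l] boundaries; the file name, times, and analysis settings are illustrative, not the paper's.

```python
# Minimal sketch: duration and midpoint F2-F1 of a word-final [l] token.
# Assumptions (not from the paper): praat-parselmouth is used for formant
# tracking, and the [l] interval boundaries are already known (e.g., from a
# hand-labelled TextGrid).
import parselmouth  # pip install praat-parselmouth


def l_darkness(wav_path: str, l_start: float, l_end: float) -> dict:
    """Return duration (ms) and midpoint F2-F1 (Hz) for one [l] token."""
    sound = parselmouth.Sound(wav_path)
    formants = sound.to_formant_burg()            # default Burg formant tracking
    midpoint = (l_start + l_end) / 2.0
    f1 = formants.get_value_at_time(1, midpoint)  # first formant at the midpoint
    f2 = formants.get_value_at_time(2, midpoint)  # second formant at the midpoint
    return {
        "duration_ms": (l_end - l_start) * 1000.0,
        "f2_minus_f1_hz": f2 - f1,                # smaller value = darker /l/
    }


# Hypothetical usage: a word token is expected to show a shorter [l] and a
# smaller F2-F1 than a non-word token with the same pre-liquid vowel.
# print(l_darkness("feel_word.wav", l_start=0.42, l_end=0.51))
```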

Phonological Process and Word Recognition in Continuous Speech: Evidence from Coda-neutralization (음운 현상과 연속 발화에서의 단어 인지 - 종성중화 작용을 중심으로)

  • Kim, Sun-Mi;Nam, Ki-Chun
    • Phonetics and Speech Sciences
    • /
    • v.2 no.2
    • /
    • pp.17-25
    • /
    • 2010
  • This study explores whether Koreans exploit their native coda-neutralization process when recognizing words in Korean continuous speech. According to Korean phonological rules, the coda-neutralization process must apply before the liaison process whenever the latter occurs between 'words', so liaison consonants are coda-neutralized ones such as /b/, /d/, or /g/, rather than non-neutralized ones like /p/, /t/, /k/, /ʧ/, /ʤ/, or /s/. Consequently, if Korean listeners use their native coda-neutralization rules when processing speech input, word recognition will be hampered when non-neutralized consonants precede vowel-initial targets. Word-spotting and word-monitoring tasks were conducted in Experiments 1 and 2, respectively. In both experiments, listeners recognized words faster and more accurately when vowel-initial target words were preceded by coda-neutralized consonants than when preceded by coda non-neutralized ones. The results show that Korean listeners exploit the coda-neutralization process when processing their native spoken language.
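The rule ordering described in the abstract (coda neutralization applies first, then liaison resyllabifies the coda as the next word's onset) can be illustrated with a small mapping from underlying codas to the onsets listeners hear before a vowel-initial word. The symbol set and mapping below are a textbook-style simplification, not a table from the paper.

```python
# Illustration of "neutralize first, then liaise": an underlying coda surfaces
# before a vowel-initial word as a neutralized consonant (/b/, /d/, /g/) rather
# than /p/, /t/, /k/, /ʧ/, /ʤ/, or /s/. The mapping is a simplification for
# illustration and is not taken from the paper.
CODA_NEUTRALIZATION = {
    "p": "b",                                # labial obstruent   -> [b]
    "t": "d", "s": "d", "ʧ": "d", "ʤ": "d",  # coronal obstruents -> [d]
    "k": "g",                                # velar obstruent    -> [g]
}


def liaison_onset(coda: str) -> str:
    """Onset heard when a word-final coda precedes a vowel-initial word.

    Step 1: the coda is neutralized; step 2: liaison resyllabifies it as the
    onset of the following word, so the neutralized form is what listeners hear.
    """
    return CODA_NEUTRALIZATION.get(coda, coda)


# e.g. an underlying /s/ coda is heard as [d] before a vowel-initial target:
# liaison_onset("s") -> "d"
```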

A comparison of phonological error patterns in the single word and spontaneous speech of children with speech sound disorders (말소리장애 아동의 단어와 자발화 문맥의 음운오류패턴 비교)

  • Park, Kayeon;Kim, Soo-Jin
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.165-173
    • /
    • 2015
  • This study aimed to compare the phonological error patterns and PCC (Percentage of Correct Consonants) derived from single-word and spontaneous-speech contexts in children with speech sound disorders of unknown origin (SSD), examining the children's developmental and non-developmental phonological error patterns according to speech context. The subjects were 15 children with SSD aged 3 to 5 years. The single-word context used the 37 words of the APAC (Assessment of Phonology & Articulation for Children), and the spontaneous-speech context used 100 eojeol. There was no difference in PCC between the single-word and spontaneous-speech contexts. Developmental phonological error patterns that differed significantly between the two contexts were syllable deletion, word-medial onset deletion, liquid deletion, gliding, affrication, other fricative errors, tensing, and regressive assimilation. Non-developmental error patterns that differed significantly were backing, phoneme addition, and aspirating. The study showed that PCC did not differ between elicited single words and spontaneous conversation, while some phonological error patterns did differ between the two contexts. The more important intervention target is the error patterns of the spontaneous-speech context, for immediate generalization and improved overall intelligibility.
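PCC (Percentage of Correct Consonants) is the proportion of target consonants produced correctly, expressed as a percentage. A minimal sketch of the computation is given below, assuming the target and produced forms are already aligned phoneme-by-phoneme; the consonant inventory and the example are invented for illustration.

```python
# Minimal sketch of PCC (Percentage of Correct Consonants), assuming the target
# and produced forms are already aligned phoneme-by-phoneme (alignment itself is
# not shown). The consonant set and the example are invented for illustration.
CONSONANTS = set("pbtdkgmnlrsʃhŋʧʤ")  # hypothetical consonant inventory


def pcc(target: list[str], produced: list[str]) -> float:
    """Percentage of target consonants that were produced correctly."""
    pairs = [(t, p) for t, p in zip(target, produced) if t in CONSONANTS]
    if not pairs:
        return 0.0
    correct = sum(1 for t, p in pairs if t == p)
    return 100.0 * correct / len(pairs)


# Hypothetical example with a stopping error (/s/ -> [t]) on the first consonant:
# pcc(list("sandal"), list("tandal")) -> 75.0   (3 of 4 consonants correct)
```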

Optimizing Multiple Pronunciation Dictionary Based on a Confusability Measure for Non-native Speech Recognition (타언어권 화자 음성 인식을 위한 혼잡도에 기반한 다중발음사전의 최적화 기법)

  • Kim, Min-A;Oh, Yoo-Rhee;Kim, Hong-Kook;Lee, Yeon-Woo;Cho, Sung-Eui;Lee, Seong-Ro
    • MALSORI
    • /
    • no.65
    • /
    • pp.93-103
    • /
    • 2008
  • In this paper, we propose a method for optimizing a multiple pronunciation dictionary used for modeling pronunciation variations of non-native speech. The proposed method removes some confusable pronunciation variants in the dictionary, resulting in a reduced dictionary size and less decoding time for automatic speech recognition (ASR). To this end, a confusability measure is first defined based on the Levenshtein distance between two different pronunciation variants. Then, the number of phonemes for each pronunciation variant is incorporated into the confusability measure to compensate for ASR errors due to words of a shorter length. We investigate the effect of the proposed method on ASR performance, where Korean is selected as the target language and Korean utterances spoken by Chinese native speakers are considered as non-native speech. It is shown from the experiments that an ASR system using the multiple pronunciation dictionary optimized by the proposed method can provide a relative average word error rate reduction of 6.25%, with 11.67% less ASR decoding time, as compared with that using a multiple pronunciation dictionary without the optimization.
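The abstract defines the confusability measure from the Levenshtein distance between two pronunciation variants and then incorporates the number of phonemes so that short words are not unduly penalized. A sketch of such a measure follows; since the abstract does not give the exact combination of distance and length, the normalization used here (distance divided by the longer variant's length) is an assumption for illustration only.

```python
# Sketch of a phone-level confusability measure in the spirit of the abstract:
# Levenshtein distance between two pronunciation variants, adjusted by the
# number of phonemes. The normalization below is an assumption, not the paper's
# exact formula.
def levenshtein(a: list[str], b: list[str]) -> int:
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        curr = [i]
        for j, pb in enumerate(b, start=1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


def confusability(variant_a: list[str], variant_b: list[str]) -> float:
    """Higher values mean the two pronunciation variants are easier to confuse."""
    length = max(len(variant_a), len(variant_b))
    if length == 0:
        return 1.0
    return 1.0 - levenshtein(variant_a, variant_b) / length


# Hypothetical pruning rule: if two variants of the same word exceed a
# confusability threshold, keep only one of them, which shrinks the dictionary
# and the ASR decoding time.
# confusability(list("kamsa"), list("kamsha")) -> ~0.83
```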

Feature Parameter Extraction and Speech Recognition Using Matrix Factorization (Matrix Factorization을 이용한 음성 특징 파라미터 추출 및 인식)

  • Lee Kwang-Seok;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.7
    • /
    • pp.1307-1311
    • /
    • 2006
  • In this paper, we propose a new speech feature parameter that uses matrix factorization to obtain part-based features of the speech spectrum. The proposed parameter is an effective, dimensionally reduced representation of multi-dimensional feature data, obtained through a matrix factorization in which all matrix elements are constrained to be non-negative. The reduced feature data represent part-based features of the input. We verify the usefulness of the NMF (Non-negative Matrix Factorization) algorithm for speech feature extraction by applying it to the Mel-scaled filter bank output and using the result as the feature parameter. Recognition experiments confirm that the proposed feature parameter outperforms the commonly used MFCC (Mel-Frequency Cepstral Coefficient) features.
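A minimal sketch of the described pipeline, NMF applied to a Mel-scaled filter bank output so that the reduced activations serve as feature parameters, is given below, assuming librosa and scikit-learn; the file name, number of Mel bands, and number of NMF components are illustrative choices, not the paper's settings.

```python
# Minimal sketch of NMF-based features from a Mel filter bank output, assuming
# librosa and scikit-learn. File name, band count, and component count are
# illustrative, not the paper's settings.
import librosa
from sklearn.decomposition import NMF

# Mel-scaled filter bank output: a non-negative (n_mels x n_frames) matrix.
y, sr = librosa.load("utterance.wav", sr=16000)            # hypothetical file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)

# Factorize mel ~= W @ H with all entries non-negative: the columns of W act as
# part-based spectral building blocks, and H gives reduced per-frame features.
model = NMF(n_components=12, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(mel)   # (40 x 12) basis spectra
H = model.components_          # (12 x n_frames) activations

features = H.T                 # one 12-dimensional feature vector per frame
print(features.shape)
```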

The Effects of Korean Coda-neutralization Process on Word Recognition in English (한국어의 종성중화 작용이 영어 단어 인지에 미치는 영향)

  • Kim, Sun-Mi;Nam, Ki-Chun
    • Phonetics and Speech Sciences
    • /
    • v.2 no.1
    • /
    • pp.59-68
    • /
    • 2010
  • This study addresses the issue of whether Korean(L1)-English(L2) non-proficient bilinguals are affected by the native coda-neutralization process when recognizing words in English continuous speech. Korean phonological rules require that if liaison occurs between 'words', the coda-neutralization process must apply before the liaison process, which results in liaison consonants being coda-neutralized ones such as /b/, /d/, or /g/, rather than non-neutralized ones like /p/, /t/, /k/, /ʧ/, /ʤ/, or /s/. Consequently, if Korean listeners apply their native coda-neutralization rules to English speech input, word detection will be easier when coda-neutralized consonants precede target words than when non-neutralized ones do. Word-spotting and word-monitoring tasks were used in Experiments 1 and 2, respectively. In both experiments, listeners detected words faster and more accurately when vowel-initial target words were preceded by coda-neutralized consonants than when preceded by coda non-neutralized ones. The results show that Korean listeners exploit their native phonological process when processing English, irrespective of whether the native process is appropriate or not.

The Interlanguage Speech Intelligibility Benefit for Listeners (ISIB-L): The Case of English Liquids

  • Lee, Joo-Kyeong;Xue, Xiaojiao
    • Phonetics and Speech Sciences
    • /
    • v.3 no.1
    • /
    • pp.51-65
    • /
    • 2011
  • This study attempts to investigate the interlanguage speech intelligibility benefit for listeners (ISIB-L), examining Chinese talkers' production of English liquids and its perception by native listeners and by non-native Chinese and Korean listeners. An Accent Judgment Task was conducted to measure the non-native talkers' and listeners' phonological proficiency, and two proficiency groups (high and low) participated in the experiment. The English liquids /l/ and /r/ produced by Chinese talkers were considered in terms of position (syllable-initial and final), context (segment, word, and sentence), and lexical density (minimal vs. non-minimal pair) to see whether these factors play a role in ISIB-L. Results showed that both matched and mismatched interlanguage speech intelligibility benefit for listeners occurred except for initial /l/. Non-native Chinese and Korean listeners, though only those with high proficiency, were more accurate at identifying initial /r/, final /l/, and final /r/, but initial /l/ was significantly more intelligible to native listeners than to non-native listeners. There was evidence of contextual and lexical density effects on ISIB-L. No ISIB-L was demonstrated in the sentence context, but both matched and mismatched ISIB-L were observed in the word context; this finding held true only for high-proficiency listeners. Listeners recognized the targets better in the non-minimal-pair (sparser density) environment than in the minimal-pair (denser) environment. These findings suggest that ISIB-L for English liquids is influenced by talkers' and listeners' proficiency, syllable position in association with L1 and L2 phonological structure, context, and word neighborhood density.

Classification of Pathological Voice from ARS using Neural Network (신경회로망을 이용한 ARS 장애음성의 식별에 관한 연구)

  • Jo, C.W.;Kim, K.I.;Kim, D.H.;Kwon, S.B.;Kim, K.R.;Kim, Y.J.;Jun, K.R.;Wang, S.G.
    • Speech Sciences
    • /
    • v.8 no.2
    • /
    • pp.61-71
    • /
    • 2001
  • Speech material collected from an ARS (Automatic Response System) was analyzed and classified into disease and non-disease states. The material includes 11 different kinds of diseases. Along with the ARS speech, DAT (Digital Audio Tape) speech was collected in parallel as a benchmark. The speech material was analyzed with tools developed in our laboratory, which provide improved and robust estimates of the obtained parameters. A multi-layered neural network was used to classify speech into disease and non-disease classes. Three different combinations of 3, 6, and 12 parameters were tested to obtain the proper network size and the best performance. From the experiment, a classification rate of 92.5% was obtained.
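A minimal sketch of this kind of disease / non-disease classification with a small multi-layer network is given below, assuming scikit-learn; the feature dimensions, hidden-layer size, and randomly generated data are placeholders, since the abstract does not specify the actual topology or parameters.

```python
# Minimal sketch of a disease / non-disease voice classifier with a small
# multi-layer network, assuming scikit-learn. The 12-parameter feature matrix,
# hidden-layer size, and random data are placeholders for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 utterances x 12 acoustic parameters
y = rng.integers(0, 2, size=200)      # 1 = disease, 0 = non-disease

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = make_pipeline(
    StandardScaler(),                 # scale parameters before the network
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
clf.fit(X_train, y_train)
print("classification rate:", clf.score(X_test, y_test))
```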
