• Title/Summary/Keyword: Non-speech

The Contribution of Prosody to the Foreign Accent of Chinese Talkers' English Speech

  • Liu, Xing; Lee, Joo-Kyeong
    • Phonetics and Speech Sciences, v.4 no.3, pp.59-73, 2012
  • This study investigates the contribution of prosody to foreign accent in Chinese speakers' English production by examining synthesized speech in which native and non-native talkers' prosody and segments were crossed. For the foreign accent rating stimuli, we transplanted gender-matched native speakers' prosody onto non-native talkers' segments and vice versa, using the TD-PSOLA algorithm. Eight native English listeners judged the foreign accent and comprehensibility of the transplanted stimuli. Results showed that the synthesized stimuli were perceived as having a stronger foreign accent, regardless of speaker proficiency, when English speakers' prosody was crossed with Chinese speakers' segments. This suggests that segments contribute more than prosody to native listeners' evaluation of foreign accent. When transplanted with English speakers' segments, Chinese speakers' prosody showed a proficiency difference in duration rather than pitch: a stronger foreign accent was detected when low-proficiency Chinese speakers' duration was crossed with English speakers' segments. This indicates that prosody, and more specifically duration, plays a role, though overall the prosodic contribution is not as significant as that of segments. According to a post-hoc acoustic analysis, the temporal features that made the duration parameter prominent, as opposed to pitch, were speaking rate, pause duration, and pause frequency. Finally, foreign accent and comprehensibility showed no significant correlation: native listeners had no difficulty understanding highly foreign-accented speech.
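
Prosody transplantation of this kind can be sketched with Praat's Manipulation objects through the parselmouth Python library, whose overlap-add resynthesis is TD-PSOLA-based. This is a minimal, hypothetical sketch, not the paper's pipeline: the file names are placeholders, only the pitch tier is transplanted (a full replication would also warp durations with a duration tier), and the two recordings are assumed to be time-aligned productions of the same sentence.

```python
# Minimal sketch of pitch-contour transplantation in the spirit of TD-PSOLA
# manipulation, using Praat's Manipulation objects via parselmouth.
# File names are hypothetical placeholders.
import parselmouth
from parselmouth.praat import call

native = parselmouth.Sound("native_speaker.wav")    # prosody donor
learner = parselmouth.Sound("learner_speaker.wav")  # segment donor

# Manipulation objects (time step 0.01 s, pitch range 75-600 Hz).
manipulation = call(learner, "To Manipulation", 0.01, 75, 600)
native_manip = call(native, "To Manipulation", 0.01, 75, 600)

# Impose the native speaker's pitch tier on the learner's segments.
pitch_tier = call(native_manip, "Extract pitch tier")
call([pitch_tier, manipulation], "Replace pitch tier")

# Resynthesize with Praat's overlap-add (TD-PSOLA) method.
hybrid = call(manipulation, "Get resynthesis (overlap-add)")
hybrid.save("prosody_transplanted.wav", "WAV")
```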

Target Speech Segregation Using Non-parametric Correlation Feature Extraction in CASA System (CASA 시스템의 비모수적 상관 특징 추출을 이용한 목적 음성 분리)

  • Choi, Tae-Woong; Kim, Soon-Hyub
    • The Journal of the Acoustical Society of Korea, v.32 no.1, pp.79-85, 2013
  • Feature extraction in a CASA system exploits time continuity and channel similarity, building a correlogram of auditory elements for this purpose. When channel similarity is measured with the cross-correlation coefficient, quantifying the correlation carries a high computational cost. This paper therefore proposes a feature extraction method using a non-parametric correlation coefficient to reduce the computational complexity, and tests target speech segregation with a CASA system. Measuring SNR (signal-to-noise ratio) to evaluate target speech segregation performance, the proposed method shows a slight improvement of 0.14 dB on average over the conventional method.
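
The abstract does not say which rank statistic is used, so as one representative non-parametric correlation, here is a minimal sketch contrasting the conventional (Pearson) cross-correlation coefficient with Spearman's rank correlation for scoring the similarity of adjacent filterbank-channel envelopes; the envelopes are synthetic stand-ins for real auditory-filterbank output.

```python
# Parametric vs. non-parametric channel-similarity scores for a CASA-style
# front end. Envelope arrays are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
env_ch1 = rng.random(400)                  # envelope of channel k
env_ch2 = env_ch1 + 0.1 * rng.random(400)  # correlated neighbour, channel k+1

# Conventional similarity: Pearson cross-correlation coefficient.
r_parametric, _ = pearsonr(env_ch1, env_ch2)

# Non-parametric similarity: Spearman's rho replaces samples by their ranks,
# avoiding products of raw amplitudes.
r_nonparametric, _ = spearmanr(env_ch1, env_ch2)

print(f"Pearson r = {r_parametric:.3f}, Spearman rho = {r_nonparametric:.3f}")
```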

The Interlanguage Speech Intelligibility Benefit for Korean Learners of English: Production of English Front Vowels

  • Han, Jeong-Im; Choi, Tae-Hwan; Lim, In-Jae; Lee, Joo-Kyeong
    • Phonetics and Speech Sciences, v.3 no.2, pp.53-61, 2011
  • The present work is a follow-up to Han, Choi, Lim and Lee (2011), which found an asymmetry in the source segments eliciting the interlanguage speech intelligibility benefit (ISIB): vowels that did not match any vowel of Korean were likely to elicit more ISIB than matched vowels. To identify the source of the stronger ISIB for non-matched vowels, acoustic analyses of the stimuli were performed. Two pairs of English front vowels, [i] vs. [ɪ] and [ɛ] vs. [æ], were recorded by native English talkers and by two groups of Korean learners differing in English proficiency, and vowel duration and the frequencies of the first two formants (F1, F2) were measured. The results demonstrated that the non-matched vowels [ɪ] and [æ] produced by Korean talkers showed acoustic characteristics deviating from those of the natives, with longer durations and formant values closer to the matched vowels [i] and [ɛ] than those of the English natives. Combining the acoustic measurements of the present study with the word identification results of Han et al. (2011), we suggest that the relatively better word identification performance of Korean talker/listener pairs compared with native English talker/listener pairs is associated with the shared interlanguage of Korean talkers and listeners.
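
Measurements like these (vowel duration plus midpoint F1/F2) can be sketched with Praat via the parselmouth library; the file name and interval boundaries below are hypothetical placeholders.

```python
# Minimal sketch: vowel duration and midpoint F1/F2 via Praat/parselmouth.
import parselmouth

snd = parselmouth.Sound("vowel_token.wav")
vowel_start, vowel_end = 0.12, 0.25           # segment boundaries in seconds
duration_ms = (vowel_end - vowel_start) * 1000.0

# Burg-method formant tracking; query F1 and F2 at the vowel midpoint.
formants = snd.to_formant_burg()
midpoint = (vowel_start + vowel_end) / 2.0
f1 = formants.get_value_at_time(1, midpoint)
f2 = formants.get_value_at_time(2, midpoint)

print(f"duration = {duration_ms:.0f} ms, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```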

Speech Recognition in Noisy Environments Using Noise Spectrum Estimation Based on the Histogram Technique (히스토그램 처리방법에 의한 잡음 스펙트럼 추정을 이용한 잡음환경에서의 음성인식)

  • Kwon, Young-Uk; Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea, v.16 no.5, pp.68-75, 1997
  • Spectral subtraction is a widely used preprocessing technique for speech recognition in additive noise environments, but it requires a good estimate of the noise power spectrum. In this paper, we employ a histogram technique to estimate the noise spectrum. This technique has advantages over other noise estimation methods in that it requires no speech/non-speech detection and can track slowly varying noise spectra. In speaker-independent isolated word recognition experiments in both colored Gaussian and car noise environments under various SNR conditions, the histogram-based spectral subtraction method yields performance superior to spectral subtraction with the conventional noise estimation method, which averages the spectra of initial frames during the non-speech period.
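
A minimal numpy sketch of the idea follows, under the assumption that "histogram technique" means taking, for each frequency bin, the mode of the power histogram across frames as the noise estimate: because the noise level dominates each bin's histogram, no explicit speech/non-speech detection is needed.

```python
# Histogram-based noise spectrum estimation followed by spectral subtraction.
import numpy as np

def estimate_noise_by_histogram(power_spec, n_bins=40):
    """power_spec: (n_frames, n_freq) power spectrogram."""
    n_freq = power_spec.shape[1]
    noise = np.empty(n_freq)
    log_power = np.log10(power_spec + 1e-12)
    for k in range(n_freq):
        counts, edges = np.histogram(log_power[:, k], bins=n_bins)
        mode = np.argmax(counts)              # most frequent power level
        noise[k] = 10.0 ** ((edges[mode] + edges[mode + 1]) / 2.0)
    return noise

def spectral_subtraction(power_spec, noise, floor=0.01):
    # Subtract the noise estimate and clamp to a small spectral floor.
    return np.maximum(power_spec - noise, floor * noise)

# Hypothetical usage with a precomputed power spectrogram:
# clean = spectral_subtraction(power_spec, estimate_noise_by_histogram(power_spec))
```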

Design of Speech Enhancement U-Net for Embedded Computing (임베디드 연산을 위한 잡음에서 음성추출 U-Net 설계)

  • Kim, Hyun-Don
    • IEMEK Journal of Embedded Systems and Applications, v.15 no.5, pp.227-234, 2020
  • In this paper, we propose wav-U-Net to improve speech enhancement in heavily noisy environments; it incorporates three principal techniques. First, as input data, we use 128 modified mel-scale filter banks, which reduce the computational burden compared with 512 frequency bins. The mel scale mimics the non-linear frequency resolution of the human ear, being more discriminative at lower frequencies and less discriminative at higher frequencies; it is therefore a suitable feature with respect to both performance and computing power, since the proposed network focuses on speech signals. Second, we add a simple ResNet as pre-processing, which helps the proposed network produce clear speech estimates and suppress high-frequency noise. Finally, the proposed U-Net model shows significant performance regardless of the kind of noise. In particular, despite using a single channel, it deals well with non-stationary noises whose frequency properties change dynamically, and it can estimate speech from noisy signals even in extremely noisy environments where the noise is much louder than the speech (below 0 dB SNR). The performance of the proposed wav-U-Net improved by about 200% on SDR and 460% on NSDR compared with Jansson's conventional wav-U-Net. The processing time of our wav-U-Net with 128 modified mel-scale filter banks was also about 2.7 times faster than that of the common wav-U-Net with 512 frequency bins as input.
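
The input-side reduction described above, collapsing a ~512-bin spectrum to 128 mel-scale bands, can be sketched with librosa; the parameter values are illustrative assumptions, not the paper's exact configuration.

```python
# Collapsing an STFT power spectrum to 128 mel bands with librosa.
import librosa
import numpy as np

sr = 16000
y = np.random.randn(sr)          # one second of noise as a stand-in signal

# 1024-point STFT gives 513 frequency bins per frame.
spec = np.abs(librosa.stft(y, n_fft=1024)) ** 2

# 128-band mel filterbank: denser filters at low frequencies, coarser at
# high frequencies, mimicking non-linear human frequency resolution.
mel_fb = librosa.filters.mel(sr=sr, n_fft=1024, n_mels=128)
mel_spec = mel_fb @ spec         # (128, n_frames)

print(spec.shape, "->", mel_spec.shape)
```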

Would Wernicke's Aphasic Speech Vary with Auditory Comprehension Abilities and/or Lesion Loci?

  • Kim, Hyang-Hee; Lee, Young-Mi; Na, Duk-L.; Chung, Chin-Sang; Lee, Kwang-Ho
    • Speech Sciences, v.13 no.1, pp.69-83, 2006
  • The speech of Wernicke's aphasia is characterized by errors such as empty speech, jargon, paraphasia, and fillers. However, not all of these errors are observed in every patient, presumably because of differences in auditory comprehension (AC) abilities and/or lesion loci. The purpose of this study was thus to clarify the speech characteristics of Wernicke's aphasics according to AC level (better vs. worse) and lesion locus (Wernicke's area, WA, vs. non-Wernicke's area, NWA). The authors divided 21 Wernicke's aphasic patients into four groups based on AC level and lesion locus. The results showed that the four groups differed only in CIU (Correct Information Unit) rate: the groups with better AC had higher CIU rates than the groups with worse AC, regardless of lesion locus (WA or NWA). It was therefore concluded that CIU rate, the differentiating speech variable, was most likely related to AC level but not to lesion locus.
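
The abstract does not give its exact formula, but CIU analysis commonly follows a Nicholas and Brookshire style definition, with CIUs expressed as a percentage of the words produced; a minimal sketch under that assumption, with illustrative counts:

```python
# CIU rate as a percentage of words produced (assumed definition).
def ciu_rate(n_cius: int, n_words: int) -> float:
    """Percentage of produced words that are correct information units."""
    if n_words == 0:
        return 0.0
    return 100.0 * n_cius / n_words

# e.g., 42 CIUs in a 120-word picture-description sample:
print(f"{ciu_rate(42, 120):.1f}% CIU")   # 35.0% CIU
```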

Effects of Prosodic Strengthening on the Production of English High Front Vowels /i, ɪ/ by Native vs. Non-Native Speakers (원어민과 비원어민의 영어 전설 고모음 /i, ɪ/ 발화에 나타나는 운율 강화 현상)

  • Kim, Sahyang; Hur, Yuna; Cho, Taehong
    • Phonetics and Speech Sciences, v.5 no.4, pp.129-136, 2013
  • This study investigated how acoustic characteristics (i.e., duration, F1, F2) of English high front vowels /i, ɪ/ are modulated by boundary- and prominence-induced strengthening in native vs. non-native (Korean) speech production. The study also examined how the durational difference in vowels due to the voicing of a following consonant (i.e., voiced vs. voiceless) is modified by prosodic strengthening in two different (native vs. non-native) speaker groups. Five native speakers of Canadian English and eight Korean learners of English (intermediate-advanced level) produced 8 minimal pairs with the CVC sequence (e.g., 'beat'-'bit') in varying prosodic contexts. Native speakers distinguished the two vowels in terms of duration, F1, and F2, whereas non-native speakers only showed durational differences. The two groups were similar in that they maximally distinguished the two vowels when the vowels were accented (F2, duration), while neither group showed boundary-induced strengthening in any of the three measurements. The durational differences due to the voicing of the following consonant were also maximized when accented. The results are discussed further in terms of phonetics-prosody interface in L2 production.

A Robust Speech/Non-Speech Decision Using Voiced Characteristics of Speech (음성의 유성음 특성을 이용한 음성/비음성 판별 방법)

  • Lee, Sung-Joo; Jung, Ho-Young; Lee, Yun-Keun; Kim, Hyung-Soon
    • Annual Conference of KIPS, 2007.05a, pp.411-412, 2007
  • From the standpoint of users of an automatic speech recognition system, the Push-To-Talk (PTT) scheme, which requires pressing a button every time speech is input, is quite cumbersome. In applications where PTT itself is impractical, such as when the user speaks from a distance, a Non-Push-To-Talk (NON-PTT) scheme becomes necessary. Speech pre-processing for a NON-PTT scheme requires a speech/non-speech decision technique that can isolate speech from the input signal, yet distinguishing speech in everyday noise environments is very difficult. This paper proposes a speech/non-speech decision method that is robust in typical household noise environments, exploiting the voiced characteristics of speech: any stretch of speech longer than a certain duration contains a voiced interval of a certain minimum length, so if voiced intervals can be detected reliably in noise, this property can be used to decide whether a detected signal is speech. To this end, we propose eleven voiced-sound features that remain detectable in household noise, together with a speech/non-speech decision method based on them. To evaluate the proposed method, we integrated it with a speech end-point detector and ran speech/non-speech decision tests; the method rejected more than 80% of non-speech in harsh noise environments.
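
As a minimal sketch of the voicing-based decision idea, using a single autocorrelation feature rather than the paper's eleven, with illustrative thresholds: frames whose normalized autocorrelation has a strong peak in the human pitch range count as voiced, and a segment passes as speech only if it contains a sufficiently long voiced run.

```python
# Toy speech/non-speech decision from voiced-frame runs.
import numpy as np

def is_voiced(frame, sr, fmin=75.0, fmax=400.0, threshold=0.4):
    """Voiced if the normalized autocorrelation peaks in the pitch range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return False
    ac = ac / ac[0]                          # normalize by zero-lag energy
    lo, hi = int(sr / fmax), int(sr / fmin)  # candidate pitch-period lags
    return ac[lo:hi].max() > threshold

def is_speech(frames, sr, min_voiced_run=10):
    """frames: iterable of 1-D arrays (e.g., 25 ms windows at 16 kHz)."""
    run = best = 0
    for frame in frames:
        run = run + 1 if is_voiced(frame, sr) else 0
        best = max(best, run)
    return best >= min_voiced_run
```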

State Encoding of Hidden Markov Linear Prediction Models

  • Krishnamurthy, Vikram; Poor, H. Vincent
    • Journal of Communications and Networks, v.1 no.3, pp.153-157, 1999
  • In this paper, we derive finite-dimensional non-linear filters for optimally reconstructing speech signals in Switched Prediction vocoders, Code Excited Linear Prediction (CELP) and Differential Pulse Code Modulation (DPCM). Our filter is an extension of the Hidden Markov filter.
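
For context, the standard Hidden Markov filter that the paper extends is, in generic notation (ours, not the paper's), the recursion $\hat{\pi}_{k+1}(j) \propto b_j(y_{k+1})\sum_i a_{ij}\,\hat{\pi}_k(i)$, normalized so that $\sum_j \hat{\pi}_{k+1}(j)=1$, where $a_{ij}$ are the transition probabilities of the hidden Markov chain, $b_j(\cdot)$ is the observation likelihood in state $j$, and $\hat{\pi}_k(i)=P(x_k=i \mid y_1,\ldots,y_k)$ is the filtered state posterior. The extension derived in the paper adapts this recursion to the linear-prediction observation models used by the listed vocoders.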

Relationship between Speech Perception in Noise and Phonemic Restoration of Speech in Noise in Individuals with Normal Hearing

  • Vijayasarathy, Srikar; Barman, Animesh
    • Journal of Audiology & Otology, v.24 no.4, pp.167-173, 2020
  • Background and Objectives: Top-down restoration of distorted speech, tapped as phonemic restoration of speech in noise, may be a useful tool for understanding the robustness of perception in adverse listening situations. However, the relationship between phonemic restoration and speech perception in noise is not empirically clear. Subjects and Methods: Twenty adults (40-55 years) with normal audiometric findings took part in the study. Sentence perception in noise was studied at various signal-to-noise ratios (SNRs) to estimate the SNR yielding a 50% score. Performance was also measured for sentences interrupted with silence and for sentences interrupted by speech noise at -10, -5, 0, and 5 dB SNRs. The score in the silent-interruption condition was subtracted from the score in each noise-interruption condition to determine the phonemic restoration magnitude. Results: Fairly robust improvements in speech intelligibility were found when the sentences were interrupted with speech noise instead of silence. The improvement with increasing noise level was non-monotonic and reached a maximum at -10 dB SNR. A significant correlation was found between speech perception in noise performance and phonemic restoration of sentences interrupted with -10 dB SNR speech noise. Conclusions: It is possible that perception of speech in noise is associated with top-down processing of speech, tapped as phonemic restoration of interrupted speech. More research with a larger sample size is indicated, since restoration is affected by the type of speech material and noise used, age, working memory, and linguistic proficiency, and shows large individual variability.
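
The restoration metric described above is a simple difference score; a minimal sketch with illustrative placeholder numbers:

```python
# Phonemic restoration magnitude: noise-interrupted intelligibility minus
# silent-interruption intelligibility, per SNR. Scores are placeholders.
silence_score = 45.0                                     # % correct, silent gaps
noise_scores = {-10: 68.0, -5: 63.0, 0: 58.0, 5: 52.0}   # % correct by SNR (dB)

restoration = {snr: score - silence_score for snr, score in noise_scores.items()}
for snr, magnitude in sorted(restoration.items()):
    print(f"SNR {snr:+d} dB: restoration magnitude = {magnitude:.1f} points")
```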