• Title/Summary/Keyword: Speech Interference

Interference Suppression Using Principal Subspace Modification in Multichannel Wiener Filter and Its Application to Speech Recognition

  • Kim, Gi-Bak
    • ETRI Journal / v.32 no.6 / pp.921-931 / 2010
  • It has been shown that the principal subspace-based multichannel Wiener filter (MWF) outperforms the conventional MWF in suppressing interference when there is a single target source. The principal subspace estimates the acoustic transfer function up to a scaling factor, which allows efficient estimation of the target speech component. However, as the input signal-to-interference ratio (SIR) becomes lower, the principal subspace method incurs larger errors in estimating the acoustic transfer function, degrading interference suppression. To alleviate this problem, a principal subspace modification method was proposed in previous work; it reduces the estimation error of the acoustic transfer function vector at low SIRs. In this work, a frequency-band-dependent interpolation technique is further employed for the principal subspace modification. A speech recognition test conducted with the Sphinx-4 system demonstrates the practical usefulness of the proposed method as front-end processing for a speech recognizer in a distant-talking, interferer-present environment.
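
The principal-subspace idea above can be illustrated with a minimal numpy sketch (synthetic data and parameter values are my own assumptions, not the paper's): for a single target source, the dominant eigenvector of the multichannel spatial covariance estimates the acoustic transfer function (ATF) up to a scaling factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M microphones, one target source, weak interference.
M, N = 4, 5000
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # true ATF (scale unknown)
s = rng.standard_normal(N)                                  # target source signal
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
x = np.outer(a, s) + noise                                  # microphone signals

# Principal subspace: dominant eigenvector of the spatial covariance matrix.
R = x @ x.conj().T / N
eigvals, eigvecs = np.linalg.eigh(R)                        # ascending eigenvalues
a_hat = eigvecs[:, -1]                                      # principal eigenvector

# a_hat estimates a only up to a complex scaling factor, so compare directions.
cos_sim = abs(a.conj() @ a_hat) / (np.linalg.norm(a) * np.linalg.norm(a_hat))
print(round(cos_sim, 3))  # close to 1 when interference is weak
```

At low SIR the noise term dominates the covariance and the estimate degrades, which is the error the paper's subspace modification targets.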

The Locus of the Word Frequency Effect in Speech Production: Evidence from the Picture-word Interference Task (말소리 산출에서 단어빈도효과의 위치 : 그림-단어간섭과제에서 나온 증거)

  • Koo, Min-Mo;Nam, Ki-Chun
    • MALSORI / no.62 / pp.51-68 / 2007
  • Two experiments were conducted to determine the exact locus of the frequency effect in speech production. Experiment 1 addressed the question of whether the word frequency effect arises at the stage of lemma selection. A picture-word interference task was performed to test the significance of interactions between the effects of target frequency, distractor frequency, and semantic relatedness. There was a significant interaction between distractor frequency and semantic relatedness, and between target and distractor frequency. Experiment 2 examined whether the word frequency effect is attributable to the lexeme level, which represents the phonological information of words. The methodological logic of Experiment 2 was the same as that of Experiment 1. There was no significant interaction between distractor frequency and phonological relatedness. These results demonstrate that word frequency influences the processes involved in selecting the correct lemma corresponding to an activated lexical concept in speech production.

Nonlinear Interaction between Consonant and Vowel Features in Korean Syllable Perception (한국어 단음절에서 자음과 모음 자질의 비선형적 지각)

  • Bae, Moon-Jung
    • Phonetics and Speech Sciences / v.1 no.4 / pp.29-38 / 2009
  • This study investigated the interaction between consonants and vowels in Korean syllable perception using a speeded classification task (Garner, 1978). Experiment 1 examined whether listeners analytically perceive the component phonemes in CV monosyllables when classification is based on a component phoneme (a consonant or a vowel), and observed a significant redundancy gain and a Garner interference effect. These results imply that the perception of the component phonemes in a CV syllable is not linear. Experiment 2 examined the relation between consonants and vowels at a subphonemic level, comparing classification times based on glottal features (aspiration and lax), place of articulation features (labial and coronal), and vowel features (front and back). Across all feature classifications, there were significant but asymmetric interference effects. Glottal feature-based classification showed the least interference, vowel feature-based classification showed moderate interference, and place of articulation feature-based classification showed the most interference. These results show that glottal features are more independent of vowels, while place features are more dependent on vowels, in syllable perception. To examine the three-way interaction among glottal, place of articulation, and vowel features, Experiment 3 featured a modified Garner task. The outcome of this experiment indicated that glottal consonant features are independent of both place of articulation and vowel features, but place of articulation features are dependent on glottal and vowel features. These results were interpreted to show that speech perception is not abstract and discrete but nonlinear, and that the perception of features corresponds to the hierarchical organization of articulatory features suggested in nonlinear phonology (Clements, 1991; Browman and Goldstein, 1989).

Single-Channel Non-Causal Speech Enhancement to Suppress Reverberation and Background Noise

  • Song, Myung-Suk;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.31 no.8 / pp.487-506 / 2012
  • This paper proposes a speech enhancement algorithm that improves speech intelligibility by suppressing both reverberation and background noise. The algorithm adopts a non-causal single-channel minimum variance distortionless response (MVDR) filter to exploit additional information contained in the noisy-reverberant signals of subsequent frames. The noisy-reverberant signals are decomposed into the desired signal and the interference uncorrelated with the desired signal. The filter equation is then derived from the MVDR criterion to minimize the residual interference without introducing speech distortion. The estimation of the correlation parameter, which plays an important role in determining the overall performance of the system, is mathematically derived from a general statistical reverberation model. Furthermore, practical implementation methods are developed for estimating the sub-parameters required to estimate the correlation parameter. The efficiency of the proposed enhancement algorithm is verified by performance evaluation. The proposed algorithm achieves significant performance improvement in all studied conditions and shows its superiority especially in severely noisy and strongly reverberant environments.
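
The MVDR criterion underlying the abstract above can be sketched in its generic form (a minimal numpy illustration under assumed inputs, not the paper's actual non-causal single-channel derivation): minimize the residual interference power w\u1d40R\u1d65w subject to the distortionless constraint w\u1d40d = 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed-given quantities for the sketch:
L = 8
d = rng.standard_normal(L)              # desired-signal correlation vector
V = rng.standard_normal((L, 200))
R_v = V @ V.T / 200 + 0.01 * np.eye(L)  # interference covariance (regularized)

# MVDR solution: w = R_v^{-1} d / (d^T R_v^{-1} d)
w = np.linalg.solve(R_v, d)
w /= d @ w                              # enforce the distortionless constraint

# The constraint holds by construction:
print(np.isclose(w @ d, 1.0))           # True
```

The filter passes the desired component undistorted (w\u1d40d = 1) while whitening whatever interference R\u1d65 describes; the paper's contribution lies in estimating the correlation parameters that define d and R\u1d65 from a statistical reverberation model.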

Speech Problems of English Laterals by Korean Learners Based on Acoustic Characteristics (한국인 영어 학습자의 설측음 발화의 문제점: 음향음성학적 특성을 중심으로)

  • Kim, Chong-Gu;Kim, Hyun-Gi;Jeon, Byung-Man
    • Speech Sciences / v.7 no.3 / pp.127-138 / 2000
  • The aim of this paper is to identify the problems Korean learners have producing English laterals and to contribute to effective pronunciation education by visualizing pronunciation. We analyzed 18 words containing lateral sounds in five positions: initial, initial consonant cluster, intervocalic, final consonant cluster, and final. The words were analyzed with a high-speed speech analysis system, and we examined the acoustic characteristics of English laterals on spectrograms using voice sustained time (ms) and FL1, FL2, FL3. We had expected the results to show mother-tongue interference in the final sounds, because Korean has similar sounds. The results instead showed that, in initial position, voice sustained time differed most between Korean learners' and native pronunciation. Korean learners also applied the syllable structure of their own mother tongue: for instance, in an initial consonant cluster CCVC, Koreans often treated CC as one syllable and VC as another. This was due to mother-tongue interference, and for the same reason differences between Korean learners and native speakers appeared intervocalically and in final position. We therefore argue for adopting the visualized analysis system in pronunciation instruction.

Single-Channel Speech Separation Using the Time-Frequency Smoothed Soft Mask Filter (시간-주파수 스무딩이 적용된 소프트 마스크 필터를 이용한 단일 채널 음성 분리)

  • Lee, Yun-Kyung;Kwon, Oh-Wook
    • MALSORI / no.67 / pp.195-216 / 2008
  • This paper addresses the problem of single-channel speech separation: extracting the speech signal uttered by the speaker of interest from a mixture of speech signals. We propose to apply time-frequency smoothing to two existing statistical single-channel speech separation algorithms, the soft mask and the minimum-mean-square-error (MMSE) algorithms. The proposed method uses two smoothing filters: the uniform mask filter, whose length is uniform in the time-frequency domain, and the mel-scale filter, whose length is mel-scaled in the frequency domain. In our speech separation experiments, the uniform mask filter improves the speaker-to-interference ratio (SIR) by 2.1 dB and 1 dB for the soft mask and MMSE algorithms, respectively, whereas the mel-scale filter achieves 1.1 dB and 0.8 dB for the same algorithms.
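
The uniform (box) smoothing variant can be sketched as follows. This is an illustrative stand-in: the masks here are computed from oracle speaker power spectrograms, whereas the paper derives them from trained statistical speaker models; the filter sizes are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy power spectrograms of the two speakers (oracle values for illustration).
F, T = 64, 100
S1 = rng.random((F, T))            # target speaker power
S2 = rng.random((F, T))            # interfering speaker power
mix = S1 + S2

soft_mask = S1 / (S1 + S2)         # soft mask in [0, 1]

def box_smooth(m, fw=3, tw=3):
    """Uniform time-frequency smoothing: average over an fw x tw box."""
    k_f = np.ones(fw) / fw
    k_t = np.ones(tw) / tw
    m = np.apply_along_axis(lambda r: np.convolve(r, k_t, mode="same"), 1, m)
    m = np.apply_along_axis(lambda c: np.convolve(c, k_f, mode="same"), 0, m)
    return m

smoothed = box_smooth(soft_mask)
target_est = smoothed * mix        # apply the smoothed mask to the mixture

print(smoothed.shape)              # mask shape is unchanged by smoothing
```

Smoothing suppresses isolated mask fluctuations (musical-noise-like artifacts); the paper's mel-scale variant would instead widen the smoothing window at higher frequencies.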

Binary Mask Criteria Based on Distortion Constraints Induced by a Gain Function for Speech Enhancement

  • Kim, Gibak
    • IEIE Transactions on Smart Processing and Computing / v.2 no.4 / pp.197-202 / 2013
  • Large gains in speech intelligibility can be obtained using the SNR-based binary mask approach. This approach retains the time-frequency (T-F) units of the mixture signal where the target signal is stronger than the interfering noise (masker), e.g., SNR > 0 dB, and removes the T-F units where the interfering noise is dominant. This paper introduces two alternative binary masks based on distortion constraints to improve speech intelligibility. The distortion constraints are induced by a gain function for estimating the short-time spectral amplitude. One binary mask is designed to retain the speech-underestimated T-F units while removing the speech-overestimated T-F units; the other retains the noise-overestimated T-F units while removing the noise-underestimated T-F units. Listening tests with oracle binary masks were conducted to assess the potential of the two masks for improving intelligibility. The results suggest that the two binary masks based on distortion constraints can provide large gains in intelligibility when applied to noise-corrupted speech.
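
The baseline SNR-based binary mask described above is straightforward to sketch (synthetic oracle power values assumed; the paper's contribution, the distortion-constraint masks, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Oracle per-unit powers of target and masker on a toy T-F grid.
F, T = 32, 50
target = rng.random((F, T)) ** 2          # target power per T-F unit
masker = rng.random((F, T)) ** 2          # interfering-noise power per T-F unit

# Keep units where the local SNR exceeds 0 dB, remove the rest.
local_snr_db = 10 * np.log10(target / (masker + 1e-12))
binary_mask = (local_snr_db > 0.0).astype(float)

mixture = target + masker
masked = binary_mask * mixture            # mixture with masker-dominant units zeroed

# Every retained unit satisfies the criterion (target stronger than masker):
kept = binary_mask == 1.0
print(bool(np.all(target[kept] > masker[kept])))  # True
```

The paper replaces the SNR > 0 dB criterion with criteria on whether a gain function over- or under-estimates the speech (or noise) in each unit.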

Construction of a Speech Editing System for Speech Recognition (음성 인식을 위한 편집시스템의 구성)

  • Song, D.S.;Lee, C.W.;Shin, C.W.;Jeong, J.S.;Lee, H.S.
    • Proceedings of the KIEE Conference / 1987.07b / pp.1583-1586 / 1987
  • For effective speech control, we designed a personal computer system with an A/D converter, which transforms the speech signal into digital data displayed graphically on the monitor, and a D/A converter, which transforms the digital data back into an audible speech signal. We analyzed the characteristics of the speech signals produced by the system. We also designed an adaptive noise cancellation algorithm so that noise and interference are cancelled whenever the speech signal is recognized by the computer system. This is a basic system for artificial intelligence.
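
The adaptive noise cancellation mentioned above is classically realized with an LMS filter; the textbook sketch below (synthetic signals, assumed filter length and step size, not the 1987 system's actual implementation) cancels noise in a primary input using a correlated reference input.

```python
import numpy as np

rng = np.random.default_rng(4)

# Primary input: speech + noise filtered through an unknown path.
# Reference input: the noise source alone.
N, L, mu = 20000, 8, 0.01
noise_ref = rng.standard_normal(N)                 # reference noise
h_true = 0.5 * rng.standard_normal(L)              # unknown noise path
speech = 0.3 * np.sin(2 * np.pi * 0.01 * np.arange(N))
primary = speech + np.convolve(noise_ref, h_true)[:N]

w = np.zeros(L)                                    # adaptive filter weights
err = np.zeros(N)
for n in range(L, N):
    x = noise_ref[n - L + 1:n + 1][::-1]           # L most recent reference samples
    y = w @ x                                      # estimate of the filtered noise
    err[n] = primary[n] - y                        # canceller output: roughly speech
    w += mu * err[n] * x                           # LMS weight update

# After convergence the output tracks the speech; the leftover is small.
residual = err[-2000:] - speech[-2000:]
print(float(np.mean(residual ** 2)) < 0.01)
```

The filter converges toward the unknown noise path, so subtracting its output from the primary input leaves (approximately) the speech alone.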

Target Speaker Speech Restoration via Spectral Bases Learning (주파수 특성 기저벡터 학습을 통한 특정화자 음성 복원)

  • Park, Sun-Ho;Yoo, Ji-Ho;Choi, Seung-Jin
    • Journal of KIISE:Software and Applications / v.36 no.3 / pp.179-186 / 2009
  • This paper proposes a target speech extraction method that restores the speech signal of a target speaker from a noisy convolutive mixture of speech and an interference source. We assume that the target speaker is known and that his/her utterances are available at training time. Incorporating additional information extracted from the training utterances into the separation, we combine convolutive blind source separation (CBSS) with non-negative decomposition techniques, e.g., a probabilistic latent variable model. The non-negative decomposition learns a set of bases from the spectrogram of the training utterances, where the bases represent the spectral information corresponding to the target speaker. Based on the learned spectral bases, our method provides two post-processing steps for CBSS. The channel selection step finds the CBSS output channel that dominantly contains the target speech. The reconstruction step recovers the original spectrogram of the target speech from the selected output channel so that the remaining interference source and background noise are suppressed. Experimental results show that our method substantially improves the separation results of CBSS and, as a result, successfully recovers the target speech.
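
The spectral-bases idea can be sketched with plain non-negative matrix factorization (an illustrative stand-in for the paper's probabilistic latent variable model; all data and sizes are synthetic assumptions): learn non-negative bases W from a training spectrogram, then, with W fixed, fit activations to reconstruct a new utterance of the same speaker.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "speaker" whose spectrograms live in a 5-dimensional non-negative cone.
F, T, K = 40, 120, 5
W_true = rng.random((F, K))
V_train = W_true @ rng.random((K, T)) + 1e-6       # training spectrogram

# Learn bases with Euclidean multiplicative updates (Lee-Seung NMF).
W = rng.random((F, K))
H = rng.random((K, T))
for _ in range(300):
    H *= (W.T @ V_train) / (W.T @ W @ H + 1e-9)
    W *= (V_train @ H.T) / (W @ H @ H.T + 1e-9)

# Reconstruction step: W is fixed; only the activations are fitted to new data.
V_test = W_true @ rng.random((K, 30)) + 1e-6       # new target-speaker utterance
H_test = rng.random((K, 30))
for _ in range(300):
    H_test *= (W.T @ V_test) / (W.T @ W @ H_test + 1e-9)

rel_err = np.linalg.norm(V_test - W @ H_test) / np.linalg.norm(V_test)
print(round(float(rel_err), 3))
```

Because the learned bases span only the target speaker's spectral patterns, projecting a CBSS output channel onto them suppresses components (interference, background noise) that the bases cannot represent.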