• Title/Summary/Keyword: Noisy Speech

Speaker Adaptation Using ICA-Based Feature Transformation

  • Jung, Ho-Young;Park, Man-Soo;Kim, Hoi-Rin;Hahn, Min-Soo
    • ETRI Journal
    • /
    • v.24 no.6
    • /
    • pp.469-472
    • /
    • 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features suited to linear regression-based speaker adaptation. These features are obtained by a feature transformation based on independent component analysis (ICA), with the transformation matrices estimated from the training and adaptation data. Since the adaptation data are not sufficient to reliably estimate the ICA-based feature transformation matrix, the matrix estimated from a new speaker's utterance must be adjusted. To cope with this problem, we propose a smoothing method that linearly interpolates between the speaker-independent (SI) and speaker-dependent (SD) feature transformation matrices. Our experiments show that the proposed method is especially effective in the mismatched case: the smoothed feature transformation matrix makes speaker adaptation using noisy speech more robust, which improves adaptation performance.
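The proposed smoothing step can be sketched as a simple linear interpolation between the two transformation matrices; the weight `alpha` and the toy matrices below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def smooth_transform(W_si, W_sd, alpha):
    """Linear interpolation between the speaker-independent (SI) and
    speaker-dependent (SD) ICA feature-transformation matrices.
    alpha=1 falls back entirely to the SI matrix; alpha=0 trusts the
    (possibly unreliably estimated) SD matrix alone."""
    return alpha * W_si + (1.0 - alpha) * W_sd

# Toy 2x2 matrices standing in for ICA feature transforms.
W_si = np.eye(2)
W_sd = np.array([[0.8, 0.2],
                 [0.1, 0.9]])
W = smooth_transform(W_si, W_sd, alpha=0.5)
```

With little adaptation data, a larger `alpha` keeps the smoothed matrix close to the reliable SI estimate.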

Performance Comparison of Speech Recognition Using Body-conducted Signals in Noisy Environment (소음 환경에서 body-conducted 신호를 이용한 음성인식 성능 비교)

  • Choi Dae-Lim;Lee Kwang-Hyun;Lee Yong-Ju;Kim Chong-Kyo
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.57-60
    • /
    • 2004
  • Using the high-noise-environment speech DB currently distributed by the Speech Information Technology & Industry Promotion Center (SiTEC), this paper compares the recognition performance of air-conducted and body-conducted speech. In noisy environments, air-conducted speech collected by an ordinary microphone is easily affected by noise, which degrades the recognition rate, whereas body-conducted speech collected by a vibration pickup microphone is more robust to noise. Based on these characteristics, we compare and analyze the recognition results of ordinary dynamic-microphone speech, with speech enhancement and channel compensation methods applied, against speech collected by three kinds of vibration pickup microphones, and examine the feasibility of a body-conducted speech recognition system.
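The channel compensation step is not named in the abstract; a common choice for this role is cepstral mean normalization (CMN), sketched here purely as an assumption:

```python
import numpy as np

def cmn(cepstra):
    """Cepstral mean normalization: subtract the per-coefficient mean
    over the utterance, removing a stationary channel bias introduced
    by the microphone/transmission path."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Toy utterance: 2 frames x 2 cepstral coefficients.
frames = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
norm = cmn(frames)  # each coefficient column now has zero mean
```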

Speech Enhancement Using Level Adapted Wavelet Packet with Adaptive Noise Estimation

  • Chang, Sung-Wook;Kwon, Young-Hun;Jung, Sung-Il;Yang, Sung-Il;Lee, Kun-Sang
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.2E
    • /
    • pp.87-92
    • /
    • 2003
  • In this paper, a new speech enhancement method using a level-adapted wavelet packet is presented. First, we propose the level-adapted wavelet packet to alleviate a drawback of the conventional node-adapted one in noisy environments. Next, we suggest an adaptive noise estimation method at each node of the level-adapted wavelet packet tree. Then, for more accurate subtraction of the noise component, we propose a new method for estimating the spectral subtraction weight. Finally, we present a modified spectral subtraction method. The proposed method is evaluated under various noise conditions: speech babble, F-16 cockpit noise, factory noise, pink noise, and Volvo car interior noise. For objective evaluation, an SNR test was performed; a spectrogram test and a simple listening test served as subjective evaluations.
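A minimal sketch of weighted spectral subtraction with a spectral floor; the fixed weight `w` stands in for the adaptively estimated subtraction weight the paper proposes, and the floor constant is an assumption:

```python
import numpy as np

def spectral_subtract(mag, noise_est, w=1.0, floor=0.01):
    """Weighted magnitude spectral subtraction with a spectral floor.
    The floor prevents negative magnitudes (a source of musical noise);
    w scales how aggressively the noise estimate is removed."""
    clean = mag - w * noise_est
    return np.maximum(clean, floor * mag)

mag = np.array([1.0, 0.5, 0.2])      # toy magnitude spectrum
noise = np.array([0.3, 0.4, 0.3])    # toy noise estimate
out = spectral_subtract(mag, noise)
```

Bins where the noise estimate exceeds the observed magnitude are clipped to the floor rather than going negative.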

Speech Enhancement Using Blind Signal Separation Combined With Null Beamforming

  • Nam Seung-Hyon;Jr. Rodrigo C. Munoz
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.4E
    • /
    • pp.142-147
    • /
    • 2006
  • Blind signal separation is known as a powerful tool for enhancing noisy speech in many real-world environments. In this paper, we demonstrate that its performance can be further improved by combining it with a null beamformer (NBF). Cascading blind signal separation with null beamforming is equivalent to decomposing the received signals into their direct and reverberant parts. Investigation of the beam patterns of the null beamformer and of blind signal separation reveals that the directional null of the NBF mainly reduces the direct parts of the unwanted signals, whereas blind signal separation reduces the reverberant parts. Further, we show that this decomposition of the received signals can be exploited to solve the local stability problem. Faster and improved separation is therefore obtained by first removing the direct parts with null beamforming. Simulation results using real office recordings confirm this expectation.
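The direct-path cancellation performed by the null beamformer can be illustrated with a two-microphone delay-and-subtract sketch; the integer sample delay and the toy signal are assumptions for illustration only:

```python
import numpy as np

def null_beamformer(x1, x2, delay):
    """Two-microphone delay-and-subtract null beamformer.
    A source whose mic-1 signal equals its mic-2 signal delayed by
    `delay` samples is cancelled exactly; sources from other
    directions pass through. Integer delays only, for illustration."""
    if delay:
        x2d = np.concatenate([np.zeros(delay), x2[:len(x2) - delay]])
    else:
        x2d = x2
    return x1 - x2d

# Interferer arriving off-axis: mic 2 hears it 2 samples earlier.
s = np.sin(np.arange(32, dtype=float))
x1 = s.copy()
x2 = np.concatenate([s[2:], np.zeros(2)])
y = null_beamformer(x1, x2, delay=2)   # direct path nulled
```

After the delay is compensated, the subtraction removes the direct part; a BSS stage behind it would then work mainly on the remaining reverberant part.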

Histogram Enhancement for Robust Speaker Verification (강인한 화자 확인을 위한 히스토그램 개선 기법)

  • Choi, Jae-Kil;Kwon, Chul-Hong
    • MALSORI
    • /
    • no.63
    • /
    • pp.153-170
    • /
    • 2007
  • It is well known that the accuracy of speaker verification systems deteriorates drastically when there is an acoustic mismatch between the speech obtained during training and testing. This paper presents a histogram enhancement technique for MFCCs that improves the robustness of a speaker verification system. The technique transforms the features extracted from the speech within an utterance so that their statistics conform to a reference distribution; the reference distributions proposed in this paper are the uniform distribution and the beta distribution. The transformation modifies the contrast of the MFCC histogram so that verification performance improves both when training and testing are clean and when training is clean but testing is noisy.
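Mapping features onto a uniform reference distribution can be sketched with a rank-based (empirical CDF) transform applied to one MFCC coefficient across an utterance; the exact mapping used in the paper may differ:

```python
import numpy as np

def hist_equalize_uniform(feat):
    """Map one MFCC coefficient's values within an utterance onto a
    uniform (0, 1) reference distribution via their empirical CDF.
    The transform is monotonic, so the frame ordering is preserved."""
    ranks = feat.argsort().argsort()     # rank of each frame: 0..N-1
    return (ranks + 0.5) / len(feat)     # mid-rank -> uniform (0, 1)

c0 = np.array([3.0, -1.0, 2.0, 0.5])    # toy values of one coefficient
u = hist_equalize_uniform(c0)
```

Because both training and test features are forced toward the same reference distribution, the acoustic mismatch between them is reduced.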

A Study on the Speech Recognition for Commands of Ticketing Machine using CHMM (CHMM을 이용한 발매기 명령어의 음성인식에 관한 연구)

  • Kim, Beom-Seung;Kim, Soon-Hyob
    • Journal of the Korean Society for Railway
    • /
    • v.12 no.2
    • /
    • pp.285-290
    • /
    • 2009
  • This paper implements a speech recognition system that recognizes the commands of a ticketing machine (314 station names) in real time using continuous hidden Markov models. Thirty-nine MFCCs are used as feature vectors, and 895 tied-state triphone models are composed to improve the recognition rate. In the system performance evaluation, the multi-speaker-dependent and multi-speaker-independent recognition rates are 99.24% and 98.02%, respectively; in a noisy environment the recognition rate is 93.91%.

Speech Identification of Male and Female Speakers in Noisy Speech for Improving Performance of Speech Recognition System (음성인식 시스템의 성능 향상을 위한 잡음음성의 남성 및 여성화자의 음성식별)

  • Choi, Jae-seung
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.619-620
    • /
    • 2017
  • This paper proposes a gender recognition algorithm that uses a neural network to identify male and female speakers under noisy conditions, since the speaker's gender provides very important information to speech recognition algorithms. The proposed neural network identifies male and female speakers in each segment of the speech using MFCC coefficients. Experimental results show that, under noise conditions with superimposed white noise, good gender recognition results are obtained for both male and female speakers by using MFCC feature vectors of the speech signal.
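A frame-level majority vote over per-frame network outputs is one way to turn segment decisions into an utterance-level gender label; the `frame_scores` below are hypothetical network outputs, not the paper's model:

```python
import numpy as np

def classify_gender(frame_scores, threshold=0.5):
    """Majority vote over per-frame outputs: each score is the
    (hypothetical) network's probability of 'male' for one MFCC
    frame; the utterance label is the class winning most frames."""
    male_frames = (frame_scores > threshold).sum()
    return "male" if male_frames > len(frame_scores) / 2 else "female"

# Stand-in per-frame network outputs for one utterance.
scores = np.array([0.9, 0.8, 0.3, 0.7, 0.6])
label = classify_gender(scores)
```

Voting over many frames makes the utterance-level decision more robust to individual frames corrupted by noise.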

Independent Component Analysis on a Subband Domain for Robust Speech Recognition (음성의 특징 단계에 독립 요소 해석 기법의 효율적 적용을 통한 잡음 음성 인식)

  • Park, Hyeong-Min;Jeong, Ho-Yeong;Lee, Tae-Won;Lee, Su-Yeong
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.37 no.6
    • /
    • pp.22-31
    • /
    • 2000
  • In this paper, we propose a method for removing noise components in the feature extraction process for robust speech recognition. The method is based on blind separation using independent component analysis (ICA): given two noisy speech recordings, the algorithm linearly separates the speech from the unwanted noise signal. To apply ICA as closely as possible to the feature level used for recognition, a new spectral analysis is presented. It modifies the computation of band energies by first averaging the fast Fourier transform (FFT) points in several divided ranges within each mel-scaled band. A simple analysis using the sample variances of the band energies of speech and noise, together with recognition experiments, showed its noise robustness. For noisy speech signals recorded in real environments, the proposed method, which applies ICA to the new spectral analysis, improved recognition performance considerably and was particularly effective at low signal-to-noise ratios (SNRs). The method offers some insight into applying ICA at the feature level and appears useful for robust speech recognition.
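The modified band-energy computation - averaging FFT points within sub-ranges of a band before accumulating energy - can be sketched as follows; the split count, toy spectrum, and exact energy formula are illustrative assumptions:

```python
import numpy as np

def band_energy_averaged(fft_mag, band_bins, n_splits):
    """Band energy computed after first averaging FFT magnitudes
    inside n_splits equal sub-ranges of the band, rather than
    accumulating every FFT bin directly."""
    chunks = np.array_split(fft_mag[band_bins], n_splits)
    means = np.array([c.mean() for c in chunks])
    return (means ** 2).sum()

mag = np.arange(1.0, 9.0)      # toy 8-bin magnitude spectrum
bins = np.arange(0, 8)         # one mel band covering all 8 bins
e = band_energy_averaged(mag, bins, n_splits=4)
```

The within-range averaging smooths out bin-to-bin noise fluctuations before the energy is formed, which is what brings the representation closer to the feature level where ICA is applied.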

Real Time Environmental Classification Algorithm Using Neural Network for Hearing Aids (인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘)

  • Seo, Sangwan;Yook, Sunhyun;Nam, Kyoung Won;Han, Jonghee;Kwon, See Youn;Hong, Sung Hwa;Kim, Dongwook;Lee, Sangmin;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research
    • /
    • v.34 no.1
    • /
    • pp.8-13
    • /
    • 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of their auditory system, and they therefore use hearing aids to compensate for weakened hearing abilities. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately across a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to the hearing aid is suggested. The proposed algorithm classifies speech situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a speech situation classifier. The former extracts seven characteristic features - short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel frequency cepstral coefficients and the power values of mel bands - from the recent input signals of two microphones, and the latter classifies the current speech situation. Experimental results showed that the proposed algorithm could classify the speech situations with an accuracy of over 94.4%. Based on these results, we believe the proposed algorithm can be applied to hearing aids to improve speech intelligibility in noisy environments.
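Three of the seven features named above (short-time energy, zero-crossing rate, and spectral centroid) can be computed per frame as follows; framing, windowing, and the remaining features are omitted for brevity:

```python
import numpy as np

def frame_features(x, sr):
    """Per-frame features: short-time energy and zero-crossing rate
    (time domain), spectral centroid (frequency domain)."""
    energy = float((x ** 2).sum())
    zcr = float((np.diff(np.sign(x)) != 0).mean())
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = float((freqs * mag).sum() / mag.sum())
    return energy, zcr, centroid

sr = 8000
t = np.arange(256) / sr
tone = np.sin(2 * np.pi * 1000 * t)   # pure 1 kHz tone, one frame
e, z, c = frame_features(tone, sr)
```

For a pure 1 kHz tone the centroid lands at 1 kHz, while noisy or music frames would spread energy differently - exactly the kind of contrast the classifier exploits.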

Cepstral Distance and Log-Energy Based Silence Feature Normalization for Robust Speech Recognition (강인한 음성인식을 위한 켑스트럼 거리와 로그 에너지 기반 묵음 특징 정규화)

  • Shen, Guang-Hu;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.278-285
    • /
    • 2010
  • The difference between training and test environments is one of the major causes of performance degradation in noisy speech recognition, and many silence feature normalization methods have been proposed to resolve this inconsistency. Conventional silence feature normalization shows good classification performance at high SNR, but its performance degrades at low SNR because speech/silence classification becomes inaccurate. Cepstral distance, on the other hand, represents the characteristic distribution of speech and silence (or noise) well at low SNR. In this paper, we propose a Cepstral distance and Log-energy based Silence Feature Normalization (CLSFN) method that uses both log energy and the cepstral Euclidean distance to classify speech and silence. Because the proposed method combines the merit of log energy, which is little affected by noise at high SNR, with the merit of cepstral distance, which discriminates speech from silence accurately at low SNR, classification accuracy is expected to improve. Experimental results showed that the proposed CLSFN gives improved recognition performance compared with the conventional SFN-I/II and CSFN methods in all kinds of noisy environments.
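The combined decision rule can be sketched as a per-frame test on both cues; the thresholds and the AND-combination below are illustrative assumptions, not the paper's exact classifier:

```python
import numpy as np

def is_silence(log_energy, cepstrum, noise_cepstrum,
               energy_thr=5.0, dist_thr=1.0):
    """Frame-level speech/silence decision sketch: a frame counts as
    silence when its log energy is low AND its cepstral Euclidean
    distance to a running noise cepstrum estimate is small."""
    dist = np.linalg.norm(cepstrum - noise_cepstrum)
    return log_energy < energy_thr and dist < dist_thr

noise_c = np.zeros(3)   # toy noise cepstrum estimate
quiet = is_silence(2.0, np.array([0.1, 0.0, 0.2]), noise_c)
loud = is_silence(8.0, np.array([2.0, 1.0, 0.5]), noise_c)
```

The energy cue dominates at high SNR and the cepstral-distance cue at low SNR, which is the complementarity the CLSFN method builds on.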