• Title/Summary/Keyword: Speech signals

Post-Processing of IVA-Based 2-Channel Blind Source Separation for Solving the Frequency Bin Permutation Problem (IVA 기반의 2채널 암묵적신호분리에서 주파수빈 뒤섞임 문제 해결을 위한 후처리 과정)

  • Chu, Zhihao;Bae, Keunsung
    • Phonetics and Speech Sciences / v.5 no.4 / pp.211-216 / 2013
  • IVA (Independent Vector Analysis) is a well-known frequency-domain ICA (FD-ICA) method used to solve the frequency permutation problem. It generally works quite well for blind source separation, but still leaves a residual frequency bin permutation problem. This paper proposes a post-processing method which improves the source separation performance of IVA by fixing the remaining frequency permutation. The proposed method makes use of the correlation coefficient of the power ratio between frequency bins of the signals separated by IVA-based 2-channel source separation. Experimental results verified that the proposed method can fix the remaining frequency permutation problem of IVA and improve the speech quality of the separated signals.
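
As a rough illustration of the power-ratio idea described in the abstract, the sketch below (not the authors' implementation; the greedy bin-by-bin sweep, the adjacent-bin reference, and the toy spectrograms are assumptions) swaps the two separated outputs in a frequency bin whenever the swapped power ratio correlates better with the previous bin than the unswapped one.

```python
import numpy as np

def power_ratio(p1, p2, eps=1e-12):
    """Per-frame ratio of source-1 power to the total power in one frequency bin."""
    return p1 / (p1 + p2 + eps)

def fix_permutations(Y1, Y2):
    """Greedy post-processing sketch: sweep up the frequency axis and swap the two
    separated outputs in bin k when the swapped power ratio correlates better with
    the previous (already fixed) bin than the unswapped one does."""
    P1, P2 = np.abs(Y1) ** 2, np.abs(Y2) ** 2
    Y1, Y2 = Y1.copy(), Y2.copy()
    for k in range(1, P1.shape[0]):
        ref = power_ratio(P1[k - 1], P2[k - 1])                # previous, aligned bin
        keep = np.corrcoef(ref, power_ratio(P1[k], P2[k]))[0, 1]
        swap = np.corrcoef(ref, power_ratio(P2[k], P1[k]))[0, 1]
        if swap > keep:                                        # permutation detected
            Y1[k], Y2[k] = Y2[k].copy(), Y1[k].copy()
            P1[k], P2[k] = P2[k].copy(), P1[k].copy()
    return Y1, Y2

# toy usage: two random magnitude "spectrograms" with a deliberate swap in the upper bins
rng = np.random.default_rng(0)
A = rng.rayleigh(size=(64, 200)) * np.linspace(2, 1, 200)
B = rng.rayleigh(size=(64, 200)) * np.linspace(1, 2, 200)
Y1, Y2 = A.copy(), B.copy()
Y1[32:], Y2[32:] = B[32:], A[32:]                              # simulate permuted bins
Y1_fixed, Y2_fixed = fix_permutations(Y1, Y2)
```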

A study on the recognition system of Korean phonemes using filter-bank analysis (필터뱅크 분석법을 사용한 한국어 음소의 인식에 관한 연구)

  • 남문현;주상규
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1987.10b / pp.473-478 / 1987
  • The purpose of this study is to design a phoneme-class recognition system for the Korean language using filter-bank analysis and the zero-crossing rate method. First, the speech signal is split by 16 bandpass filters to obtain its short-time spectrum and digitized by a 16-channel A/D converter. Then, using a set of features extracted from the patterns of the ratio of each channel's energy to the overall energy, decision rules are constructed to recognize unknown speech signals. In this experiment, the recognition rate was about 93.1 percent for 7 vowels in a multi-talker environment and 74.4 percent for 10 initial sounds for a single speaker.
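
The feature set described in the abstract, per-band energy ratios from a 16-channel filter bank plus a zero-crossing rate, can be sketched as follows. This is an illustrative toy, not the 1987 system: the band edges, filter order, sampling rate, and frame length are invented for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000                               # assumed sampling rate
EDGES = np.linspace(100, 7000, 17)       # 16 hypothetical band edges in Hz

def band_energy_ratios(frame, fs=FS):
    """Ratio of each of the 16 band-pass energies to the overall frame energy."""
    total = np.sum(frame ** 2) + 1e-12
    ratios = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, frame)
        ratios.append(np.sum(band ** 2) / total)
    return np.array(ratios)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))

# toy usage on a 32 ms synthetic vowel-like frame
t = np.arange(int(0.032 * FS)) / FS
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 2200 * t)
features = np.concatenate([band_energy_ratios(frame), [zero_crossing_rate(frame)]])
```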

Quality Improvement of Bandwidth Extended Speech Using Mixed Excitation Model (혼합여기모델을 이용한 대역 확장된 음성신호의 음질 개선)

  • Choi Mu Yeol;Kim Hyung Soon
    • MALSORI / no.52 / pp.133-144 / 2004
  • The quality of narrowband speech can be enhanced by bandwidth extension technology. This paper proposes a mixed excitation model and an energy compensation method based on the Gaussian mixture model (GMM). First, we employ a mixed excitation model having both periodic and aperiodic characteristics in the frequency domain. We use a filter bank to extract periodicity features from the filtered signals and model them with a GMM to estimate the mixed excitation. Second, we separate the acoustic space into the voiced and unvoiced parts of speech to compensate more accurately for the energy difference between the narrowband speech and the reconstructed highband or lowband speech. Objective and subjective evaluations show that the quality of wideband speech reconstructed by the proposed method is superior to that of the conventional bandwidth extension method.
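
A minimal sketch of the band-periodicity feature extraction mentioned in the abstract is given below. It is illustrative only: the band layout, frame size, and normalised-autocorrelation periodicity measure are assumptions, and the GMM mapping from these features to the mixed excitation is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 8000                                                         # assumed narrowband rate
BANDS = [(100, 1000), (1000, 2000), (2000, 3000), (3000, 3800)]   # hypothetical bands

def band_periodicity(frame, fs=FS, fmin=60, fmax=400):
    """Peak of the normalised autocorrelation within a plausible pitch-lag range,
    computed per band-pass signal: near 1.0 for periodic (voiced-like) bands,
    near 0.0 for noise-like bands."""
    feats = []
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, frame)
        x = x - x.mean()
        denom = np.dot(x, x) + 1e-12
        ac = np.array([np.dot(x[:-lag], x[lag:]) for lag in lags]) / denom
        feats.append(float(ac.max()))
    return np.array(feats)

# toy usage: a voiced-like frame (100 Hz pulse train plus a little noise)
rng = np.random.default_rng(1)
frame = np.zeros(480)                                 # 60 ms at 8 kHz
frame[::80] = 1.0                                     # 100 Hz pulse train
frame = frame + 0.05 * rng.standard_normal(480)
print(band_periodicity(frame))                        # values near 1.0 indicate a periodic band
```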

Speech Enhancement Using Receding Horizon FIR Filtering

  • Kim, Pyung-Soo;Kwon, Wook-Hyun;Kwon, Oh-Kyu
    • Transactions on Control, Automation and Systems Engineering / v.2 no.1 / pp.7-12 / 2000
  • A new speech enhancement algorithm for speech corrupted by slowly varying additive colored noise is suggested based on a state-space signal model. Due to its FIR structure and the unimportance of long-term past information, the receding horizon (RH) FIR filter, known to be a best linear unbiased estimation (BLUE) filter, is utilized to obtain a noise-suppressed speech signal. As a special case of the colored noise problem, the suggested approach is generalized to perform blind signal separation of two speech signals. It is shown that the exact speech signal is obtained when the incoming speech signal is noise-free.
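
The receding-horizon FIR idea can be illustrated with a batch best-linear-unbiased (generalised least-squares) estimate over the N most recent samples. The sketch below is not the paper's filter: the noise-free horizon dynamics, the white measurement-noise weighting, and the toy oscillator signal model are all assumptions.

```python
import numpy as np

def rh_fir_estimate(Y, A, C, R, N):
    """Receding-horizon FIR sketch: at each time k, estimate x_k as the best
    linear unbiased (generalised least-squares) fit to the N most recent
    measurements, assuming noise-free dynamics x_{i+1} = A x_i over the horizon.
    Y: (T, m) measurements, A: (n, n), C: (m, n), R: (m, m) noise covariance."""
    T, m = Y.shape
    n = A.shape[0]
    Ainv = np.linalg.inv(A)
    # Horizon model: y_{k-j} = C A^{-j} x_k + v_{k-j},  j = 0 .. N-1
    G = np.vstack([C @ np.linalg.matrix_power(Ainv, j) for j in range(N)])
    W = np.kron(np.eye(N), np.linalg.inv(R))          # block-diagonal noise weights
    gain = np.linalg.solve(G.T @ W @ G, G.T @ W)      # BLUE gain for the stacked y
    est = np.full((T, n), np.nan)                     # first N-1 estimates stay NaN
    for k in range(N - 1, T):
        ystack = Y[k - np.arange(N)].reshape(-1)      # y_k, y_{k-1}, ..., y_{k-N+1}
        est[k] = gain @ ystack
    return est

# toy usage: a noisy sinusoid modelled by a 2-state rotation (harmonic oscillator)
w = 2 * np.pi * 200 / 8000
A = np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
C = np.array([[1.0, 0.0]])
rng = np.random.default_rng(2)
x = np.array([1.0, 0.0])
ys = []
for _ in range(400):
    ys.append(C @ x + 0.3 * rng.standard_normal(1))
    x = A @ x
Y = np.array(ys)                                      # shape (400, 1)
xhat = rh_fir_estimate(Y, A, C, R=np.array([[0.09]]), N=32)
enhanced = xhat[:, 0]                                 # noise-suppressed signal estimate
```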

Korean Broadcast News Transcription Using Morpheme-based Recognition Units

  • Kwon, Oh-Wook;Alex Waibel
    • The Journal of the Acoustical Society of Korea / v.21 no.1E / pp.3-11 / 2002
  • Broadcast news transcription is one of the hardest tasks in speech recognition because broadcast speech signals have much variability in speech quality, channel, and background conditions. We developed a Korean broadcast news speech recognizer. We used a morpheme-based dictionary and language model to reduce the out-of-vocabulary (OOV) rate. We concatenated the original morpheme pairs of short length or high frequency in order to reduce insertion and deletion errors due to short morphemes. We used a lexicon with multiple pronunciations to reflect inter-morpheme pronunciation variations without severe modification of the search tree. By using merged morphemes as recognition units, we achieved an OOV rate of 1.7%, comparable to European languages, with a 64k vocabulary. We implemented a hidden Markov model-based recognizer with vocal tract length normalization and online speaker adaptation by maximum likelihood linear regression. Experimental results showed that the recognizer yielded a 21.8% morpheme error rate for anchor speech and 31.6% for mostly noisy reporter speech.
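
The merged-morpheme units, concatenating morpheme pairs that are short or frequent, can be sketched roughly as below. This is an illustrative toy, not the authors' procedure: the thresholds, the '+' joiner, and the romanised pseudo-morphemes are invented.

```python
from collections import Counter

def merge_morpheme_pairs(sentences, min_pair_count=2, max_short_len=1):
    """Sketch of building merged recognition units: count adjacent morpheme pairs
    over a corpus and join a pair when it is frequent or when either member is
    very short (short morphemes tend to cause insertion/deletion errors)."""
    pair_counts = Counter()
    for sent in sentences:
        pair_counts.update(zip(sent, sent[1:]))
    merges = {pair for pair, count in pair_counts.items()
              if count >= min_pair_count or min(len(pair[0]), len(pair[1])) <= max_short_len}

    def remap(sent):
        out, i = [], 0
        while i < len(sent):
            if i + 1 < len(sent) and (sent[i], sent[i + 1]) in merges:
                out.append(sent[i] + "+" + sent[i + 1])       # merged recognition unit
                i += 2
            else:
                out.append(sent[i])
                i += 1
        return out

    return [remap(sent) for sent in sentences], merges

# toy usage with romanised pseudo-morphemes (hypothetical data)
corpus = [["hak", "gyo", "e", "ga", "n", "da"],
          ["hak", "gyo", "e", "seo", "o", "n", "da"]]
merged_corpus, merges = merge_morpheme_pairs(corpus)
```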

Speech Intelligibility Analysis on the Vibration Sound of the Glass Window of a Conference Room (회의실 유리창 진동음의 음성 명료도 분석)

  • Kim, Hee-Dong;Kim, Yoon-Ho;Kim, Seock-Hyun
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.17 no.4 s.121 / pp.363-369 / 2007
  • The purpose of the study is to obtain acoustical information to prevent eavesdropping through the glass window. Speech intelligibility was investigated for the vibration sound detected from the glass window of a conference room. An objective test using the speech transmission index (STI) was performed to quantitatively estimate the speech intelligibility. The STI was determined based on the modulation transfer function (MTF) of the room-glass window system. Using a maximum length sequence (MLS) signal as the sound source, impulse responses of the glass window and the MTF were determined from the signals of accelerometers and a laser Doppler vibrometer. Finally, the speech intelligibility of the interior sound and of the window vibration were compared under different sound pressure levels and amplifier gains to confirm the effect of the measurement condition on speech intelligibility.
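
The MTF step referred to above has a standard form, Schroeder's relation between the squared impulse response and the modulation transfer function, which the sketch below uses. The band handling, modulation frequencies, and the heavily reduced STI-style averaging are assumptions rather than the paper's full STI procedure.

```python
import numpy as np

def mtf_from_impulse_response(h, fs, mod_freqs):
    """Schroeder's relation: m(F) = |FT{h^2}(F)| / sum(h^2), evaluated at each
    modulation frequency F in Hz, for an impulse response h sampled at fs."""
    h2 = np.asarray(h, dtype=float) ** 2
    t = np.arange(len(h2)) / fs
    denom = h2.sum() + 1e-12
    return np.array([np.abs(np.sum(h2 * np.exp(-2j * np.pi * F * t))) / denom
                     for F in mod_freqs])

def sti_like_index(h, fs, mod_freqs=(0.63, 1.25, 2.5, 5.0, 10.0)):
    """Heavily reduced STI-style index (no octave-band weighting, no noise term):
    convert each MTF value to an apparent SNR, clip to +/-15 dB, map to [0, 1], average."""
    m = np.clip(mtf_from_impulse_response(h, fs, mod_freqs), 1e-6, 1 - 1e-6)
    snr_app = np.clip(10 * np.log10(m / (1 - m)), -15.0, 15.0)
    return float(np.mean((snr_app + 15.0) / 30.0))

# toy usage: an exponentially decaying reverberant impulse response
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
rng = np.random.default_rng(3)
h = rng.standard_normal(t.size) * np.exp(-t / 0.12)   # roughly 0.8 s T60-style decay, truncated
print(sti_like_index(h, fs))
```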

Single-Channel Speech Separation Using the Time-Frequency Smoothed Soft Mask Filter (시간-주파수 스무딩이 적용된 소프트 마스크 필터를 이용한 단일 채널 음성 분리)

  • Lee, Yun-Kyung;Kwon, Oh-Wook
    • MALSORI / no.67 / pp.195-216 / 2008
  • This paper addresses the problem of single-channel speech separation: extracting the speech signal uttered by the speaker of interest from a mixture of speech signals. We propose to apply time-frequency smoothing to the existing statistical single-channel speech separation algorithms, the soft mask and the minimum mean square error (MMSE) algorithms. In the proposed method, we use two smoothing filters. One is the uniform mask filter, whose filter length is uniform in the time-frequency domain, and the other is the mel-scale filter, whose filter length is mel-scaled in the time domain. In our speech separation experiments, the uniform mask filter improves the speaker-to-interference ratio (SIR) by 2.1 dB and 1.0 dB for the soft mask algorithm and the MMSE algorithm, respectively, whereas the mel-scale filter achieves 1.1 dB and 0.8 dB improvements for the same algorithms.
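
To make the smoothing step concrete, the sketch below smooths a time-frequency soft mask with a uniform moving-average window before applying it to the mixture spectrogram. The random mask and spectrogram, the window sizes, and the use of scipy.ndimage are assumptions; it is not the paper's soft mask or MMSE estimator.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_soft_mask(mask, freq_len=3, time_len=5):
    """Uniform time-frequency smoothing of a soft mask in [0, 1].
    mask has shape (n_freq_bins, n_frames)."""
    return uniform_filter(mask, size=(freq_len, time_len), mode="nearest")

def apply_mask(mix_stft, mask):
    """Apply a (possibly smoothed) soft mask to a complex mixture STFT."""
    return mask * mix_stft

# toy usage with a random mixture spectrogram and a noisy stand-in for a soft mask
rng = np.random.default_rng(4)
mix = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
raw_mask = rng.random((257, 100))                     # values already in [0, 1]
target_est = apply_mask(mix, smooth_soft_mask(raw_mask))
```

A mel-scaled variant would make the smoothing window a function of the band index rather than a constant.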

Distorted Speech Rejection For Automatic Speech Recognition under CDMA Wireless Communication (CDMA이동통신환경에서의 음성인식을 위한 왜곡음성신호 거부방법)

  • Kim Nam Soo;Chang Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.597-601 / 2004
  • This paper introduces a pre-rejection technique for wireless-channel-distorted speech with application to automatic speech recognition (ASR). Based on an analysis of speech signals distorted over a wireless communication channel, we propose a method to reject the channel-distorted speech with a small computational load. A number of simulation results show that the pre-rejection algorithm enhances the robustness of the speech recognition operation.

CASA-based Front-end Using Two-channel Speech for the Performance Improvement of Speech Recognition in Noisy Environments (잡음환경에서의 음성인식 성능 향상을 위한 이중채널 음성의 CASA 기반 전처리 방법)

  • Park, Ji-Hun;Yoon, Jae-Sam;Kim, Hong-Kook
    • Proceedings of the IEEK Conference / 2007.07a / pp.289-290 / 2007
  • In order to improve the performance of a speech recognition system in the presence of noise, we propose a noise-robust front-end that uses two-channel speech signals and separates speech from noise based on computational auditory scene analysis (CASA). The main cues for the separation are the interaural time difference (ITD) and the interaural level difference (ILD) between the two-channel signals. As a result, 39 cepstral coefficients are extracted from the separated speech components. Speech recognition experiments show that the proposed front-end outperforms the ETSI front-end that uses single-channel speech.
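
The ITD/ILD cues can be estimated per time-frequency unit roughly as in the sketch below (not the paper's front-end; the phase-based ITD estimate, the fixed thresholds, and the synthetic two-channel mixture are assumptions).

```python
import numpy as np

def itd_ild_mask(L, R, fs, nfft, itd_max=300e-6, ild_max=3.0):
    """Per time-frequency unit: ITD from the inter-channel phase difference,
    ILD from the magnitude ratio in dB; keep units whose cues point to a
    near-central (target) direction. L, R: complex STFTs, shape (n_bins, n_frames).
    (The phase-based ITD wraps at high frequencies; acceptable for a sketch.)"""
    eps = 1e-12
    freqs = np.arange(L.shape[0]) * fs / nfft
    phase_diff = np.angle(L * np.conj(R))                      # radians
    itd = phase_diff / (2 * np.pi * np.maximum(freqs, 1.0)[:, None])
    ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    return (np.abs(itd) < itd_max) & (np.abs(ild) < ild_max)

# toy usage: target identical in both channels, interferer delayed and attenuated in R
rng = np.random.default_rng(5)
nfft, fs = 512, 16000
bins = nfft // 2 + 1
target = rng.standard_normal((bins, 60)) + 1j * rng.standard_normal((bins, 60))
noise = rng.standard_normal((bins, 60)) + 1j * rng.standard_normal((bins, 60))
freqs = np.arange(bins) * fs / nfft
delay = np.exp(-2j * np.pi * freqs * 500e-6)[:, None]          # 0.5 ms interferer lag
L, R = target + noise, target + 0.5 * delay * noise
mask = itd_ild_mask(L, R, fs, nfft)
separated = mask * L                                           # crude target estimate
```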

Variations of Autocovariances of Speech and its related Signals in time, frequency and quefrency domains (음성 및 음성 관련 신호의 주파수 및 Quefrency 영역에서의 자기공분산 변화)

  • Kim, Seon-Il
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2011.05a / pp.340-343 / 2011
  • To distinguish a group of speech signals from nonspeech signals, several features in the frequency, quefrency, and time domains can be used. It is very important to use features that differentiate the two signal groups. As such a feature, the autocorrelation method was proposed and the variances between the groups were studied; previously, autocovariances were calculated only for the time-domain signal. Here, the signals were divided into segments of 128 samples and transformed to the frequency and quefrency domains. Autocovariances between corresponding coefficients of the FFT and quefrency segments were computed and averaged over a wide spectrum. The results make clear that the autocovariances in the frequency domain show large differences between the two groups of signals.
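
A sketch of the segment-wise autocovariance computation described above is given below. It is illustrative only: the lag of one segment, the real-cepstrum definition used for the quefrency domain, and the toy tone-versus-noise comparison are assumptions.

```python
import numpy as np

def segment_domain_matrix(x, seg_len=128, domain="frequency"):
    """Split x into consecutive seg_len-sample segments and map each segment
    to the chosen domain: 'time' (raw samples), 'frequency' (|FFT|), or
    'quefrency' (real cepstrum, i.e. inverse FFT of the log magnitude spectrum)."""
    n_seg = len(x) // seg_len
    segs = x[: n_seg * seg_len].reshape(n_seg, seg_len)
    if domain == "time":
        return segs
    spec = np.abs(np.fft.rfft(segs, axis=1))
    if domain == "frequency":
        return spec
    return np.fft.irfft(np.log(spec + 1e-12), axis=1)          # quefrency domain

def mean_coefficient_autocovariance(mat, lag=1):
    """Autocovariance of each coefficient (column) across segments at the given
    segment lag, averaged over all coefficients."""
    mat = mat - mat.mean(axis=0, keepdims=True)
    return float(np.mean(mat[:-lag] * mat[lag:]))

# toy comparison of a tone-like signal and white noise, 128-sample segments
fs = 8000
t = np.arange(fs) / fs
rng = np.random.default_rng(6)
signals = {"tone-like": np.sin(2 * np.pi * 200 * t), "noise": rng.standard_normal(fs)}
for name, sig in signals.items():
    feat = mean_coefficient_autocovariance(segment_domain_matrix(sig, domain="frequency"))
    print(name, feat)
```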
