• Title/Summary/Keyword: Speech Signal Model


A Study on Speech Separation using Sinusoidal Model and Psycoacoustics Model (정현파 모델과 사이코어쿠스틱스 모델을 이용한 음성 분리에 관한 연구)

  • Hwang, Sun-Il; Han, Doo-Jin; Kwon, Chul-Hyun; Shin, Dae-Kyu; Park, Sang-Hui
    • Proceedings of the KIEE Conference / 2001.07d / pp.2622-2624 / 2001
  • In this paper, speaker separation is applied when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite. A method is proposed that separates the summed speech signals, and the similarity between the original and separated signals is verified by their cross-correlation. Frequency sampling based on the sinusoidal model is used to separate a composite of two vocalic (voiced) speech signals, and noise masking based on the psychoacoustic model is used to separate a composite of vocalic and non-vocalic speech.

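The similarity check described above — cross-correlating the original and separated signals — can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def xcorr_similarity(x, y):
    """Peak of the normalized cross-correlation between two signals;
    near 1 when one is a delayed, scaled copy of the other."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    denom = np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))
    if denom == 0.0:
        return 0.0
    return float(np.max(np.abs(np.correlate(x, y, mode="full"))) / denom)

# A correctly separated signal should score near 1 against the original;
# an unrelated signal should score far lower.
t = np.linspace(0.0, 1.0, 800, endpoint=False)
original = np.sin(2 * np.pi * 5 * t)
separated = 0.7 * np.roll(original, 16)        # delayed, scaled copy
unrelated = np.sin(2 * np.pi * 37 * t + 1.3)
sim_good = xcorr_similarity(original, separated)
sim_bad = xcorr_similarity(original, unrelated)
```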

Split Model Speech Analysis Techniques for Wideband Speech Signal

  • Park YoungHo; Ham MyungKyu; You KwangBock; Bae MyungJin
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.20-23 / 1999
  • In this paper, the Split Model Analysis algorithm, which can generate a wideband speech signal from the spectral information of a narrowband signal, is developed. The algorithm separates the $10^{th}$ order LPC model into five cascade-connected $2^{nd}$ order models. Using the less complex $2^{nd}$ order models avoids the complicated nonlinear relationships between the model parameters and the full set of poles of the LPC model. The relationship between the model parameters and the corresponding analog poles is derived and applied to each $2^{nd}$ order model. The wideband speech signal is then obtained by changing only the sampling rate.

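The factorization this abstract describes — a $10^{th}$ order LPC denominator split into five cascaded $2^{nd}$ order sections — can be sketched by pairing conjugate roots of the polynomial (a minimal illustration; the paper's analog-pole mapping is not reproduced here, and the pole values below are made up):

```python
import numpy as np

def split_lpc(a):
    """Split an even-order all-pole polynomial [1, a1, ..., aN] into
    N/2 real second-order sections by pairing conjugate roots."""
    roots = np.roots(a)
    complex_roots = [r for r in roots if r.imag > 1e-12]
    real_roots = sorted(r.real for r in roots if abs(r.imag) <= 1e-12)
    sections = [np.real(np.poly([r, np.conj(r)])) for r in complex_roots]
    for i in range(0, len(real_roots), 2):
        sections.append(np.real(np.poly(real_roots[i:i + 2])))
    return sections

# Build a stable 10th-order model from five resonances (pole
# frequencies/radii are illustrative, not taken from the paper).
poles = []
for f, r in [(0.05, 0.95), (0.12, 0.90), (0.20, 0.92), (0.30, 0.88), (0.42, 0.90)]:
    p = r * np.exp(2j * np.pi * f)
    poles += [p, np.conj(p)]
a = np.real(np.poly(poles))        # [1, a1, ..., a10]
sections = split_lpc(a)            # five [1, b1, b2] sections

# Cascading (convolving) the sections reproduces the 10th-order polynomial.
recon = np.array([1.0])
for s in sections:
    recon = np.convolve(recon, s)
```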

Split Model Speech Analysis Techniques for Speech Signal Enhancement

  • Park, Young-Ho; You, Kwang-Bock; Bae, Myung-Jin
    • Proceedings of the IEEK Conference / 1999.11a / pp.1135-1138 / 1999
  • In this paper, the Split Model Analysis algorithm, which can generate a wideband speech signal from the spectral information of a narrowband signal, is developed. The algorithm separates the $10^{th}$ order LPC model into five cascade-connected $2^{nd}$ order models. Using the less complex $2^{nd}$ order models avoids the complicated nonlinear relationships between the model parameters and the full set of poles of the LPC model. The relationship between the model parameters and the corresponding analog poles is derived and applied to each $2^{nd}$ order model. The wideband speech signal is then obtained by changing only the sampling rate.


HMM Based Endpoint Detection for Speech Signals

  • Lee Yonghyung; Oh Changhyuck
    • Proceedings of the Korean Statistical Society Conference / 2001.11a / pp.75-76 / 2001
  • An endpoint detection method for speech signals based on a hidden Markov model (HMM) is proposed. The proposed algorithm turns out to be quite satisfactory when applied to isolated-word speech recognition.

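A minimal sketch in the spirit of this abstract: a two-state (silence/speech) HMM over frame log-energies, decoded with Viterbi to locate the endpoints. All model parameters here are illustrative placeholders, not values from the paper:

```python
import numpy as np

def viterbi_endpoints(energy, mu=(0.0, 1.0), var=(0.04, 0.04), p_stay=0.95):
    """Viterbi decoding of a two-state (0 = silence, 1 = speech) HMM
    with Gaussian emissions over per-frame log-energies."""
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)
    log_a = np.log(np.array([[p_stay, 1.0 - p_stay],
                             [1.0 - p_stay, p_stay]]))

    def loglik(e):
        return -0.5 * np.log(2.0 * np.pi * var) - (e - mu) ** 2 / (2.0 * var)

    n = len(energy)
    delta = np.zeros((n, 2))
    psi = np.zeros((n, 2), dtype=int)
    delta[0] = np.log([0.5, 0.5]) + loglik(energy[0])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_a   # scores[i, j]: prev i -> cur j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], [0, 1]] + loglik(energy[t])
    path = np.zeros(n, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(n - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

# Synthetic log-energy track: silence, a word, silence.
rng = np.random.default_rng(0)
energy = np.concatenate([np.zeros(20), np.ones(30), np.zeros(20)])
energy = energy + 0.05 * rng.standard_normal(len(energy))
path = viterbi_endpoints(energy)
start = int(np.argmax(path == 1))                       # first speech frame
end = len(path) - 1 - int(np.argmax(path[::-1] == 1))   # last speech frame
```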

IMBE Model Based SNR Estimation of Continuous Speech Signals (연속음성신호에서 IMBE 모델을 이용한 SNR 추정 연구)

  • Park, Hyung-Woo; Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.29 no.2 / pp.148-153 / 2010
  • In speech signal processing, a speech signal corrupted by noise must be enhanced to improve its quality, and the noise estimation methods used for this need to adapt to changing environments. Typically, the noise profile is renewed during silence regions to avoid the influence of speech, so the voice regions must be located before noise estimation; if the received signal contains no silence region, such methods cannot be applied. In this paper, an SNR estimation method for continuous speech signals is proposed. In the MBE excitation model, a speech signal consists of voiced and unvoiced bands, and the energy of speech is concentrated mostly in the voiced regions, so the SNR can be estimated from the ratio of voiced-region energy to unvoiced-region energy. The IMBE vocoder is used to classify each band of the segmented speech as voiced or unvoiced, the segmental SNR is computed from this classification and the energy of each band, and the SNR of the continuous speech signal is then estimated.
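The core idea — estimating SNR from the ratio of voiced-band to unvoiced-band energy — can be sketched as below. This is a simplified stand-in; the IMBE vocoder's actual band decisions are not reproduced, and the band values are made up:

```python
import numpy as np

def band_snr_estimate(band_energy, voiced):
    """Crude segment-level SNR estimate from an MBE-style band split.

    `band_energy`: energy per frequency band; `voiced`: boolean flag per
    band (True = declared voiced).  Following the paper's premise that
    speech energy concentrates in voiced bands, the ratio of voiced- to
    unvoiced-band energy serves as an SNR proxy."""
    band_energy = np.asarray(band_energy, dtype=float)
    voiced = np.asarray(voiced, dtype=bool)
    ev = band_energy[voiced].sum()     # "speech-dominated" energy
    eu = band_energy[~voiced].sum()    # "noise-dominated" energy
    return 10.0 * np.log10(ev / eu)

# Illustrative frame: strong harmonics in voiced bands, noise elsewhere.
energy = np.array([10.0, 8.0, 6.0, 0.5, 0.4, 0.3, 0.2, 0.2])
flags = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
snr_db = band_snr_estimate(energy, flags)
```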

Speech Recognition Performance Improvement using Gamma-tone Feature Extraction Acoustic Model (감마톤 특징 추출 음향 모델을 이용한 음성 인식 성능 향상)

  • Ahn, Chan-Shik; Choi, Ki-Ho
    • Journal of Digital Convergence / v.11 no.7 / pp.209-214 / 2013
  • To improve the recognition performance of speech recognition systems, a method that incorporates human listening characteristics into the system is proposed. In noisy environments, the desired speech signal is selected by separating the speech signal from the noise; in practical systems, however, performance degrades because speech detection becomes inaccurate under noise-induced environmental changes and the trained model no longer matches the input. In this paper, feature extraction using gammatone filters and a learning scheme using an acoustic model are proposed to improve speech recognition. The proposed feature extraction reflects human auditory perception by applying auditory scene analysis within the model-training process for recognition. In the performance evaluation, removing noise from signals at -10 dB and -5 dB SNR yielded confirmed performance improvements of 3.12 dB and 2.04 dB SNR, respectively.
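A rough sketch of gammatone-based feature extraction as described above, using the standard 4th-order gammatone impulse response with ERB-scale bandwidths (channel spacing and the rest of the front end are simplifications, not the paper's exact pipeline):

```python
import numpy as np

def gammatone_ir(fc, fs, order=4, duration=0.064):
    """Impulse response of a 4th-order gammatone filter at centre
    frequency fc (Hz), with ERB-scale bandwidth (Glasberg-Moore:
    ERB(f) = 24.7 * (4.37 f / 1000 + 1)); unit-energy normalized."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * 24.7 * (4.37 * fc / 1000.0 + 1.0)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.sqrt(np.sum(g ** 2))

def gammatone_features(x, fs, centre_freqs):
    """Log-energy of the signal in each gammatone channel."""
    return np.array([np.log(np.sum(np.convolve(x, gammatone_ir(fc, fs),
                                               mode="same") ** 2) + 1e-12)
                     for fc in centre_freqs])

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)           # 500 Hz test tone
feats = gammatone_features(tone, fs, [250, 500, 1000, 2000])
```

As expected, the channel centred on the tone's frequency responds most strongly.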

Performance Improvement in the Multi-Model Based Speech Recognizer for Continuous Noisy Speech Recognition (연속 잡음 음성 인식을 위한 다 모델 기반 인식기의 성능 향상에 대한 연구)

  • Chung, Yong-Joo
    • Speech Sciences / v.15 no.2 / pp.55-65 / 2008
  • Recently, the multi-model based speech recognizer has been used quite successfully for noisy speech recognition. For the selection of the reference HMM (hidden Markov model) which best matches the noise type and SNR (signal-to-noise ratio) of the input testing speech, the estimation of the SNR value using a VAD (voice activity detection) algorithm and the classification of the noise type based on a GMM (Gaussian mixture model) have been done separately in the multi-model framework. As the SNR estimation process is vulnerable to errors, we propose an efficient method which classifies the SNR values and noise types simultaneously. The KL (Kullback-Leibler) distance between the single Gaussian distributions for the noise signal during training and testing is utilized for the classification. Recognition experiments on the Aurora 2 database show the usefulness of the model compensation method in the multi-model based speech recognizer. Further performance improvement was achievable by combining the probability density function of the MCT (multi-condition training) model with that of the reference HMM compensated by D-JA (data-driven Jacobian adaptation).

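The KL distance between single Gaussian distributions, used above to match training and testing noise conditions, has a simple closed form in the univariate case; a minimal sketch:

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(p || q) for univariate Gaussians p and q."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q
                  - 1.0)

d_same = gaussian_kl(0.0, 1.0, 0.0, 1.0)    # identical -> 0
d_shift = gaussian_kl(0.0, 1.0, 2.0, 1.0)   # mean mismatch
d_pq = gaussian_kl(0.0, 1.0, 0.0, 4.0)      # variance mismatch ...
d_qp = gaussian_kl(0.0, 4.0, 0.0, 1.0)      # ... is asymmetric
```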

Recursive Estimation using the Hidden Filter Model for Enhancing Noisy Speech

  • Kang, Yeong-Tae
    • The Journal of the Acoustical Society of Korea / v.15 no.3E / pp.27-30 / 1996
  • A recursive estimation method for the enhancement of speech contaminated by white noise is proposed. The method is based on a Kalman filter with a time-varying parametric model for the clean speech signal, where a hidden filter model is used to model the clean speech. An approximate improvement of 4-5 dB in SNR is achieved at 5 and 10 dB input SNR, respectively.

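A scalar Kalman-filter sketch in the spirit of this abstract, using a fixed AR(1) model for the clean signal (the paper uses a time-varying parametric model; the constants and test signal below are illustrative):

```python
import numpy as np

def kalman_enhance(y, a=0.95, q=0.1, r=1.0):
    """Scalar Kalman filter: clean-signal model x[t] = a*x[t-1] + w[t]
    (process-noise variance q), observation y[t] = x[t] + v[t]
    (measurement-noise variance r)."""
    xhat, p = 0.0, 1.0
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        xhat, p = a * xhat, a * a * p + q        # predict
        k = p / (p + r)                          # Kalman gain
        xhat = xhat + k * (yt - xhat)            # update
        p = (1.0 - k) * p
        out[t] = xhat
    return out

def snr_db(ref, sig):
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((ref - sig) ** 2))

rng = np.random.default_rng(1)
n = 2000
clean = np.zeros(n)                               # synthetic AR(1) "speech"
for t in range(1, n):
    clean[t] = 0.95 * clean[t - 1] + np.sqrt(0.1) * rng.standard_normal()
noisy = clean + rng.standard_normal(n)            # roughly 0 dB input SNR
est = kalman_enhance(noisy)
gain = snr_db(clean, est) - snr_db(clean, noisy)  # SNR improvement in dB
```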

Speech Enhancement Using Receding Horizon FIR Filtering

  • Kim, Pyung-Soo; Kwon, Wook-Hyu; Kwon, Oh-Kyu
    • Transactions on Control, Automation and Systems Engineering / v.2 no.1 / pp.7-12 / 2000
  • A new speech enhancement algorithm for speech corrupted by slowly varying additive colored noise is suggested, based on a state-space signal model. Owing to its FIR structure and the unimportance of long-term past information, the receding-horizon (RH) FIR filter, known to be a best linear unbiased estimator (BLUE), is utilized to obtain the noise-suppressed speech signal. As a special case of the colored-noise problem, the suggested approach is generalized to perform single blind signal separation of two speech signals. It is shown that the exact speech signal is obtained when the incoming speech signal is noise-free.

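A much-simplified illustration of the receding-horizon idea: for a locally constant state in white measurement noise, the unbiased FIR estimate over the horizon reduces to a moving average of the most recent samples. The paper's state-space BLUE FIR filter is more general; this sketch only shows the FIR, finite-memory structure:

```python
import numpy as np

def rh_fir_filter(y, horizon=16):
    """Moving-horizon FIR estimate: each output uses only the most
    recent `horizon` measurements (finite memory, no long-term past)."""
    y = np.asarray(y, dtype=float)
    out = np.empty(len(y))
    for t in range(len(y)):
        out[t] = y[max(0, t - horizon + 1):t + 1].mean()
    return out

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1000)
slow = np.sin(2 * np.pi * 1.5 * t)                 # slowly varying signal
noisy = slow + 0.5 * rng.standard_normal(len(t))   # additive white noise
est = rh_fir_filter(noisy, horizon=16)
mse_in = float(np.mean((noisy - slow) ** 2))
mse_out = float(np.mean((est - slow) ** 2))
```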

A Study on Speech Separation in Cochannel using Sinusoidal Model (Sinusoidal Model을 이용한 Cochannel상에서의 음성분리에 관한 연구)

  • Park, Hyun-Gyu; Shin, Joong-In; Park, Sang-Hee
    • Proceedings of the KIEE Conference / 1997.11a / pp.597-599 / 1997
  • Cochannel speaker separation is employed when speech from two talkers has been summed into one signal and it is desirable to recover one or both of the speech signals from the composite signal. Cochannel speech occurs in many common situations, such as when two AM signals containing speech are transmitted on the same frequency or when two people speak simultaneously (e.g., on the telephone). In this paper, a method for separating speech in such situations is proposed; in particular, only the voiced sounds among the speech states are separated. The similarity between the original and separated signals is then verified by their cross-correlation.

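The analysis step of the sinusoidal model used above — picking the dominant spectral peaks of a windowed frame — can be sketched as follows (a minimal illustration with made-up tone frequencies; real systems add phase estimation and frame-to-frame peak tracking):

```python
import numpy as np

def pick_sinusoids(frame, fs, n_peaks=3):
    """Return the frequencies (Hz) of the strongest spectral peaks of
    one frame — the analysis step of a sinusoidal model."""
    win = np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(frame * win))
    # keep local maxima only, then take the n_peaks strongest
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    peaks.sort(key=lambda k: mag[k], reverse=True)
    return sorted(k * fs / len(frame) for k in peaks[:n_peaks])

fs = 8000
t = np.arange(512) / fs
# two "talkers" idealized as tones at 200 Hz and 850 Hz
frame = np.sin(2 * np.pi * 200 * t) + 0.8 * np.sin(2 * np.pi * 850 * t)
freqs = pick_sinusoids(frame, fs, n_peaks=2)
```

The recovered peak frequencies land on the nearest FFT bins of the two component tones, which is what a separation system would then group per talker.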