• Title/Summary/Keyword: voiced/unvoiced classification


Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • Son, Young-Ho;Bae, Keun-Sung
    • Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.41-54
    • /
    • 1998
  • Speech signals are, depending on the characteristics of the waveform, classified as voiced sound, unvoiced sound, or silence. Voiced sound, produced by air flow generated by the vibration of the vocal cords, is quasi-periodic, while unvoiced sound, produced by turbulent air flow passing through a constriction in the vocal tract, is noise-like. Silence represents the ambient noise signal during the absence of speech. The need to decide whether a given segment of a speech waveform should be classified as voiced, unvoiced, or silence arises in many speech analysis systems. In this paper, a voiced/unvoiced/silence classification algorithm using spectral change in the wavelet-transformed signal is proposed, and experimental results are presented and discussed.


Voiced, Unvoiced, and Silence Classification of Human Speech Signals by Emphasis Characteristics of the Spectrum (Spectrum 강조특성을 이용한 음성신호에서 Voiced-Unvoiced-Silence 분류)

  • 배명수;안수길
    • The Journal of the Acoustical Society of Korea
    • /
    • v.4 no.1
    • /
    • pp.9-15
    • /
    • 1985
  • In this paper, we describe a new algorithm for deciding whether a given segment of a speech signal should be classified as voiced speech, unvoiced speech, or silence, based on parameters measured from the signal. The parameter used for the voiced-unvoiced classification is the area of each zero-crossing interval, given by multiplying the magnitude by the inverse zero-crossing rate of the speech signal. The parameter employed for the unvoiced-silence classification is the positive-area summation over four-millisecond intervals of the high-frequency-emphasized speech signal.

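The zero-crossing-interval-area parameter described in this abstract lends itself to a compact sketch. The following is an illustrative toy, not the paper's algorithm: the `area_thresh` and `energy_thresh` values, the frame-energy test for silence, and the example signals are all assumptions for demonstration.

```python
import math

def zc_interval_areas(frame):
    """Area of each zero-crossing interval: the accumulated |sample|
    between consecutive sign changes (magnitude x inverse ZCR in spirit)."""
    areas, acc = [], 0.0
    for i, x in enumerate(frame):
        acc += abs(x)
        if i > 0 and (frame[i - 1] >= 0) != (x >= 0):
            areas.append(acc)
            acc = 0.0
    if acc > 0:
        areas.append(acc)
    return areas

def classify(frame, area_thresh=2.0, energy_thresh=0.05):
    """Toy V/U/S decision: silence by low energy, then voiced vs. unvoiced
    by the mean zero-crossing-interval area (thresholds are illustrative)."""
    energy = sum(x * x for x in frame) / len(frame)
    if energy < energy_thresh:
        return "silence"
    areas = zc_interval_areas(frame)
    mean_area = sum(areas) / max(len(areas), 1)
    return "voiced" if mean_area > area_thresh else "unvoiced"

# Low-frequency tone: few crossings, large interval areas -> voiced.
tone = [math.sin(2 * math.pi * 5 * n / 200) for n in range(200)]
# Rapidly alternating signal: a crossing every sample, tiny areas -> unvoiced.
buzz = [0.5 * (-1) ** n for n in range(200)]
# Near-zero signal -> silence.
quiet = [0.001] * 200
```

A voiced frame crosses zero rarely and with large magnitude, so its interval areas are large; an unvoiced frame crosses often with small accumulated magnitude, which is exactly what the area parameter captures.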

A Study on the Voiced, Unvoiced and Silence Classification (유.무성음 및 묵음 식별에 관한 연구)

  • 김명환
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1984.12a
    • /
    • pp.73-77
    • /
    • 1984
  • This paper reports on voiced-unvoiced-silence classification of speech for Korean speech recognition. We describe a method that uses a pattern recognition technique to classify a given speech segment into the three classes. The best result is obtained with the combination of ZCR, P1, and Ep, with a classification error rate of less than 1%.


Enhanced Voiced/Unvoiced Sound Classification for 3GPP2 SMV Employing GMM (3GPP2 SMV의 실시간 유/무성음 분류 성능 향상을 위한 Gaussian Mixture Model 기반 연구)

  • Song, Ji-Hyun;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.5
    • /
    • pp.111-117
    • /
    • 2008
  • In this paper, we propose an approach to improve the performance of voiced/unvoiced (V/UV) decisions in background noise environments for the selectable mode vocoder (SMV) of 3GPP2. We first present an analysis of the features and the classification method adopted in the SMV. Feature vectors applied to the GMM are then selected from relevant parameters of the SMV for efficient voiced/unvoiced classification. To evaluate the proposed algorithm, experiments were carried out under various noise environments; the proposed method yields better results than the conventional scheme of the SMV.
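As a rough illustration of GMM-based V/UV scoring, the sketch below evaluates a scalar feature (a stand-in for the SMV-derived feature vectors in the abstract) under hand-set one-dimensional GMMs. The component weights, means, and variances are invented for the example, not trained and not from the paper.

```python
import math

def gmm_loglik(x, comps):
    """Log-likelihood of a scalar feature x under a 1-D GMM given as
    a list of (weight, mean, variance) components."""
    p = sum(w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for w, m, v in comps)
    return math.log(p)

# Hand-set illustrative models: voiced frames tend toward high normalized
# energy, unvoiced frames toward low (parameters are invented, not trained).
VOICED_GMM = [(0.6, 0.8, 0.02), (0.4, 0.5, 0.05)]
UNVOICED_GMM = [(0.7, 0.1, 0.01), (0.3, 0.3, 0.04)]

def classify_frame(energy):
    """Pick the class whose GMM assigns the higher likelihood."""
    lv = gmm_loglik(energy, VOICED_GMM)
    lu = gmm_loglik(energy, UNVOICED_GMM)
    return "voiced" if lv > lu else "unvoiced"
```

In practice the per-class GMMs would be trained (e.g. by EM) on labeled frames of multi-dimensional SMV features; the decision rule, a likelihood comparison per frame, stays the same.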

Real-time implementation and performance evaluation of speech classifiers in speech analysis-synthesis

  • Kumar, Sandeep
    • ETRI Journal
    • /
    • v.43 no.1
    • /
    • pp.82-94
    • /
    • 2021
  • In this work, six voiced/unvoiced speech classifiers based on the autocorrelation function (ACF), average magnitude difference function (AMDF), cepstrum, weighted ACF (WACF), zero-crossing rate and energy of the signal (ZCR-E), and neural networks (NNs) have been simulated and implemented in real time using the TMS320C6713 DSP starter kit. These speech classifiers have been integrated into a linear-predictive-coding-based speech analysis-synthesis system, and their performance has been compared in terms of voiced/unvoiced classification accuracy, speech quality, and computation time. The classification accuracy and speech quality results show that the NN-based speech classifier performs better than the ACF-, AMDF-, cepstrum-, WACF-, and ZCR-E-based classifiers in both clean and noisy environments. The computation time results show that the AMDF-based classifier is computationally simplest, so its computation time is the lowest, while that of the NN-based classifier is the greatest.
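The ACF and AMDF classifiers named in this abstract can be sketched as follows; the lag range, the 0.5 peak threshold, and the test signals are illustrative assumptions, not values from the paper.

```python
import math
import random

def acf(frame, lag):
    """Autocorrelation of the frame at a given lag."""
    return sum(frame[n] * frame[n + lag] for n in range(len(frame) - lag))

def amdf(frame, lag):
    """Average magnitude difference function at a given lag:
    near zero when the lag matches the pitch period."""
    m = len(frame) - lag
    return sum(abs(frame[n] - frame[n + lag]) for n in range(m)) / m

def is_voiced_acf(frame, min_lag=20, max_lag=120, thresh=0.5):
    """Voiced if the strongest normalized ACF peak in the assumed
    pitch-lag range is large (periodic signals correlate with themselves)."""
    r0 = acf(frame, 0)
    if r0 == 0.0:
        return False
    return max(acf(frame, lag) / r0 for lag in range(min_lag, max_lag)) > thresh

# A periodic tone (period 40 samples) reads as voiced; seeded noise does not.
tone = [math.sin(2 * math.pi * n / 40) for n in range(400)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(400)]
```

The AMDF is the mirror image of the ACF: it dips at the pitch period instead of peaking, which is why it is computationally cheaper (no multiplications) yet serves the same voicing decision.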

Voiced/Unvoiced/Silence Classification of Speech Signal by Level Crossing and DPCM (Level Crossing과 DPCM을 사용한 유성음/무성음/묵음의 분류)

  • Kim, Jin-Young;Sung, Koeng-Mo
    • Proceedings of the KIEE Conference
    • /
    • 1987.07b
    • /
    • pp.1615-1618
    • /
    • 1987
  • This paper proposes a new algorithm for classifying speech frames as voiced, unvoiced, or silence, using parameters extracted from the time-domain behavior of the speech signal. The parameters used in this paper are the absolute magnitude, the sum of peaks larger than a reference level (T-peak), the ratio of T-peak to absolute magnitude, and the magnitude of the DPCM output signal. Using these parameters, speech frames are more easily classified as voiced, unvoiced, or silence.

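The four time-domain parameters listed in this abstract can be sketched roughly as below. The peak-picking rule, the reference level of 0.3, and the use of a first-order predictor for the DPCM residual are illustrative assumptions.

```python
import math

def frame_features(frame, level=0.3):
    """Four time-domain parameters: absolute magnitude, T-peak (sum of
    local maxima exceeding the reference level), the T-peak/magnitude
    ratio, and the magnitude of a first-order DPCM residual."""
    abs_mag = sum(abs(x) for x in frame)
    t_peak = sum(frame[i] for i in range(1, len(frame) - 1)
                 if frame[i] > level
                 and frame[i] >= frame[i - 1] and frame[i] >= frame[i + 1])
    dpcm = sum(abs(frame[i] - frame[i - 1]) for i in range(1, len(frame)))
    ratio = t_peak / abs_mag if abs_mag else 0.0
    return abs_mag, t_peak, ratio, dpcm

# Periodic tone, period 40 samples over 200 samples: five peaks of height 1.
tone = [math.sin(2 * math.pi * n / 40) for n in range(200)]
```

A slowly varying voiced frame gives a DPCM residual that is small relative to its absolute magnitude, while a noise-like unvoiced frame gives a comparatively large one, which is what makes the residual useful as a fourth discriminant.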

Improvement of an Automatic Segmentation for TTS Using Voiced/Unvoiced/Silence Information (유/무성/묵음 정보를 이용한 TTS용 자동음소분할기 성능향상)

  • Kim Min-Je;Lee Jung-Chul;Kim Jong-Jin
    • MALSORI
    • /
    • no.58
    • /
    • pp.67-81
    • /
    • 2006
  • For a large corpus of time-aligned data, HMM-based approaches are most widely used for automatic segmentation, providing a consistent and accurate phone labeling scheme. There are two training methods for HMMs. The flat-start method minimizes human interference but has low accuracy; the bootstrap method has high accuracy but requires manual segmentation. In this paper, a new algorithm is proposed to minimize manual work and improve the performance of automatic segmentation. In the first phase, voiced/unvoiced/silence classification is performed for each speech frame. In the second phase, the phoneme sequence is dynamically aligned to the voiced/unvoiced/silence sequence according to acoustic-phonetic rules. Finally, using the segmented speech data as a bootstrap, HMM-based phoneme model parameters are trained. For the performance test, the hand-labeled ETRI speech DB was used. The experimental results showed that our algorithm achieved a 10% improvement in segmentation accuracy within a 20 ms tolerable error range, and a 30% improvement for unvoiced consonants in particular.


Speaker Recognition Performance Improvement by Voiced/Unvoiced Classification and Heterogeneous Feature Combination (유/무성음 구분 및 이종적 특징 파라미터 결합을 이용한 화자인식 성능 개선)

  • Kang, Jihoon;Jeong, Sangbae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1294-1301
    • /
    • 2014
  • In this paper, separate probabilistic distribution models for voiced and unvoiced speech are estimated and utilized to improve speaker recognition performance. In addition to the conventional mel-frequency cepstral coefficients, skewness, kurtosis, and the harmonic-to-noise ratio are extracted and used for voiced speech intervals. The two scores for voiced and unvoiced speech are linearly fused with the optimal weight found by exhaustive search. The performance of the proposed speaker recognizer is compared with that of a conventional recognizer that uses mel-frequency cepstral coefficients and a unified probability distribution function based on the Gaussian mixture model. Experimental results show that the lower the number of Gaussian mixture components, the greater the performance improvement achieved by the proposed algorithm.
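The linear score fusion with an exhaustively searched weight described in this abstract can be sketched as follows; the zero decision threshold, the grid resolution, and the toy trial scores are assumptions for illustration.

```python
def fuse(voiced_score, unvoiced_score, w):
    """Linear fusion of the per-segment voiced and unvoiced scores."""
    return w * voiced_score + (1.0 - w) * unvoiced_score

def best_weight(trials, steps=100):
    """Exhaustive grid search for the fusion weight maximizing
    verification accuracy; trials are (voiced_score, unvoiced_score,
    is_target_speaker) tuples and the accept threshold is zero."""
    best_w, best_acc = 0.0, -1.0
    for k in range(steps + 1):
        w = k / steps
        correct = sum((fuse(v, u, w) > 0.0) == target
                      for v, u, target in trials)
        acc = correct / len(trials)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy trials in which the voiced-interval score is the reliable one,
# so the search should settle on a weight above 0.5.
trials = [(1.0, -1.0, True), (0.8, 0.5, True),
          (-0.9, 0.4, False), (-0.7, -0.2, False)]
```

On real data the weight would be tuned on a held-out development set rather than the evaluation trials, but the search itself is exactly this one-dimensional sweep.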

A Study Of The Meaningful Speech Sound Block Classification Based On The Discrete Wavelet Transform (Discrete Wavelet Transform을 이용한 음성 추출에 관한 연구)

  • Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.2905-2907
    • /
    • 1999
  • Meaningful speech sound block classification provides very important information for speech recognition. The classification technique described here is based on the DWT (discrete wavelet transform), which provides a faster algorithm and a useful, compact solution for the pre-processing stage of speech recognition. The algorithm is applied to unvoiced/voiced classification and denoising.

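A one-level Haar DWT, the simplest instance of the DWT family used in the two wavelet papers above, can illustrate why subband energies separate voiced-like from noise-like frames; the energy-ratio statistic and the example signals are illustrative assumptions, not the papers' methods.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: lowpass (approximation) and highpass
    (detail) coefficients, assuming an even-length input."""
    s = math.sqrt(2.0)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s
              for i in range(len(signal) // 2)]
    return approx, detail

def subband_energy_ratio(frame):
    """Fraction of frame energy in the lowpass band: near 1 for slowly
    varying (voiced-like) frames, near 0 for noise-like ones."""
    approx, detail = haar_dwt(frame)
    ea = sum(x * x for x in approx)
    ed = sum(x * x for x in detail)
    return ea / (ea + ed) if ea + ed else 0.0

# Smooth tone vs. a signal alternating at the sample rate.
low = [math.sin(2 * math.pi * n / 64) for n in range(128)]
high = [(-1.0) ** n for n in range(128)]
```

Because the Haar transform is just paired sums and differences, the subband split costs O(n) additions, which is the "fast, compact" property the abstract attributes to DWT pre-processing.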