• Title/Summary/Keyword: Voiced/unvoiced classification


Improving LD-CELP using frame classification and modified synthesis filter (프레임 분류와 합성필터의 변형을 이용한 적은 지연을 갖는 음성 부호화기의 성능)

  • 임은희;이주호;김형명
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.6 / pp.1430-1437 / 1996
  • A low-delay code-excited linear predictive speech coder (LD-CELP) at bit rates under 8 kbps is considered. We try to improve the performance of the speech coder with frame-type-dependent modification of the synthesis filter. We first classify frames into three groups: voiced, unvoiced, and onset. For voiced and unvoiced frames, the spectral envelope of the synthesis filter is adapted to the phonetic characteristics. For transition frames from unvoiced to voiced, a synthesis filter interpolated with a bias filter is used. The proposed vocoder produced clearer sound than other pre-existing LD-CELP vocoders at a similar delay level.
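
As a rough illustration of the kind of frame-type decision such a coder relies on (the abstract does not give the actual rule), the sketch below labels a frame as voiced, unvoiced, or onset from short-time energy and zero-crossing rate; the features and thresholds are assumptions for illustration only.

```python
import numpy as np

def classify_frame(frame, prev_label, energy_thresh=1e-3, zcr_thresh=0.25):
    """Toy voiced/unvoiced/onset decision for one frame of speech samples.

    The features (energy, zero-crossing rate) and thresholds are illustrative
    assumptions; the paper's abstract does not specify its decision rule.
    """
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # crossings per sample

    if energy > energy_thresh and zcr < zcr_thresh:
        label = "voiced"      # strong, low-frequency-dominated frame
    else:
        label = "unvoiced"    # weak or noise-like frame

    # a frame that switches from unvoiced to voiced is treated as an onset
    if prev_label == "unvoiced" and label == "voiced":
        label = "onset"
    return label
```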


Variable Rate IMBE-LP Coding Algorithm Using Band Information (주파수대역 정보를 이용한 가변률 IMBE-LP 음성부호화 알고리즘)

  • Park, Man-Ho;Bae, Geon-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.5 / pp.576-582 / 2001
  • The Multi-Band Excitation (MBE) speech coder uses a different approach to representing the excitation signal. It replaces the frame-based single voiced/unvoiced classification of a classical speech coder with a set of such decisions over harmonic intervals in the frequency domain. This allows each speech segment to be a mixture of voiced and unvoiced components, and it improves synthetic speech quality by reducing the decision errors that can occur in a frame-based single voiced/unvoiced decision when the input speech is degraded by noise. IMBE-LP, an improved version of MBE with linear prediction, represents the spectral information of the MBE model with linear prediction coefficients to obtain a low bit rate of 2.4 kbps. In this paper, we propose a variable-rate IMBE-LP vocoder that has a lower bit rate than IMBE-LP without degrading the synthetic speech quality. To determine the LP order, it uses the spectral band information of the MBE model, which reflects the characteristics of the input speech. Experimental results are given together with our findings and discussion.
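
A minimal sketch of the band-wise voicing decision that distinguishes the MBE model from a single frame-level decision; the band layout, the peak-energy measure, and the threshold below are illustrative assumptions, not the coder's specification.

```python
import numpy as np

def bandwise_voicing(frame, n_bands=8, ratio_thresh=0.6):
    """Toy per-band voiced/unvoiced decision in the frequency domain.

    Compares the energy concentrated in the strongest (harmonic-like) bins of
    each band with the total band energy; a high ratio marks the band as
    voiced. Purely illustrative, not the MBE analysis procedure.
    """
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    decisions = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spec[lo:hi]
        if band.sum() == 0:
            decisions.append(False)
            continue
        k = max(1, len(band) // 4)                  # top 25% of bins
        peak_energy = np.sort(band)[-k:].sum()
        decisions.append(peak_energy / band.sum() > ratio_thresh)
    return decisions   # True = voiced band, False = unvoiced band
```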


Voiced/Unvoiced/Silence Classification of Speech Signal Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 유성음/무성음/묵음 분류)

  • 손영호
    • Proceedings of the Acoustical Society of Korea Conference / 1998.08a / pp.449-453 / 1998
  • In general, a speech signal is classified into three types according to its waveform characteristics: voiced sounds, whose waveform is quasi-periodic; unvoiced sounds, which are noise-like and lack periodicity; and silence, which corresponds to background noise. In conventional voiced/unvoiced/silence classification methods, pitch information, energy, and zero-crossing rate have been widely used as classification parameters. In this paper, we propose a voiced/unvoiced/silence classification algorithm that uses spectral changes in the wavelet-transformed speech signal as its parameter, and we examine the detection results obtained with the proposed algorithm and the problems that arise from them.
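
The abstract names spectral change in the wavelet-transformed signal as the classification parameter but gives no formula; the sketch below shows one simple way such a voiced/unvoiced/silence decision could be built on a one-level Haar DWT, with thresholds that are assumptions for illustration.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def vus_label(frame, silence_thresh=1e-4, voiced_ratio=4.0):
    """Toy voiced/unvoiced/silence decision from wavelet sub-band energies."""
    approx, detail = haar_dwt(frame)
    e_low, e_high = np.sum(approx ** 2), np.sum(detail ** 2)
    if e_low + e_high < silence_thresh:
        return "silence"
    # voiced speech concentrates energy in the low-frequency (approximation) band
    return "voiced" if e_low > voiced_ratio * e_high else "unvoiced"
```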


An Automatic Segmentation System Based on HMM and Correction Algorithm (HMM 및 보정 알고리즘을 이용한 자동 음성 분할 시스템)

  • Kim, Mu-Jung;Kwon, Chul-Hong
    • Speech Sciences / v.9 no.4 / pp.265-274 / 2002
  • In this paper we propose an automatic segmentation system that outputs the time-alignment information of phoneme boundaries using Viterbi search with HMMs (Hidden Markov Models) and corrects these results with a UVS (unvoiced/voiced/silence) classification algorithm. We select a set of 39 monophones and a set of 647 extended phones for the HMM models. For the UVS classification we use feature parameters such as ZCR (zero-crossing rate), log energy, and spectral distribution. The result of forced alignment using the extended phone set is 11% better than that of the monophone set. The UVS classification algorithm shows high performance in correcting the segmentation results.
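
A hedged sketch of the correction idea: compute two of the features named in the abstract (ZCR and log energy) per frame, and move an HMM phone boundary to the nearest frame where the unvoiced/voiced/silence label changes. The search window and thresholds are assumptions, not the paper's values.

```python
import numpy as np

def frame_features(frame, eps=1e-10):
    """ZCR and log energy (two of the features named in the abstract)."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    log_e = np.log(np.sum(frame ** 2) + eps)
    return zcr, log_e

def correct_boundary(frames, hmm_boundary, search=3,
                     zcr_thresh=0.25, energy_thresh=-8.0):
    """Move an HMM phone boundary to the nearest frame where the
    unvoiced/voiced/silence label changes. Thresholds are illustrative."""
    def label(i):
        zcr, log_e = frame_features(frames[i])
        if log_e < energy_thresh:
            return "S"
        return "U" if zcr > zcr_thresh else "V"

    lo = max(1, hmm_boundary - search)
    hi = min(len(frames) - 1, hmm_boundary + search)
    for i in range(lo, hi + 1):
        if label(i) != label(i - 1):      # label change = likely true boundary
            return i
    return hmm_boundary
```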


An Experiment of a Spoken Digits-Recognition System (숫자음성 자동 인식에 관한 일실험)

  • ;安居院猛
    • Journal of the Korean Institute of Telematics and Electronics / v.15 no.6 / pp.23-28 / 1978
  • This paper describes a speech recognition system for ten isolated spoken digits. In this system, acoustic parameters such as zero-crossing rate, log energy, and three formant frequencies estimated by the linear prediction method were extracted for classification and recognition purposes. The former two parameters were used for the classification of unvoiced consonants, and the formant frequencies were used for the recognition of vowels and voiced consonants. Promising recognition results were obtained in this experiment for ten digit utterances spoken by a male speaker.
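
A small sketch of the formant-based part of such a recognizer: LPC coefficients by the autocorrelation method and formant estimates from the LPC polynomial roots. This is a generic textbook procedure, not the 1978 system's implementation; the LPC order and the low-frequency cutoff are assumptions.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients via the autocorrelation method (normal equations)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))        # A(z) = 1 - sum(a_k z^-k)

def formants(frame, fs, order=10, n_formants=3):
    """Estimate the first few formant frequencies from the LPC poles."""
    roots = np.roots(lpc(frame, order))
    roots = roots[np.imag(roots) > 0]          # keep one root of each pair
    freqs = np.angle(roots) * fs / (2 * np.pi) # pole angle -> Hz
    freqs = np.sort(freqs[freqs > 90.0])       # drop near-DC poles
    return freqs[:n_formants]
```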


A Study on the Phonemic Analysis for Korean Speech Segmentation (한국어 음소분리에 관한 연구)

  • Lee, Sou-Kil;Song, Jeong-Young
    • The Journal of the Acoustical Society of Korea / v.23 no.4E / pp.134-139 / 2004
  • It is generally known that accurate segmentation is essential in speech recognition, both for individual words and for continuous utterances. Techniques are being developed to classify voiced and unvoiced sounds, and to distinguish plosives and fricatives, but a method for accurate recognition of the phonemes has not yet been scientifically established. Therefore, in this study we analyze the Korean language using the classification of 'Hunminjeongeum' and contemporary phonetics. Using the frequency band, Mel band, and Mel cepstrum, we extract notable features of the phonemes from Korean speech and segment the speech into phoneme units in order to normalize them. Finally, through analysis and verification, we intend to set up a phonemic segmentation system that can be applied to both individual words and continuous utterances.
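
For context, the sketch below computes generic Mel-band energies and a Mel cepstrum for one frame; it is a standard construction and not the feature extraction actually used in the paper. The filter count and cepstrum order are assumptions.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_cepstrum(frame, fs, n_filters=20, n_ceps=12):
    """Mel-band log energies and Mel cepstral coefficients for one frame."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    # triangular filters spaced evenly on the Mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, len(spec)))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        if center > left:
            fbank[i, left:center] = (np.arange(left, center) - left) / (center - left)
        if right > center:
            fbank[i, center:right] = (right - np.arange(center, right)) / (right - center)
    band_energy = np.log(fbank @ spec + 1e-10)
    # DCT-II of the log Mel energies gives the Mel cepstrum
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), n + 0.5) / n_filters)
    return band_energy, dct @ band_energy
```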

A study on the Voiced, Unvoiced and Silence Classification (유, 무성음 및 묵음 식별에 관한 연구)

  • 김명환;김순협
    • The Journal of the Acoustical Society of Korea / v.3 no.2 / pp.46-58 / 1984
  • This paper studies voiced, unvoiced, and silence classification for Korean speech recognition. A pattern-recognition approach is used to classify a given speech interval into one of the three signal classes. The analysis parameters used are the zero-crossing rate of the speech signal, the log energy, the normalized first autocorrelation coefficient, the first predictor coefficient obtained from linear prediction analysis, and the energy of the prediction error. The class of a speech interval is then decided with a minimum-distance rule derived under the assumption that the measured parameters are distributed according to a multidimensional Gaussian probability density function. Classification experiments combining the measured parameters in various ways showed that a classification error rate of less than 1% was obtained when the zero-crossing rate, the first predictor coefficient, and the prediction-error energy were used as the measurement parameters.
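
The translated abstract is concrete enough to sketch: five measurements per frame and a minimum-distance decision under a per-class Gaussian model. The LPC order, the exact feature formulas, and the use of full covariance matrices below are assumptions; class means and covariances would have to be estimated from labelled training frames.

```python
import numpy as np

def features(frame, order=4, eps=1e-10):
    """The five measurements named in the abstract: zero-crossing rate,
    log energy, normalized first autocorrelation, first LPC predictor
    coefficient, and (log) prediction-error energy. The formulas and the
    LPC order are standard textbook choices, not necessarily the paper's."""
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    log_e = np.log(np.sum(frame ** 2) + eps)
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    rho1 = r[1] / (r[0] + eps)                    # normalized first autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R + eps * np.eye(order), r[1:order + 1])
    err = r[0] - a @ r[1:order + 1]               # prediction-error energy
    return np.array([zcr, log_e, rho1, a[0], np.log(err + eps)])

def classify(x, means, covs, labels=("voiced", "unvoiced", "silence")):
    """Minimum (Mahalanobis) distance decision under a Gaussian model per class."""
    d = [(x - m) @ np.linalg.inv(c) @ (x - m) for m, c in zip(means, covs)]
    return labels[int(np.argmin(d))]
```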


Subband Based Spectrum Subtraction Algorithm (서브밴드에 기반한 스펙트럼 차감 알고리즘)

  • Choi, Jae-Seung
    • The Journal of the Korea institute of electronic communication sciences / v.8 no.4 / pp.555-560 / 2013
  • This paper first proposes a classification algorithm that detects voiced, unvoiced, and silence signals at each frame using distance measures, logarithmic power, and root-mean-square methods, and then a spectrum subtraction algorithm based on a subband filter. The proposed algorithm subtracts the spectra of white noise and street noise from the noisy signal, frame by frame, based on the subband filter. The proposed spectrum subtraction algorithm is evaluated using the speech and noise data of the Aurora-2 database. Based on signal-to-noise ratio (SNR) measurements, the experiments confirm that the proposed algorithm is effective for speech contaminated by noise: the improvement in output SNR was approximately 2.1 dB for white noise and 1.91 dB for street noise.
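
A minimal sketch of frame-level spectral subtraction applied per subband, assuming a noise power spectrum estimated elsewhere (for example, from frames the voiced/unvoiced/silence detector labels as silence). The number of bands, over-subtraction factor, and spectral floor are illustrative, not the paper's parameters.

```python
import numpy as np

def subband_spectral_subtraction(noisy, noise_psd, n_bands=4,
                                 over_sub=1.5, floor=0.02):
    """Toy per-subband spectral subtraction for one frame.

    'noise_psd' must have the same length as the rfft of the frame and is
    assumed to come from a separate noise estimator. All parameters here
    are illustrative assumptions.
    """
    n = len(noisy)
    spec = np.fft.rfft(noisy * np.hanning(n))
    power = np.abs(spec) ** 2
    clean_power = np.empty_like(power)
    edges = np.linspace(0, len(power), n_bands + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sub = power[lo:hi] - over_sub * noise_psd[lo:hi]
        clean_power[lo:hi] = np.maximum(sub, floor * power[lo:hi])  # spectral floor
    # keep the noisy phase, rebuild the time-domain frame
    return np.fft.irfft(np.sqrt(clean_power) * np.exp(1j * np.angle(spec)), n)
```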

A Study on Extracting Valid Speech Sounds by the Discrete Wavelet Transform (이산 웨이브렛 변환을 이용한 유효 음성 추출에 관한 연구)

  • Kim, Jin-Ok;Hwang, Dae-Jun;Baek, Han-Uk;Jeong, Jin-Hyeon
    • The KIPS Transactions:PartB / v.9B no.2 / pp.231-236 / 2002
  • The classification of speech-sound blocks follows from the multi-resolution analysis property of the discrete wavelet transform, which is used to reduce the computational time of the pre-processing stage of speech recognition. A merging algorithm is proposed to extract valid speech-sounds in terms of position and frequency range. It performs unvoiced/voiced classification and denoising. Since the merging algorithm can determine the processing parameters relating to voice only and is independent of system noise, it is useful for extracting valid speech-sounds. The merging algorithm adapts to arbitrary system noise, achieves an excellent denoising signal-to-noise ratio, and allows useful system tuning for implementation.
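
A toy version of the merging idea described here: mark frames whose wavelet sub-band energy exceeds a threshold as speech and merge nearby marked frames into valid speech blocks. The Haar DWT, the threshold, and the merging gap are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def speech_blocks(frames, energy_thresh=1e-3, gap=2):
    """Merge frames with sufficient wavelet sub-band energy into speech blocks.

    'frames' is a list of 1-D sample arrays; returns (start, end) frame index
    pairs (end exclusive). Thresholds are illustrative assumptions.
    """
    def is_speech(frame):
        x = frame[: len(frame) // 2 * 2]
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # one-level Haar DWT
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return np.sum(approx ** 2) + np.sum(detail ** 2) > energy_thresh

    marks = [is_speech(f) for f in frames]
    blocks, start = [], None
    for i, m in enumerate(marks):
        if m and start is None:
            start = i
        elif not m and start is not None:
            # close the block only if the silent gap is long enough
            if i + gap >= len(marks) or not any(marks[i:i + gap]):
                blocks.append((start, i))
                start = None
    if start is not None:
        blocks.append((start, len(marks)))
    return blocks
```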

A Merging Algorithm with the Discrete Wavelet Transform to Extract Valid Speech-Sounds (이산 웨이브렛 변환을 이용한 유효 음성 추출을 위한 머징 알고리즘)

  • Kim, Jin-Ok;Hwang, Dae-Jun;Paek, Han-Wook;Chung, Chin-Hyun
    • Journal of KIISE:Computing Practices and Letters / v.8 no.3 / pp.289-294 / 2002
  • A valid speech-sound block can be classified to provide important information for speech recognition. The classification of speech-sound blocks follows from the MRA (multi-resolution analysis) property of the DWT (discrete wavelet transform), which is used to reduce the computational time of the pre-processing stage of speech recognition. A merging algorithm is proposed to extract valid speech-sounds in terms of position and frequency range. It needs some numerical methods for an adaptive DWT implementation and performs unvoiced/voiced classification and denoising. Since the merging algorithm can determine the processing parameters relating to voice only and is independent of system noise, it is useful for extracting valid speech-sounds. The merging algorithm adapts to arbitrary system noise and achieves an excellent denoising SNR (signal-to-noise ratio).
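
Since this companion paper also emphasizes denoising, the sketch below shows a generic one-level wavelet denoising step: soft-thresholding of the detail coefficients, with the threshold adapted to an estimated noise level. This is the standard median-absolute-deviation / universal-threshold recipe, not the authors' method.

```python
import numpy as np

def haar_denoise(x):
    """Generic one-level Haar wavelet denoising by soft-thresholding.

    The threshold adapts to the noise level estimated from the detail
    coefficients; shown only to illustrate the kind of denoising step the
    abstract mentions, not the paper's algorithm.
    """
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)

    sigma = np.median(np.abs(detail)) / 0.6745        # noise level estimate
    thresh = sigma * np.sqrt(2 * np.log(len(x)))      # universal threshold
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)

    # inverse one-level Haar DWT
    out = np.empty_like(x, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```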