• Title/Summary/Keyword: Speech improvement


Performance improvement and Realtime implementation in CELP Coder (CELP 보코더의 성능 개선 및 실시간 구현)

  • 정창경
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.199-204 / 1994
  • In this paper, we investigate a CELP speech coding algorithm using efficient pseudo-stochastic block codes, an adaptive codebook, and an improved fixed-gain codebook. Pseudo-stochastic block codes are stochastically populated block codes in which adjacent codewords in the innovation codebook are non-independent. The adaptive codebook is built from previous prediction speech data held in a storage shift register. This CELP coding algorithm enables coding of toll-quality speech at bit rates from 4.8 kbit/s to 9.6 kbit/s, and it was implemented in real time on a TMS320C30 microprocessor.


Speech Quality Improvement by Speech Quality Evaluation (한국어 음성합성기 성능평가에 의한 합성 음질개선)

  • Yang Hee-Sik;Hahn Minsoo;Kim Jong-Jin
    • Proceedings of the KSPS conference / 2002.11a / pp.37-40 / 2002
  • In this paper, we outline a scheme for evaluating the intelligibility and naturalness of Korean speech synthesizers and summarize the results of applying it to two different Korean synthesizers. Based on these evaluation results, we present actual examples of speech-quality improvement and suggest directions for further performance improvement of Korean synthesizers.


A Novel Algorithm for Discrimination of Voiced Sounds (유성음 구간 검출 알고리즘에 관한 연구)

  • Jang, Gyu-Cheol;Woo, Soo-Young;Yoo, Chang-D.
    • Speech Sciences / v.9 no.3 / pp.35-45 / 2002
  • A simple algorithm for discriminating voiced sounds in speech is proposed. In addition to low-frequency energy and zero-crossing rate (ZCR), both of which have been widely used for identifying voiced sounds, the proposed algorithm incorporates pitch variation to improve the discrimination rate. Evaluation on the TIMIT corpus shows an improvement of 13% in the discrimination of voiced phonemes over the traditional algorithm using only energy and ZCR.

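
A minimal sketch of such a voiced/unvoiced decision, combining low-frequency energy, ZCR, and an autocorrelation pitch-stability check. The thresholds, cutoff, and 60-400 Hz pitch range below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def zero_crossing_rate(frame):
    # Fraction of sample pairs whose sign flips.
    return np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0

def low_freq_energy(frame, fs, cutoff=1000.0):
    # Spectral energy below `cutoff` Hz.
    spec = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return spec[freqs <= cutoff].sum()

def pitch_period(frame, fs, fmin=60.0, fmax=400.0):
    # Autocorrelation peak inside the plausible pitch-lag range.
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    return lo + int(np.argmax(ac[lo:hi]))

def is_voiced(frame, fs, prev_pitch=None, energy_thresh=1000.0,
              zcr_thresh=0.1, pitch_tol=5):
    # Voiced speech: strong low-frequency energy, low ZCR, stable pitch.
    if low_freq_energy(frame, fs) < energy_thresh:
        return False
    if zero_crossing_rate(frame) > zcr_thresh:
        return False
    if prev_pitch is not None:
        return abs(pitch_period(frame, fs) - prev_pitch) <= pitch_tol
    return True
```

A low-frequency tone passes all three checks; wideband or high-frequency content fails on energy or ZCR.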

Improved Excitation Coding for 13 kbps Variable Rate QCELP Coder

  • Kang, Sangwon;Lee, Dong-Ho
    • The Journal of the Acoustical Society of Korea / v.16 no.3E / pp.3-6 / 1997
  • This paper reports on the optimal design of the excitation codebook in the 13 kbps variable rate QCELP coder for Korean speech. We present two optimal excitation codebooks consisting of 128 and 556 samples, respectively. A database of Korean speech is used for the design and testing of the improved codebook, and a quasi-Newton optimization algorithm was developed to design it. The optimized codebook, which remains sparse, produces average gains of 0.84 dB in SNR and 0.45 dB in SEGSNR. Informal listening tests confirm the improvement in speech quality.

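
The SNR and segmental SNR (SEGSNR) measures quoted above can be computed as follows. The 160-sample frame and the [-10, 35] dB per-frame clamping range are common conventions assumed here, not values taken from the paper.

```python
import numpy as np

def snr_db(ref, test):
    # Global signal-to-noise ratio in dB.
    noise = ref - test
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def segsnr_db(ref, test, frame_len=160, floor=-10.0, ceil=35.0):
    # Segmental SNR: mean of per-frame SNRs, clamped to a sane range
    # so silent or badly damaged frames do not dominate the average.
    vals = []
    for i in range(0, len(ref) - frame_len + 1, frame_len):
        r, t = ref[i:i + frame_len], test[i:i + frame_len]
        sig_e = np.sum(r ** 2)
        noise_e = np.sum((r - t) ** 2)
        if sig_e == 0 or noise_e == 0:
            continue
        vals.append(np.clip(10.0 * np.log10(sig_e / noise_e), floor, ceil))
    return float(np.mean(vals))
```

For a uniformly scaled error the two measures coincide; SEGSNR diverges from SNR when distortion is unevenly distributed across frames.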

Voice Recognition Performance Improvement using the Convergence of Voice signal Feature and Silence Feature Normalization in Cepstrum Feature Distribution (음성 신호 특징과 셉스트럽 특징 분포에서 묵음 특징 정규화를 융합한 음성 인식 성능 향상)

  • Hwang, Jae-Cheon
    • Journal of the Korea Convergence Society / v.8 no.5 / pp.13-17 / 2017
  • With existing feature extraction methods for speech signals, recognition rates suffer because the threshold separating speech from non-speech is unclear. This article presents a modeling method for improving speech recognition performance that combines feature extraction for speech frames with silence feature normalization applied to non-speech frames. The proposed method minimizes the effect of noise: speech-signal feature extraction is performed on each speech frame, and silence feature normalization is fused in for the remaining frames. Because the method works from an energy spectrum similar in character to entropy, the features are less susceptible to noise, and the silence feature normalization improves performance in terms of signal-to-noise ratio. A fixed threshold in the cepstral domain classifies speech versus non-speech. In a performance comparison against a CHMM-based HMM system, the recognition rate improved by 2.7%p in speaker-dependent recognition and by 0.7%p in speaker-independent recognition.
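
A loose sketch of the silence-feature-normalization idea: classify frames by energy, then pull silence-frame cepstra toward their own mean so that background noise in non-speech regions contributes less variability. The 50/50 smoothing weight and the energy threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def silence_feature_normalize(feats, energies, energy_thresh):
    """feats: (num_frames, dim) cepstral features; energies: per-frame
    log energy. Frames below the threshold are treated as silence and
    smoothed toward the silence mean, damping noise-driven variation."""
    feats = feats.copy()
    silence = energies < energy_thresh
    if silence.any():
        sil_mean = feats[silence].mean(axis=0)
        feats[silence] = 0.5 * feats[silence] + 0.5 * sil_mean
    return feats
```

Speech frames pass through untouched; only the silence frames are shrunk toward their common mean.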

Compromised feature normalization method for deep neural network based speech recognition (심층신경망 기반의 음성인식을 위한 절충된 특징 정규화 방식)

  • Kim, Min Sik;Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.12 no.3 / pp.65-71 / 2020
  • Feature normalization is a method to reduce the effect of environmental mismatch between the training and test conditions through the normalization of statistical characteristics of acoustic feature parameters. It demonstrates excellent performance improvement in the traditional Gaussian mixture model-hidden Markov model (GMM-HMM)-based speech recognition system. However, in a deep neural network (DNN)-based speech recognition system, minimizing the effects of environmental mismatch does not necessarily lead to the best performance improvement. In this paper, we attribute the cause of this phenomenon to information loss due to excessive feature normalization. We investigate whether there is a feature normalization method that maximizes the speech recognition performance by properly reducing the impact of environmental mismatch, while preserving useful information for training acoustic models. To this end, we introduce the mean and exponentiated variance normalization (MEVN), which is a compromise between the mean normalization (MN) and the mean and variance normalization (MVN), and compare the performance of DNN-based speech recognition system in noisy and reverberant environments according to the degree of variance normalization. Experimental results reveal that a slight performance improvement is obtained with the MEVN over the MN and the MVN, depending on the degree of variance normalization.
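
One natural reading of MEVN is per-dimension normalization by an exponentiated standard deviation, (x - mu) / sigma^alpha, so that alpha = 0 reduces to MN and alpha = 1 to full MVN; the paper's exact parameterization may differ, so treat this as a sketch of that reading.

```python
import numpy as np

def mevn(feats, alpha):
    """Mean and exponentiated variance normalization per feature dimension.
    alpha = 0 gives mean normalization (MN); alpha = 1 gives MVN.
    Assumes every dimension has nonzero variance."""
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    return (feats - mu) / np.power(sigma, alpha)
```

Intermediate alpha values shrink, but do not fully equalize, the per-dimension variances, which is the compromise the abstract describes.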

On Wavelet Transform Based Feature Extraction for Speech Recognition Application

  • Kim, Jae-Gil
    • The Journal of the Acoustical Society of Korea / v.17 no.2E / pp.31-37 / 1998
  • This paper proposes a feature extraction method using the wavelet transform for speech recognition. Speech recognition systems generally perform the recognition task based on speech features obtained via time-frequency representations such as the Short-Time Fourier Transform (STFT) and Linear Predictive Coding (LPC). In some respects these methods may not be suitable for representing highly complex speech characteristics, since they map the speech signal with the same frequency resolution at all frequencies. The wavelet transform overcomes some of these limitations: it captures the signal with fine time resolution at high frequencies and fine frequency resolution at low frequencies, which can be a significant advantage when analyzing highly localized speech events. Based on this motivation, this paper investigates the effectiveness of the wavelet transform for feature extraction aimed at enhancing speech recognition. The proposed method is implemented using the Sampled Continuous Wavelet Transform (SCWT), and its performance is tested on a speaker-independent isolated-word recognizer that discerns 50 Korean words. In particular, the effects of the mother wavelet employed, the number of voices per octave, and the size of the mother wavelet on the performance of the proposed method are investigated. Throughout the experiments, the performance of the proposed method is compared with the most prevalent conventional method, MFCC (Mel-frequency Cepstral Coefficients). The experiments show that the recognition performance of the proposed method is better than that of MFCC, but the improvement is marginal while, due to the dimensionality increase, the computational load of the proposed method is substantially greater than that of MFCC.

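
A crude sketch of wavelet-based feature extraction in this spirit: per-frame energies of a Morlet continuous wavelet transform at a few scales. The Morlet parameters, scale set, and framing below are illustrative choices, not the paper's SCWT front end.

```python
import numpy as np

def morlet(num_points, scale, w0=5.0):
    # Complex Morlet wavelet sampled at `num_points`, scaled by `scale`.
    t = np.arange(num_points) - (num_points - 1) / 2.0
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt_features(signal, scales, frame_len=256):
    """CWT power at each scale, averaged per frame, as a feature sequence.
    Returns an array of shape (num_frames, num_scales)."""
    rows = []
    for s in scales:
        wav = morlet(min(10 * int(s), len(signal)), s)
        rows.append(np.convolve(signal, wav, mode='same'))
    power = np.abs(np.array(rows)) ** 2
    num_frames = len(signal) // frame_len
    trimmed = power[:, :num_frames * frame_len]
    return trimmed.reshape(len(scales), num_frames, frame_len).mean(axis=2).T
```

A large scale corresponds to a low analysis frequency (roughly w0 * fs / (2 * pi * scale) Hz), so a low-frequency tone concentrates its energy in the large-scale feature column.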

PESQ-Based Selection of Efficient Partial Encryption Set for Compressed Speech

  • Yang, Hae-Yong;Lee, Kyung-Hoon;Lee, Sang-Han;Ko, Sung-Jea
    • ETRI Journal / v.31 no.4 / pp.408-418 / 2009
  • Adopting an encryption function in voice over Wi-Fi service incurs problems such as additional power consumption and degradation of communication quality. To overcome these problems, a partial encryption (PE) algorithm for compressed speech was recently introduced. However, from the security point of view, the partial encryption sets (PESs) of the conventional PE algorithm still have much room for improvement. This paper proposes a new selection method for finding a smaller PES while maintaining the security level of encrypted speech. The proposed PES selection method employs the perceptual evaluation of the speech quality (PESQ) algorithm to objectively measure the distortion of speech. The proposed method is applied to the ITU-T G.729 speech codec, and content protection capability is verified by a range of tests and a reconstruction attack. The experimental results show that encrypting only 20% of the compressed bitstream is sufficient to effectively hide the entire content of speech.
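
One way to frame the PES search above is as a greedy selection that keeps adding the bitstream field buying the most distortion per encrypted bit, until the decoded speech is damaged beyond a target level. This is a sketch, not the paper's procedure: `distortion_of` stands in for the PESQ-based distortion measurement, and the `Field` record is hypothetical rather than the actual G.729 bit allocation.

```python
from collections import namedtuple

# Hypothetical record for one codec bitstream field (e.g. an LSP or gain
# index); `impact` is only used by the stand-in distortion measure below.
Field = namedtuple('Field', 'name bits impact')

def select_partial_encryption_set(fields, distortion_of, target_distortion):
    """Greedily build a partial encryption set (PES): repeatedly add the
    field with the largest marginal distortion per encrypted bit, until
    `distortion_of(chosen)` reaches `target_distortion`."""
    chosen, remaining = [], list(fields)
    while remaining and distortion_of(chosen) < target_distortion:
        base = distortion_of(chosen)
        best = max(remaining,
                   key=lambda f: (distortion_of(chosen + [f]) - base) / f.bits)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

The per-bit weighting is what keeps the selected set small: a field with high perceptual impact but few bits is preferred over a bulky field with the same impact.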

Speech Outcomes of Submucous Cleft Palate Children With Double Opposing Z-Plasty Operation (Double Opposing Z-Plasty 수술 후의 점막하 구개열 아동의 말소리 개선에 관한 연구)

  • 최홍식;홍진희;김정홍;최성희;최재남;남지인
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.13 no.2 / pp.180-187 / 2002
  • Background and Objectives : The double opposing Z-plasty operation has been used to improve velopharyngeal function in submucous cleft palate. However, few reports on the resulting speech changes have been presented. The purpose of this study is to compare nasality, nasalance, and parental satisfaction before and after this operation and to assess how much speech improves. Materials and Methods : Ten submucous cleft palate children who underwent double opposing Z-plasty were analyzed. We retrospectively studied nasalance, auditorily perceived hypernasality, parental satisfaction, and speech evaluation using chart review, video tape, and telephone interview. Results : In 8 of the 10 patients, hypernasality was reduced, speech intelligibility was higher, and velum length increased by a mean of 0.35 points after the operation. Nasality improved (by 2.0 points) and the level of nasal emission decreased. Regarding satisfaction with the operation, the mean score was 2.8 (on a 5-point scale): 8 parents were satisfied with the resonance and 3 with articulation. The main reason for dissatisfaction was compensatory articulation. Conclusion : To improve speech in submucous cleft palate, speech therapy after this operation, as well as successful surgery, should be considered.
