• Title/Abstract/Keyword: Speech signals


On the Perceptually Important Phase Information in Acoustic Signal (인지에 중요한 음향신호의 위상에 대해)

    • The Journal of the Acoustical Society of Korea / v.19 no.7 / pp.28-33 / 2000
  • For efficient quantization of speech representations, it is common to incorporate perceptual characteristics of human hearing. However, the focus has been confined to the magnitude information of speech, and little attention has been paid to phase information. This paper presents a novel approach, termed perceptually irrelevant phase elimination (PIPE), to identify phase information of acoustic signals that is irrelevant to perception. The proposed method, which is based on the observation that the relative phase relationship within a critical band is perceptually important, is derived not only for stationary Fourier signals but also for harmonic signals. The proposed method is incorporated into an analysis/synthesis system based on a harmonic representation of speech, and subjective test results demonstrate the effectiveness of the proposed method.

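The abstract only states the core observation (relative phase within a critical band matters perceptually), not the actual PIPE formulation. Below is a minimal sketch of that observation, assuming one may discard each band's absolute phase while preserving the relative phase of bins inside the band; the band edges, function name, and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def keep_relative_phase(spectrum, band_edges):
    """Illustrative sketch: within each critical band, keep only the phase
    of each bin relative to the first bin of the band, discarding the
    band's absolute phase offset (assumed perceptually irrelevant)."""
    out = spectrum.copy()
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = spectrum[lo:hi]
        if len(band) == 0:
            continue
        ref_phase = np.angle(band[0])           # absolute phase of the band
        mag = np.abs(band)
        rel_phase = np.angle(band) - ref_phase   # relative phase is preserved
        out[lo:hi] = mag * np.exp(1j * rel_phase)
    return out

# toy usage: spectrum of a 1024-sample frame, a few illustrative band edges
x = np.random.randn(1024)
spec = np.fft.rfft(x)
edges = [0, 20, 50, 100, 200, 350, len(spec)]
modified = keep_relative_phase(spec, edges)
```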

CELP speech coder by the structure of multi-codebook (다중 코드북 구조를 이용한 CELP형 음성부호화기)

  • 박규정;한승조
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.1 / pp.23-33 / 2001
  • In this paper we propose a multi-codebook structure that can synthesize high-quality speech without increasing the computation of a CELP coder, and we design a 4.8 kbps CELP speech coder with the proposed codebook structure. The proposed multi-codebook structure consists of a basic codebook and a second codebook formed to strengthen the spectrum and pitch. The multi-codebook structure can represent gains accurately, since it represents excitation signals as the sum of the two codebooks and uses a separate gain for each. Therefore it can provide better speech quality than other conventional structures. In computer simulation of the 4.8 kbps CELP coder designed with the proposed codebook structure, its segSNR was 0.81 dB higher than that of the DoD CELP coder at the same transmission rate.

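The abstract describes the excitation as the sum of two codebook vectors with separate gains. Below is a minimal sketch of that idea, assuming a joint least-squares gain solution and an exhaustive index search; the codebook sizes, contents, and names are placeholders, not the paper's design.

```python
import numpy as np

def two_codebook_gains(target, c1, c2):
    """For fixed codewords c1, c2, jointly solve the two gains that minimize
    ||target - (g1*c1 + g2*c2)||^2 (ordinary least squares)."""
    A = np.stack([c1, c2], axis=1)            # N x 2
    g, *_ = np.linalg.lstsq(A, target, rcond=None)
    return g                                   # [g1, g2]

def search_multi_codebook(target, cb1, cb2):
    """Exhaustive search over both codebooks; returns the best index pair,
    the gains, and the squared error of g1*c1 + g2*c2 against the target."""
    best = (None, None, None, np.inf)
    for i, c1 in enumerate(cb1):
        for j, c2 in enumerate(cb2):
            g = two_codebook_gains(target, c1, c2)
            e = target - (g[0] * c1 + g[1] * c2)
            err = float(e @ e)
            if err < best[3]:
                best = (i, j, g, err)
    return best

# toy usage with random "codebooks" (sizes are illustrative only)
rng = np.random.default_rng(0)
cb_basic = rng.standard_normal((16, 40))   # basic codebook
cb_pitch = rng.standard_normal((16, 40))   # codebook strengthening spectrum/pitch
tgt = rng.standard_normal(40)
i, j, gains, err = search_multi_codebook(tgt, cb_basic, cb_pitch)
```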

Enhanced Spectral Hole Substitution for Improving Speech Quality in Low Bit-Rate Audio Coding

  • Lee, Chang-Heon;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.29 no.3E / pp.131-139 / 2010
  • This paper proposes a novel spectral hole substitution technique for low bit-rate audio coding. Spectral holes, which frequently occur in relatively weak energy bands due to zero-bit quantization, cause severe quality degradation, especially for harmonic signals such as speech vowels. The enhanced aacPlus (EAAC) audio codec artificially adjusts the minimum signal-to-mask ratio (SMR) to reduce the number of spectral holes, but it still produces noisy sound. The proposed method selectively predicts the spectral shapes of hole bands using either intra-band correlation, i.e., harmonically related coefficients nearby, or inter-band correlation, i.e., previous frames. For bands that have low prediction gain, only the energy term is quantized and the spectral shapes are replaced by pseudo-random values in the decoding stage. To minimize perceptual distortion caused by spectral mismatching, the criterion of the just noticeable level difference (JNLD) and the spectral similarity between original and predicted shapes are adopted for quantizing the energy term. Simulation results show that the proposed method, implemented into the EAAC baseline coder, significantly improves speech quality at low bit-rates while keeping equivalent quality for mixed and music content.
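Based only on the abstract, here is a rough sketch of the substitution decision for one hole band: choose the better of an intra-band (harmonic neighbor) or inter-band (previous frame) predictor, and fall back to energy-only quantization with a pseudo-random shape when the prediction gain is low. The threshold, the omitted JNLD/similarity criteria, and all names are assumptions.

```python
import numpy as np

def substitute_hole_band(hole_coeffs, neighbor_coeffs, prev_frame_coeffs,
                         gain_threshold=1.5):
    """Illustrative hole-band substitution: pick the better of an intra-band
    (harmonic neighbor) or inter-band (previous frame) predictor; if neither
    gives enough prediction gain, keep only the band energy and fill the
    shape with pseudo-random values, as the decoder would."""
    energy = np.sum(hole_coeffs ** 2) + 1e-12

    def pred_gain(pred):
        err = np.sum((hole_coeffs - pred) ** 2) + 1e-12
        return energy / err

    candidates = {"intra": neighbor_coeffs, "inter": prev_frame_coeffs}
    mode, pred = max(candidates.items(), key=lambda kv: pred_gain(kv[1]))

    if pred_gain(pred) >= gain_threshold:
        return mode, pred
    # low prediction gain: transmit energy only, pseudo-random shape instead
    shape = np.random.randn(len(hole_coeffs))
    shape *= np.sqrt(energy / np.sum(shape ** 2))
    return "noise-fill", shape
```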

Speech/Mixed Content Signal Classification Based on GMM Using MFCC (MFCC를 이용한 GMM 기반의 음성/혼합 신호 분류)

  • Kim, Ji-Eun;Lee, In-Sung
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.2 / pp.185-192 / 2013
  • In this paper, we propose a method to improve the performance of speech and mixed-content signal classification using MFCCs and a GMM probability model, as used in the MPEG USAC (Unified Speech and Audio Coding) standard. For effective pattern recognition, the Gaussian mixture model (GMM) is used, and the expectation-maximization (EM) algorithm is used to extract the optimal GMM parameters. The proposed classification algorithm is divided into two parts: the first extracts the optimal parameters for the GMM, and the second distinguishes between speech and mixed-content signals using MFCC feature parameters. The proposed classification algorithm shows better results than the conventionally implemented USAC scheme.
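A minimal sketch of the MFCC + GMM pipeline the abstract describes, using librosa for MFCC extraction and scikit-learn's GaussianMixture (which is fitted by EM); the number of coefficients and mixture components are illustrative, not the values used in the paper.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    """Frame-level MFCC feature matrix (frames x coefficients)."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(feature_list, n_components=16):
    """Fit one GMM per class with the EM algorithm (diagonal covariances)."""
    X = np.vstack(feature_list)
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(X)

def classify(path, gmm_speech, gmm_mixed):
    """Decide speech vs. mixed content by average frame log-likelihood."""
    X = mfcc_features(path)
    ll_speech = gmm_speech.score(X)   # mean log-likelihood per frame
    ll_mixed = gmm_mixed.score(X)
    return "speech" if ll_speech > ll_mixed else "mixed"

# usage (file lists are placeholders):
# gmm_speech = train_gmm([mfcc_features(f) for f in speech_files])
# gmm_mixed  = train_gmm([mfcc_features(f) for f in mixed_files])
# label = classify("test.wav", gmm_speech, gmm_mixed)
```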

Design of Wideband Speech Coder Compatible with CS-ACELP (CS-ACELP와 호환성을 갖는 광대역 음성 부호화기 설계)

  • 김동주;이인성
    • The Journal of the Acoustical Society of Korea / v.19 no.4 / pp.52-57 / 2000
  • In this paper, we designed a 16 kbps speech coder that is compatible with the CS-ACELP algorithm (G.729). The speech signal is sampled at a rate of 16 kHz, divided into two narrowband signals by a QMF filterbank, and decimated to a rate of 8 kHz. The lower-band signal is encoded by CS-ACELP and the upper-band signal is encoded by an Adaptive Transform Coding (ATC) algorithm. At the receiver, the two band signals are synthesized by the CS-ACELP and ATC decoders, respectively, and the reconstructed output is obtained by passing them through the QMF synthesis bank. The proposed wideband coder is evaluated against the ITU-T G.722 coder through a Mean Opinion Score (MOS) test.

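A minimal sketch of the analysis side the abstract describes: a QMF pair splits the 16 kHz input into lower and upper bands, each decimated to an 8 kHz rate before being handed to the CS-ACELP and ATC encoders. The prototype filter design below is illustrative, not the filterbank used in the paper or in G.722.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def qmf_analysis(x, num_taps=64):
    """Split a 16 kHz signal into lower/upper band signals at an 8 kHz rate.
    h_low is a prototype half-band filter; h_high is its QMF mirror."""
    h_low = firwin(num_taps, 0.5)                 # cutoff at half of Nyquist (4 kHz)
    n = np.arange(num_taps)
    h_high = h_low * (-1.0) ** n                  # mirror filter
    low = lfilter(h_low, 1.0, x)[::2]             # filter, then decimate by 2
    high = lfilter(h_high, 1.0, x)[::2]
    return low, high                              # -> CS-ACELP / ATC encoders

# toy usage: 20 ms of a 1 kHz tone sampled at 16 kHz
t = np.arange(320) / 16000.0
x = np.sin(2 * np.pi * 1000 * t)
low_band, high_band = qmf_analysis(x)
```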

The Invention of Reis Telephone and Its Problem of Speech Quality (라이스의 전화기 발명과 통화 음질의 문제)

  • Ku, Ja-Hyon
    • The Journal of the Acoustical Society of Korea / v.29 no.6 / pp.395-401 / 2010
  • Since Philipp Reis succeeded in sending human voices through electric wires well ahead of Elisha Gray and A. G. Bell, he deserves to be acknowledged as the inventor of the telephone. Nevertheless, he did not enjoy any honor for his great invention while he was alive. Because he was working in a scientific community, his work was presented not as a patentable invention but as a scientific discovery. In addition, he used intermittent electricity, in accordance with the experimental tradition of European acoustics, which left the speech quality of his telephone with a fatal shortcoming. Bell, on the contrary, who was a novice in electricity and acoustics, employed variable currents to transmit the sound signals, which guaranteed better speech quality than Reis's.

Preprocessing method for enhancing digital audio quality in speech communication system (음성통신망에서 디지털 오디오 신호 음질개선을 위한 전처리방법)

  • Song Geun-Bae;Ahn Chul-Yong;Kim Jae-Bum;Park Ho-Chong;Kim Austin
    • Journal of Broadcast Engineering / v.11 no.2 s.31 / pp.200-206 / 2006
  • This paper presents a preprocessing method that modifies the input audio signals of a speech coder so that enhanced signals are finally obtained at the decoder. For this purpose, we introduce a noise suppression (NS) scheme and adaptive gain control (AGC), where an audio input and its coding error are treated as a noisy signal and a noise, respectively. The coding error is suppressed from the input, and the suppressed input is then level-aligned to the original input by the following AGC operation. Consequently, this preprocessing redistributes the spectral energy of the music input across the spectral domain, so that the preprocessed music can be coded more effectively by the following coder. As a drawback, the procedure needs an additional encoding pass to calculate the coding error. However, it provides a generalized formulation applicable to many existing speech coders. Preference listening tests indicated that the proposed approach produces significant enhancements in perceived music quality.
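One plausible reading of the preprocessing, sketched per frame: spectral-subtraction style suppression that treats the first-pass coding error as noise, followed by gain control that restores the original frame energy. The suppression rule, parameters, and function name are assumptions, not the paper's exact NS/AGC formulation.

```python
import numpy as np

def preprocess_frame(x, coding_error, alpha=1.0, floor=0.05):
    """Illustrative per-frame sketch: suppress the coding-error magnitude from
    the input spectrum (NS), then rescale to the original frame energy (AGC)."""
    X = np.fft.rfft(x)
    E = np.fft.rfft(coding_error)
    mag = np.abs(X)
    err_mag = np.abs(E)
    # noise suppression: subtract a scaled error magnitude, keep a spectral floor
    clean_mag = np.maximum(mag - alpha * err_mag, floor * mag)
    y = np.fft.irfft(clean_mag * np.exp(1j * np.angle(X)), n=len(x))
    # adaptive gain control: level-align the result to the original input
    gain = np.sqrt(np.sum(x ** 2) / (np.sum(y ** 2) + 1e-12))
    return gain * y
```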

Word Recognition Using VQ and Fuzzy Theory (VQ와 Fuzzy 이론을 이용한 단어인식)

  • Kim, Ja-Ryong;Choi, Kap-Seok
    • The Journal of the Acoustical Society of Korea / v.10 no.4 / pp.38-47 / 1991
  • Frequency variation among speakers is one of the problems in speech recognition. This paper applies fuzzy theory to solve this variation problem of frequency features. Reference patterns are expressed as fuzzified patterns produced from the peak frequency and peak energy extracted from codebooks generated from training words uttered by several speakers, so that they include common features of the speech signals. Words are recognized by fuzzy inference using the certainty factor between the reference patterns and the fuzzified test patterns, which are produced from the peak frequency and peak energy extracted from the power spectrum of the input speech signal. In computing the certainty factor, to reduce memory and computation requirements, we propose a new equation that calculates an improved certainty factor using only the difference between two fuzzy values. Experiments with this fuzzy-inference word recognition method on Korean digits show that the method using the new equation can solve the variation problem of frequency features while reducing the memory and computation requirements.

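The improved certainty-factor equation itself is not reproduced in the abstract; the sketch below only illustrates the property the abstract highlights, namely a certainty factor computed solely from differences between fuzzy membership values, with recognition by the best-scoring reference pattern. The exact formula and all names are assumptions.

```python
import numpy as np

def certainty_factor(ref_fuzzy, test_fuzzy):
    """Illustrative certainty factor between a fuzzified reference pattern and
    a fuzzified test pattern, using only the difference of membership values
    (the paper's improved equation is not given in the abstract)."""
    ref = np.asarray(ref_fuzzy, dtype=float)
    test = np.asarray(test_fuzzy, dtype=float)
    return 1.0 - np.mean(np.abs(ref - test))   # 1.0 means identical memberships

def recognize(test_fuzzy, reference_patterns):
    """Pick the reference word whose fuzzified pattern has the highest
    certainty factor against the fuzzified test pattern."""
    scores = {word: certainty_factor(ref, test_fuzzy)
              for word, ref in reference_patterns.items()}
    return max(scores, key=scores.get), scores

# toy usage with hypothetical membership vectors for two Korean digits
refs = {"il": [0.9, 0.1, 0.4, 0.7], "i": [0.2, 0.8, 0.6, 0.3]}
word, scores = recognize([0.85, 0.15, 0.5, 0.6], refs)
```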

Low Rate Speech Coding Using the Harmonic Coding Combined with CELP Coding (하모닉 코딩과 CELP방법을 이용한 저 전송률 음성 부호화 방법)

  • 김종학;이인성
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.26-34 / 2000
  • In this paper, we propose a 4 kbps speech coder that combines harmonic vector excitation coding with time-separated transition coding. Harmonic vector excitation coding uses harmonic excitation coding in voiced frames and vector excitation coding with an analysis-by-synthesis structure in unvoiced frames. However, this two-mode coding is not effective for transition frames in which voiced and unvoiced signals are mixed, so a method beyond unvoiced/voiced mode coding is needed. We therefore designed a time-separated transition coding method for transition frames, in which a voiced/unvoiced decision algorithm separates the unvoiced and voiced durations within a frame, and harmonic-harmonic excitation coding or vector-harmonic excitation coding is selectively used depending on the previous frame's U/V decision. In the decoder, the voiced excitation signals are generated efficiently through the inverse FFT of the harmonic magnitudes, and the unvoiced excitation signals are produced by inverse vector quantization. The reconstructed speech signal is synthesized by the Overlap/Add method.

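A minimal sketch of the decoder-side operations the abstract names: voiced excitation generated by an inverse FFT of harmonic magnitudes placed at pitch multiples, and Overlap/Add synthesis of consecutive frames. Phase handling, windowing, and all parameters are illustrative assumptions; the unvoiced (inverse-VQ) branch is omitted.

```python
import numpy as np

def voiced_excitation(harm_mags, pitch_hz, frame_len, fs=8000):
    """Build a voiced excitation frame by placing harmonic magnitudes at
    multiples of the pitch frequency and taking an inverse FFT."""
    spec = np.zeros(frame_len // 2 + 1, dtype=complex)
    for k, mag in enumerate(harm_mags, start=1):
        bin_idx = int(round(k * pitch_hz * frame_len / fs))
        if bin_idx < len(spec):
            spec[bin_idx] = mag            # zero phase, illustrative only
    return np.fft.irfft(spec, n=frame_len)

def overlap_add(frames, hop):
    """Overlap/Add synthesis of a list of equal-length excitation frames."""
    frame_len = len(frames[0])
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    win = np.hanning(frame_len)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + frame_len] += win * f
    return out

# toy usage: two voiced frames at 120 Hz pitch, 20 ms frames, 10 ms hop at 8 kHz
frames = [voiced_excitation(np.ones(20), 120.0, 160) for _ in range(2)]
y = overlap_add(frames, hop=80)
```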

A STUDY ON THE IMPLEMENTATION OF ARTIFICIAL NEURAL NET MODELS WITH FEATURE SET INPUT FOR RECOGNITION OF KOREAN PLOSIVE CONSONANTS (한국어 파열음 인식을 위한 피쳐 셉 입력 인공 신경망 모델에 관한 연구)

  • Kim, Ki-Seok;Kim, In-Bum;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference / 1990.07a / pp.535-538 / 1990
  • The main problem in speech recognition is the enormous variability in acoustic signals due to complex but predictable contextual effects. In plosive consonants in particular, it is very difficult to find invariant cues because of various contextual effects, yet humans use these contextual effects as helpful information in plosive consonant recognition. In this paper we experimented with three artificial neural net models for the recognition of plosive consonants. Neural Net Model I used a "Multi-layer Perceptron", Model II used a variation of the "Self-organizing Feature Map Model", and Model III used an "Interactive and Competitive Model" to examine contextual effects. The recognition experiment was performed on 9 Korean plosive consonants, using VCV speech chains to study contextual effects. The speech chains consist of the Korean plosive consonants /g, d, b, K, T, P, k, t, p/ (/ㄱ, ㄷ, ㅂ, ㄲ, ㄸ, ㅃ, ㅋ, ㅌ, ㅍ/) and eight Korean monophthongs. The inputs to the neural net models were several temporal cues (duration of the silence, transition, and VOT), the extent of the VC formant transitions, the presence of voicing energy during closure, burst intensity, presence of aspiration, the amount of low-frequency energy present at voicing onset, and the extent of the CV formant transitions, all taken from the acoustic signals. Model I showed about 55-67%, Model II about 60%, and Model III about 67% recognition rates.

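Model I in the abstract is a multi-layer perceptron over a hand-crafted acoustic feature set. Below is a minimal analogue using scikit-learn, with a placeholder feature vector standing in for the temporal and spectral cues listed above; the feature dimension, hidden-layer size, and romanized class labels are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative feature vector per token: silence/transition/VOT durations,
# formant-transition extents, burst intensity, low-frequency energy, etc.
N_FEATURES = 10
PLOSIVES = ["g", "d", "b", "kk", "tt", "pp", "k", "t", "p"]  # 9 classes

def train_model(X_train, y_train):
    """Model I analogue: a small multi-layer perceptron over the feature set."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    return clf.fit(X_train, y_train)

# toy usage with random data standing in for measured cues from VCV chains
rng = np.random.default_rng(0)
X = rng.standard_normal((180, N_FEATURES))
y = rng.choice(PLOSIVES, size=180)
model = train_model(X, y)
predicted = model.predict(X[:5])
```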