• Title/Summary/Keyword: Speech signals


A Reliable Pitch Determination Algorithm (PDA) Based on Dyadic Wavelet Transform (DyWT)

  • Kim, Nam-Hoon;Kang, Yong-Sung;Ko, Han-Seok
    • Speech Sciences / v.7 no.4 / pp.3-10 / 2000
  • This paper presents a time-domain Pitch Determination Algorithm (PDA) for the reliable estimation of the Pitch Period (PP) in speech signals. Based on the Dyadic Wavelet Transform (DyWT), the proposed PDA detects Glottal Closure Instants (GCI) and uses them to determine the pitch period. We also examine the shortcomings of conventional DyWT-based PDAs and compare their performance with that of the proposed method. The effectiveness of the proposed method is tested on real speech signals containing transitions between voiced and unvoiced intervals, where the energy of the voiced signal is unsteady. The results show that the proposed method estimates GCI positions well in both the unsteady and the steady parts.

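The multiscale idea behind DyWT-based GCI detection can be made concrete with a short sketch: glottal closures are sharp transients whose wavelet responses persist across dyadic scales, so a product of detail signals localizes them and their spacing gives the pitch period. The sketch below is a hedged approximation, not the authors' algorithm: difference-of-Gaussians details stand in for true dyadic wavelet coefficients, and the scales and thresholds are illustrative assumptions.

```python
# Minimal sketch of DyWT-style GCI detection and pitch estimation.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def gci_pitch(x, fs, scales=(1, 2, 4, 8)):
    """Detect GCI-like transients as maxima that persist across dyadic
    scales, then estimate the pitch period from their spacing."""
    # Difference-of-Gaussians band-pass details approximate the detail
    # signals of a dyadic wavelet transform at scales 2^j.
    details = [gaussian_filter1d(x, s) - gaussian_filter1d(x, 2 * s)
               for s in scales]
    # The multiscale product emphasises singularities present at all scales.
    prod = np.prod(np.abs(np.stack(details)), axis=0)
    # Keep peaks at least 2 ms apart and above a relative threshold.
    peaks, _ = find_peaks(prod, distance=int(0.002 * fs),
                          height=0.1 * prod.max())
    if len(peaks) < 2:
        return None, peaks
    period = np.median(np.diff(peaks)) / fs          # seconds per period
    return 1.0 / period, peaks                       # F0 (Hz), GCI indices

if __name__ == "__main__":
    fs = 16000
    x = np.zeros(3200)
    x[::fs // 120] = 1.0                             # 120 Hz impulse train
    f0, gcis = gci_pitch(x, fs)
    print(f"estimated F0 = {f0:.1f} Hz from {len(gcis)} GCI candidates")
```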

Speech Enhancement Based on Psychoacoustic Model

  • Lee, Jingeol;Kim, Soowon
    • The Journal of the Acoustical Society of Korea / v.19 no.3E / pp.12-18 / 2000
  • Psychoacoustic-model-based methods have recently been introduced to enhance speech signals corrupted by ambient noise. In particular, the perceptual filter is derived analytically so that the frequency content of the input noisy signal matches that of the estimated clean signal in the auditory domain. However, the analytical derivation relies on a deconvolution involving the spreading function of the psychoacoustic model, which is an ill-conditioned problem. To cope with this, we propose a novel psychoacoustic-model-based speech enhancement filter whose principle is the same as the perceptual filter's, but which is derived by a constrained optimization that provides solutions to the ill-conditioned problem. We demonstrate with artificially generated signals that the proposed filter operates according to this principle, and show that it outperforms the perceptual filter provided that the clean speech signal is separable from the noise.

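The ill-conditioned deconvolution the abstract describes can be illustrated with a small sketch: model the spreading function as a Toeplitz convolution matrix, and instead of inverting it directly, recover the spectrum by constrained (nonnegative, regularized) least squares. The spreading shape, band count, and regularization weight below are assumptions for illustration, not the paper's model.

```python
# Sketch: recovering a Bark-domain spectrum from its spread (excitation)
# version by constrained least squares instead of direct deconvolution.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

n_bands = 24                                    # assumed number of Bark bands
z = np.arange(n_bands)
# Simple triangular spreading function (illustrative, not the paper's).
spread = np.maximum(0.0, 1.0 - 0.4 * z)
S = toeplitz(spread)                            # convolution as a matrix

rng = np.random.default_rng(0)
true_spec = rng.uniform(0.0, 1.0, n_bands)      # "clean" Bark spectrum
excitation = S @ true_spec                      # spread auditory representation

# Direct inversion amplifies errors when S is ill-conditioned...
print("cond(S) =", np.linalg.cond(S))

# ...so solve  min ||S x - e||^2 + lam ||x||^2   subject to  x >= 0.
lam = 1e-2
A = np.vstack([S, np.sqrt(lam) * np.eye(n_bands)])
b = np.concatenate([excitation, np.zeros(n_bands)])
x_hat, _ = nnls(A, b)
print("max abs recovery error:", np.max(np.abs(x_hat - true_spec)))
```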

A Study on the Performance of Companding Algorithms for Digital Hearing Aid Users (디지털 보청기 사용자를 위한 압신 알고리즘의 성능 연구)

  • Hwang, Y.S.;Han, J.H.;Ji, Y.S.;Hong, S.H.;Lee, S.M.;Kim, D.W.;Kim, In-Young;Kim, Sun-I.
    • Journal of Biomedical Engineering Research / v.32 no.3 / pp.218-229 / 2011
  • Companding algorithms have been used to enhance speech recognition in noise for cochlear implant users, but their efficiency for digital hearing aid users has not yet been validated. The purpose of this study is to evaluate the performance of companding for digital hearing aid users under various hearing loss conditions. Using HeLPS, a hearing loss simulator, two sensorineural hearing loss conditions were simulated: mild, gently sloping hearing loss (HL1) and moderate to steeply sloping hearing loss (HL2). In addition, non-linear compression to compensate for the hearing loss was simulated using National Acoustic Laboratories non-linear version 1 (NAL-NL1) in HeLPS. Four companding strategies were used, varying the Q values (q1, q2) of the pre-filter (F filter) and post-filter (G filter). First, five IEEE sentences presented with speech-shaped noise at different SNRs (0, 5, 10, 15 dB) were processed by the companding. Second, the processed signals were applied to HeLPS; for comparison, signals not processed by companding were also applied. Speech quality of the processed signals was evaluated with the log-likelihood ratio (LLR) and cepstral distance (CEP), and fourteen normal-hearing listeners performed a speech reception threshold (SRT) test to evaluate speech intelligibility. The signals processed with both companding and NAL-NL1 performed better than those processed with NAL-NL1 alone under the sensorineural hearing loss conditions, and higher ratios of the Q values yielded better LLR and CEP scores. In the SRT test, the signals processed with companding (SRT = -13.33 dB SPL) showed significantly better speech perception in noise than those processed with NAL-NL1 alone (SRT = -11.56 dB SPL).
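
A companding chain of the kind the abstract describes (broad pre-filter F, compression, sharper post-filter G, expansion, per frequency channel) can be sketched as follows. The filter shapes, Q values, and compression exponent are illustrative assumptions in the spirit of Turicchia and Sarpeshkar's companding architecture, not this study's exact implementation, and the instantaneous envelopes stand in for the smoothed envelope detectors a real system would use.

```python
# Toy one-channel companding stage: F filter -> compress -> G filter -> expand.
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass(center, q, fs, order=2):
    bw = center / q                               # bandwidth set by Q
    return butter(order, [center - bw / 2, center + bw / 2],
                  btype="band", fs=fs, output="sos")

def compand_channel(x, center, fs, q1=2.0, q2=4.0, p=0.3, eps=1e-6):
    f_out = sosfilt(bandpass(center, q1, fs), x)      # broad F filter
    env = np.abs(f_out) + eps
    compressed = f_out * env ** (p - 1.0)             # ~ sign(x)|x|^p
    g_out = sosfilt(bandpass(center, q2, fs), compressed)  # sharp G filter
    env_g = np.abs(g_out) + eps
    return g_out * env_g ** (1.0 / p - 1.0)           # expansion ~ |x|^(1/p)

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) \
    + 0.1 * np.random.default_rng(1).normal(size=t.size)
centers = [250, 500, 1000, 2000, 4000]                # illustrative channels
y = sum(compand_channel(x, c, fs) for c in centers)
```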

On a Detection for the Fundamental Frequency of Speech Signals (음성신호의 기본주파수 검출)

  • 배명진
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.42-47 / 1994
  • A pitch detector is an essential component in a variety of speech processing systems. Besides providing valuable insight into the nature of the excitation source for speech production, the pitch contour of an utterance is useful for recognizing speakers and for aids for the handicapped, and is required in almost all speech analysis-synthesis systems. Because of this importance, a wide variety of pitch detection algorithms have been proposed in the speech processing literature. In this paper we discuss the various types of pitch detection algorithms proposed to date, and then provide performance measurements for seven of them.

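One of the classic algorithm families such surveys compare is the time-domain autocorrelation method; a minimal sketch follows (the lag range and voicing threshold are assumed values, not taken from the paper).

```python
# Basic autocorrelation pitch detector for a single analysis frame.
import numpy as np

def autocorr_pitch(frame, fs, fmin=50.0, fmax=400.0):
    frame = frame - frame.mean()
    # One-sided autocorrelation of the frame.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    # Crude voicing decision: the peak must be a fair fraction of ac[0].
    return fs / lag if ac[lag] > 0.3 * ac[0] else 0.0

fs = 8000
t = np.arange(0, 0.04, 1 / fs)                 # one 40 ms frame
print(autocorr_pitch(np.sin(2 * np.pi * 150 * t), fs))   # ~150 Hz
```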

Study of Emotion in Speech (감정변화에 따른 음성정보 분석에 관한 연구)

  • 장인창;박미경;김태수;박면웅
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1123-1126 / 2004
  • Recognizing emotion in speech requires a large spoken-language corpus, covering not only different emotional states but also individual languages. In this paper, we focus on how speech signals change with emotion. We compared speech features such as formants and pitch across four emotions (normal, happiness, sadness, anger). In Korean, pitch data on monophthongs changed with each emotion, so we suggest suitable analysis techniques that use these features to recognize emotions in Korean.

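The pitch and formant features the paper compares can be extracted with standard tools. The sketch below, using librosa's pYIN pitch tracker and LPC-root formant estimates with assumed orders and frequency ranges, shows one plausible pipeline; it is not the authors' code.

```python
# Sketch: per-utterance pitch (median F0) and first-formant estimates.
import numpy as np
import librosa

def pitch_and_formants(y, fs, lpc_order=12):
    # Frame-level F0 via pYIN; summarise voiced frames with the median.
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=fs)
    f0_med = np.nanmedian(f0[voiced]) if np.any(voiced) else 0.0
    # Formant estimates from the roots of an LPC polynomial.
    a = librosa.lpc(y, order=lpc_order)
    roots = [r for r in np.roots(a) if r.imag > 0]   # upper half-plane only
    freqs = sorted(np.angle(r) * fs / (2 * np.pi) for r in roots)
    return f0_med, freqs[:3]                          # F0 and F1-F3 estimates

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
y = np.sin(2 * np.pi * 200 * t).astype(np.float32)    # toy "voiced" signal
print(pitch_and_formants(y, fs))
```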

A study on the Visible Speech Processing System for the Hearing Impaired (청각 장애자를 위한 시각 음성 처리 시스템에 관한 연구)

  • 김원기;김남현
    • Journal of Biomedical Engineering Research / v.11 no.1 / pp.75-82 / 1990
  • The purpose of this study is to support speech training for the hearing impaired with a visible speech processing system. In brief, the system converts features of the speech signal into graphics on a monitor, so that hearing-impaired speakers can adjust their speech features toward normal ones. The features used in this system are formants and pitch, extracted with digital signal processing techniques such as linear prediction and the AMDF (Average Magnitude Difference Function). Features that are easy to visualize are being studied so that the abnormal speech of the hearing impaired can be trained effectively.

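The AMDF mentioned in the abstract dips, rather than peaks, at multiples of the pitch period, so pitch is read off the deepest valley; a minimal sketch follows (frame length and lag range are assumptions).

```python
# Sketch of AMDF-based pitch extraction for one frame.
import numpy as np

def amdf_pitch(frame, fs, fmin=60.0, fmax=400.0):
    n = len(frame)
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    # Average magnitude difference for each candidate lag.
    amdf = np.array([np.mean(np.abs(frame[:n - k] - frame[k:]))
                     for k in lags])
    return fs / lags[np.argmin(amdf)]       # deepest valley = pitch period

fs = 8000
t = np.arange(0, 0.04, 1 / fs)
print(amdf_pitch(np.sin(2 * np.pi * 120 * t), fs))   # ~120 Hz
```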

Microphone Array Based Speech Enhancement Using Independent Vector Analysis (마이크로폰 배열에서 독립벡터분석 기법을 이용한 잡음음성의 음질 개선)

  • Wang, Xingyang;Quan, Xingri;Bae, Keunsung
    • Phonetics and Speech Sciences / v.4 no.4 / pp.87-92 / 2012
  • Speech enhancement aims to improve speech quality by removing background noise from noisy speech. Independent vector analysis is a frequency-domain independent component analysis method known to be free from the frequency-bin permutation problem in blind source separation from multi-channel inputs. This paper proposes a new method of microphone-array-based speech enhancement that combines independent vector analysis and beamforming: independent vector analysis separates speech and noise components from multi-channel noisy speech, and delay-sum beamforming determines the enhanced speech among the separated signals. To verify the effectiveness of the proposed method, experiments were carried out on computer-simulated multi-channel noisy speech at various signal-to-noise ratios, with PESQ and output signal-to-noise ratio as objective speech quality measures. Experimental results show that the proposed method is superior to conventional microphone-array noise removal approaches such as GSC beamforming.
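
The delay-and-sum stage used here to pick out the speech among the separated signals can be sketched for a linear array as below; the array geometry, source angle, and noise level are assumptions for illustration, and the IVA separation step is omitted.

```python
# Sketch: delay-and-sum beamforming on a 4-microphone linear array.
import numpy as np

def delay_and_sum(mics, fs, mic_positions, theta, c=343.0):
    """Align and average mic signals for a far-field source at angle theta
    (radians) on a linear array; mic_positions are metres along the array."""
    delays = mic_positions * np.cos(theta) / c        # per-mic delay (s)
    n = mics.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / fs)
    out = np.zeros(n)
    for sig, d in zip(mics, delays):
        # Fractional-delay compensation via a linear phase shift.
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * d)
        out += np.fft.irfft(spec, n)
    return out / len(mics)

fs = 16000
pos = np.array([0.0, 0.05, 0.10, 0.15])               # 5 cm mic spacing
t = np.arange(0, 0.1, 1 / fs)
src = np.sin(2 * np.pi * 500 * t)
rng = np.random.default_rng(2)
# Broadside arrival (theta = 90 deg): no inter-mic delay, independent noise.
mics = np.stack([src + 0.5 * rng.normal(size=t.size) for _ in pos])
enhanced = delay_and_sum(mics, fs, pos, np.pi / 2)    # noise averages down
```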

A Scalable Audio Coder for High-quality Speech and Audio Services

  • Lee, Gil-Ho;Lee, Young-Han;Kim, Hong-Kook;Kim, Do-Young;Lee, Mi-Suk
    • MALSORI / no.61 / pp.75-86 / 2007
  • In this paper, we propose a scalable audio coder whose bandwidth varies from narrowband speech up to full audio bandwidth, with a bit-rate from 8 to 320 kbit/s, in order to maintain quality of service (QoS) under varying network load. The proposed coder first splits the input audio into a narrowband component up to around 4 kHz and a component above it. The narrowband signals are compressed by a speech coding method compatible with an existing standard speech coder such as G.729, and the signals above the narrowband are compressed on the basis of a psychoacoustic model. Objective quality tests using the signal-to-noise ratio (SNR) and the perceptual evaluation of audio quality (PEAQ) show that the proposed scalable audio coder provides quality comparable to the MPEG-1 Layer III (MP3) audio coder.

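The coder's first step, splitting the input into a narrowband branch for the speech coder and a highband branch for the psychoacoustic coder, can be sketched as below; the filter order, crossover handling, and the 8 kHz decimation for a G.729-style coder are illustrative choices, not the paper's filter design.

```python
# Sketch: split audio at ~4 kHz into narrowband and highband branches.
import numpy as np
from scipy.signal import butter, sosfilt, resample_poly

def split_bands(x, fs, split_hz=4000, order=8):
    lp = butter(order, split_hz, btype="low", fs=fs, output="sos")
    hp = butter(order, split_hz, btype="high", fs=fs, output="sos")
    low, high = sosfilt(lp, x), sosfilt(hp, x)
    # Narrowband branch decimated to 8 kHz for a G.729-style speech coder;
    # the highband branch would feed the psychoacoustic-model coder.
    low_8k = resample_poly(low, 8000, fs)
    return low_8k, high

fs = 32000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 9000 * t)
narrowband, highband = split_bands(x, fs)
```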

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.9-16 / 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify the emotional content of speech signals and the type of emotion that a given utterance carries. In this study, we examine an emotion recognition system using an Arabic database in the Saudi dialect, drawn from the YouTube channel Telfaz11. Four emotions were examined: anger, happiness, sadness, and neutral. In our experiments, we extracted features such as Mel-Frequency Cepstral Coefficients (MFCC) and the Zero-Crossing Rate (ZCR) from the audio signals, and classified emotions using machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) as well as deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC features combined with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this classification system for recognizing spoken Arabic emotions.
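
The classical branch of the paper's pipeline, clip-level MFCC and ZCR statistics followed by an SVM, can be sketched as below; the file paths, labels, sampling rate, and model settings are hypothetical placeholders, not the study's data or configuration.

```python
# Sketch: MFCC + ZCR features per clip, classified with an SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(path, sr=16000, n_mfcc=13):
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    zcr = librosa.feature.zero_crossing_rate(y)
    # Summarise frame-level features with their means (common for SVM/KNN).
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1)])

# Hypothetical (path, label) pairs standing in for the Telfaz11-derived
# corpus; a real experiment uses many labelled clips per emotion.
corpus = [("clips/ang_001.wav", "anger"),
          ("clips/hap_001.wav", "happiness")]
X = np.stack([clip_features(p) for p, _ in corpus])
y = [label for _, label in corpus]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```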

Spectrum Based Excitation Extraction for HMM Based Speech Synthesis System (스펙트럼 기반 여기신호 추출을 통한 HMM기반 음성합성기의 음질 개선 방법)

  • Lee, Bong-Jin;Kim, Seong-Woo;Baek, Soon-Ho;Kim, Jong-Jin;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea / v.29 no.1 / pp.82-90 / 2010
  • This paper proposes an efficient method to enhance the quality of synthesized speech in an HMM-based speech synthesis system. The proposed method jointly models spectral parameters and excitation signals with a Gaussian mixture model during training, and estimates appropriate excitation signals from the spectral parameters during the synthesis stage. Both WB-PESQ and MUSHRA results show that the proposed method provides better speech quality than a conventional HMM-based speech synthesis system.
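
The core mapping the abstract describes, estimating excitation features from spectral parameters with a GMM, can be sketched as a standard joint-GMM conditional expectation; the toy data, feature dimensions, and component count below are assumptions, not the paper's setup.

```python
# Sketch: fit a GMM over joint [spectral, excitation] features, then
# estimate excitation from spectra via the GMM conditional mean.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def estimate_excitation(gmm, spec, d):
    """E[exc | spec] for a joint GMM over [spec (d dims), exc]."""
    K = gmm.n_components
    resp = np.zeros((len(spec), K))
    cond = np.zeros((len(spec), gmm.means_.shape[1] - d, K))
    for k in range(K):
        mu_s, mu_e = gmm.means_[k, :d], gmm.means_[k, d:]
        S = gmm.covariances_[k]
        Sss, Ses = S[:d, :d], S[d:, :d]
        # Component responsibilities given the spectral part only.
        resp[:, k] = gmm.weights_[k] * multivariate_normal(mu_s, Sss).pdf(spec)
        # Per-component conditional mean of the excitation part.
        gain = Ses @ np.linalg.inv(Sss)
        cond[:, :, k] = mu_e + (spec - mu_s) @ gain.T
    resp /= resp.sum(axis=1, keepdims=True)
    return np.einsum("nk,nek->ne", resp, cond)

# Toy data: excitation features linearly related to spectral features.
rng = np.random.default_rng(3)
spec = rng.normal(size=(500, 2))
exc = spec @ np.array([[0.5, -0.2], [0.1, 0.8]]) \
      + 0.05 * rng.normal(size=(500, 2))
gmm = GaussianMixture(n_components=4, covariance_type="full").fit(
    np.hstack([spec, exc]))
print(estimate_excitation(gmm, spec[:5], d=2))
```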