• Title/Summary/Keyword: Speech signals


A Feedback and Noise Cancellation Algorithm of Hearing Aids Using Dual Microphones (이중 마이크를 사용한 보청기의 궤환 및 잡음제거 알고리즘)

  • Lee, Haeng-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.7C / pp.413-420 / 2011
  • This paper proposes a new adaptive algorithm to cancel the acoustic feedback and noise signals in binaural hearing aids. The convergence performance of the proposed algorithm is improved by updating the coefficients of the feedback canceller after the speech signal is removed from the residual signal using the dual microphones. The feedback canceller first cancels the feedback signal from the microphone signal, and the noise canceller then reduces the noise by beamforming. To ensure that the binaural hearing aids converge stably, only the left-side hearing aid is converged first, followed by the right-side hearing aid. Simulations on a speech signal show that the algorithm improves the signal-to-feedback ratio (SFR) by 14.43 dB on average for the feedback canceller and the signal-to-noise ratio (SNR) by 10.19 dB on average for the noise canceller.
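
The coefficient-update scheme described above can be illustrated with a generic normalized-LMS (NLMS) adaptive canceller. This is a minimal sketch under simplifying assumptions, not the paper's dual-microphone algorithm; the helper name `nlms_cancel` and its parameters are invented for the illustration.

```python
# Generic NLMS adaptive canceller (illustrative sketch, not the paper's method).
# The adaptive filter learns the path from a reference signal to the primary
# microphone so the unwanted component can be subtracted.
import math

def nlms_cancel(primary, reference, taps=8, mu=0.5, eps=1e-8):
    w = [0.0] * taps                                   # adaptive coefficients
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))       # estimated interference
        e = primary[n] - y                             # error = cleaned output
        norm = eps + sum(xk * xk for xk in x)          # normalized step size
        w = [wk + (mu / norm) * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out

# Usage: the interference reaches the primary microphone through a 2-tap path.
noise = [math.sin(0.3 * n) for n in range(2000)]
primary = [0.6 * noise[n] + 0.3 * (noise[n - 1] if n else 0.0)
           for n in range(2000)]
cleaned = nlms_cancel(primary, noise)
```

After convergence the residual energy is far below the input energy, which is the sense in which the abstract reports SFR/SNR improvement.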

Performance Improvement of Double Talk Detection before Convergence of the Echo Canceller by Using Linear Predictive Coding Filter Gain of the Primary Input Signal (주입력신호의 LPC 필터 이득을 이용한 반향제거기의 수렴전 동시통화검출 성능 개선)

  • Yoo, Jae-Ha
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.6 / pp.628-633 / 2014
  • This paper proposes a method to improve a conventional double-talk detection method that can operate before convergence of the echo canceller. The proposed method estimates the coefficients of a linear predictive coding (LPC) filter from the primary input signal, and the time-varying threshold for double-talk detection is determined from the LPC filter gain of that signal. The method reduces not only the false detection rate (mistaking single talk for double talk) but also the double-talk detection delay. Computer simulations were performed using long-term real speech signals, and the results show that the proposed method improves on the conventional method by lowering the false detection rate and shortening the detection delay.
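
The LPC filter gain that drives the time-varying threshold can be computed from frame autocorrelations with the Levinson-Durbin recursion. The sketch below is a generic illustration (the helper name `lpc_gain` is invented), not the paper's detector.

```python
# LPC prediction-gain sketch via autocorrelation + Levinson-Durbin recursion.
import math

def lpc_gain(frame, order=10):
    n = len(frame)
    r = [sum(frame[i] * frame[i + k] for i in range(n - k))
         for k in range(order + 1)]                 # autocorrelation lags
    if r[0] == 0.0:
        return 1.0
    a = [0.0] * (order + 1)                         # prediction coefficients
    e = r[0]                                        # residual energy
    for m in range(1, order + 1):
        k = (r[m] - sum(a[j] * r[m - j] for j in range(1, m))) / e
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        e *= (1.0 - k * k)
    return r[0] / e                                 # prediction gain (>= 1)

# A predictable, speech-like frame: a tone plus a little broadband dither.
voiced = [math.sin(0.25 * i) + 0.02 * math.sin(1.7 * i * i) for i in range(240)]
gain = lpc_gain(voiced)
```

A large gain indicates a predictable (speech-dominated) primary input, while a noise-like frame yields a gain near 1, which is what makes the gain usable for a detection threshold.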

Electroglottographic Measurements of Glottal Function in Voice according to Gender and Age

  • Ko, Do-Heung
    • Phonetics and Speech Sciences / v.3 no.1 / pp.97-102 / 2011
  • Electroglottography (EGG) is a common method for providing non-invasive measurements of glottal activity. EGG has been used in vocal pathology as a clinical or research tool to measure vocal fold contact. This paper presents the results of pitch, jitter, and closed quotient (CQ) measurements in electroglottographic signals of young (mean = 22.7 years) and elderly (mean = 74.3 years) male and female subjects. The sustained corner vowels /i/, /a/, and /u/ were measured at around 70 dB SPL, since phonation intensity is the most notable of the EGG variables and shows a positive correlation with the closed phase. The aim of this paper was to measure EGG data according to age and gender. In CQ, there was a significant difference between young and elderly female subjects, while there was no significant difference between young and elderly male subjects. The mean value for young males was higher than that for elderly males, while the mean value for young females was lower than that for elderly females. Thus, in terms of mean values, CQ increased with age for females, while it decreased with age for males. Although laryngeal degeneration due to increased age seems to occur to a lesser extent in females, the significant increase of CQ in elderly female voices could not be explained in terms of age-related physiological changes. In the standard deviations of pitch and jitter, the mean values for young and elderly males were higher than those for young and elderly females; that is, male subjects showed higher mean values of these voice variables than female subjects. This result could be considered a sign of vocal instability in males. These results may provide insight into the control and regulation of normal phonation and into the detection and characterization of pathology.
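
The closed quotient itself is straightforward to compute from one EGG cycle. The sketch below uses a 35%-of-peak-to-peak criterion level, which is a common convention but an assumption here, since the abstract does not state the measurement settings.

```python
# Closed-quotient (CQ) sketch: fraction of one glottal cycle during which the
# EGG signal exceeds a criterion level (35% of peak-to-peak, an assumption).
def closed_quotient(cycle, criterion=0.35):
    lo, hi = min(cycle), max(cycle)
    level = lo + criterion * (hi - lo)
    closed = sum(1 for v in cycle if v > level)     # samples above the level
    return closed / len(cycle)

# Usage: an idealized cycle with high vocal-fold contact 40% of the time.
cycle = [1.0] * 40 + [0.0] * 60
cq = closed_quotient(cycle)
```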


Adaptive Noise Reduction using Standard Deviation of Wavelet Coefficients in Speech Signal (웨이브렛 계수의 표준편차를 이용한 음성신호의 적응 잡음 제거)

  • 황향자;정광일;이상태;김종교
    • Science of Emotion and Sensibility / v.7 no.2 / pp.141-148 / 2004
  • This paper proposes a new time-adapted threshold that uses the standard deviations of wavelet coefficients computed frame by frame after the wavelet transform. The time-adapted threshold is set as the sum of the standard deviation of the cA3 coefficients and a weighted standard deviation of the cD1 coefficients; the cA3 coefficients represent voiced sound with low frequency, and the cD1 coefficients represent unvoiced sound with high frequency. Simulation results demonstrate that the proposed algorithm improves SNR and MSE performance over the wavelet transform and the wavelet packet transform. Moreover, the signals reconstructed by the proposed algorithm preserve the plosive, fricative, and affricate sounds of the original signal, whereas the wavelet transform and wavelet packet transform attenuate those sounds severely.
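
The frame-adaptive thresholding idea can be sketched with a single-level Haar transform; the paper uses a 3-level decomposition (cA3 plus weighted cD1 statistics), so the helper below is a simplified stand-in with invented names.

```python
# Std-based wavelet soft-thresholding on one Haar level (simplified sketch).
import math

def haar_dwt(x):
    s = math.sqrt(2.0)
    cA = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    cD = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return cA, cD

def haar_idwt(cA, cD):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(cA, cD):
        x.extend([(a + d) / s, (a - d) / s])        # perfect reconstruction
    return x

def std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((u - m) ** 2 for u in v) / len(v))

def denoise_frame(frame, weight=1.0):
    cA, cD = haar_dwt(frame)
    thr = weight * std(cD)                          # frame-adaptive threshold
    cD = [math.copysign(max(abs(d) - thr, 0.0), d) for d in cD]  # soft threshold
    return haar_idwt(cA, cD)

# Usage: a low-frequency tone plus broadband chirp "noise".
noisy = [math.sin(0.05 * n) + 0.2 * math.sin(2.5 * n * n) for n in range(256)]
clean = denoise_frame(noisy)
```

The detail-coefficient standard deviation tracks the noise level in each frame, so the threshold adapts over time without a global noise estimate.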


Effective Feature Vector for Isolated-Word Recognizer using Vocal Cord Signal (성대신호 기반의 명령어인식기를 위한 특징벡터 연구)

  • Jung, Young-Giu;Han, Mun-Sung;Lee, Sang-Jo
    • Journal of KIISE:Software and Applications / v.34 no.3 / pp.226-234 / 2007
  • In this paper, we develop a speech recognition system using a throat microphone. The use of this kind of microphone minimizes the impact of environmental noise. However, because of the absence of high frequencies and the partial loss of formant frequencies, previous systems developed with these devices have shown a lower recognition rate than systems that use standard microphone signals. This problem has led researchers to use throat microphone signals as supplementary data sources supporting standard microphone signals. In this paper, we present a high-performance ASR system developed using only a throat microphone, by taking advantage of Korean Phonological Feature Theory and a detailed analysis of the throat signal. Analyzing the spectrum and the FFT of the throat microphone signal, we find that the conventional MFCC feature vector, which uses a critical-band filter, does not characterize throat microphone signals well. We also describe the conditions of the feature extraction algorithm that make it best suited to throat microphone signal analysis: (1) a sensitive band-pass filter and (2) a feature vector suitable for voice/non-voice classification. We show experimentally that the ZCPA algorithm, designed to meet these conditions, improves the recognizer's performance by approximately 16%, and that an additional noise-canceling algorithm such as RASTA yields a further 2% improvement.
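
A much-simplified version of the ZCPA (Zero-Crossings with Peak Amplitudes) idea: each interval between successive upward zero crossings votes for a frequency bin, weighted by the log of the peak amplitude inside the interval. The band-pass filter bank of the real front end is omitted, and the function name and bin settings below are assumptions.

```python
# Simplified ZCPA histogram sketch (filter bank omitted; names are invented).
import math

def zcpa_histogram(signal, fs, n_bins=32, f_max=4000.0):
    ups = [n for n in range(1, len(signal))
           if signal[n - 1] < 0.0 <= signal[n]]     # upward zero crossings
    hist = [0.0] * n_bins
    for a, b in zip(ups, ups[1:]):
        freq = fs / (b - a)                         # interval length -> frequency
        peak = max(abs(v) for v in signal[a:b])     # peak inside the interval
        if freq < f_max:
            hist[int(freq / f_max * n_bins)] += math.log(1.0 + peak)
    return hist

# Usage: a 500 Hz tone at 8 kHz sampling should dominate one frequency bin.
fs = 8000
tone = [math.sin(2 * math.pi * 500.0 * n / fs) for n in range(800)]
hist = zcpa_histogram(tone, fs)
```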

Time-Synchronization Method for Dubbing Signal Using SOLA (SOLA를 이용한 더빙 신호의 시간축 동기화)

  • 이기승;지철근;차일환;윤대희
    • Journal of Broadcast Engineering / v.1 no.2 / pp.85-95 / 1996
  • The purpose of this paper is to propose a dubbed-signal time-synchronization technique based on the SOLA (Synchronized Overlap and Add) method, which has been widely used to modify the time scale of speech signals. In broadcast audio recording environments, the high level of background noise requires a dubbing process. Since the time difference between the original and the dubbed signal is about 200 milliseconds, processing is required to synchronize the dubbed signal to the corresponding image. The proposed method finds the starting point of the dubbed signal using the short-time energy of the two signals. Thereafter, LPC cepstrum analysis and DTW (Dynamic Time Warping) are applied to synchronize the phoneme positions of the two signals. After determining the matched point by the minimum mean square error between the original and dubbed LPC cepstra, the SOLA method is applied to the dubbed signal to maintain the consistency of the corresponding phase. The effectiveness of the proposed method is verified by comparing the waveforms and spectrograms of the original and the time-synchronized dubbed signal.
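
The SOLA step can be sketched as follows: frames are relocated onto a stretched time grid, and each frame is shifted within a small search range to maximize cross-correlation with the already-written output before overlap-adding. This is a generic SOLA under assumed frame/hop parameters, without the DTW phoneme alignment the paper performs first.

```python
# Generic SOLA time-scale modification sketch (parameters are assumptions).
import math

def sola(x, alpha, frame=256, sa=128, search=32):
    ss = int(sa * alpha)                 # synthesis hop for stretch factor alpha
    out = list(x[:frame])
    pos, write = sa, ss
    while pos + frame <= len(x):
        seg = x[pos:pos + frame]
        # Find the offset that best aligns seg with the existing output tail.
        best_k, best_c = 0, float("-inf")
        for k in range(-search, search + 1):
            start = write + k
            if start < 0:
                continue
            ov = min(len(out) - start, frame)
            if ov <= 0:
                break
            c = sum(out[start + i] * seg[i] for i in range(ov))
            if c > best_c:
                best_c, best_k = c, k
        start = write + best_k
        ov = max(0, min(len(out) - start, frame))
        for i in range(ov):              # linear crossfade over the overlap
            w = i / ov
            out[start + i] = (1 - w) * out[start + i] + w * seg[i]
        out.extend(seg[ov:])
        pos += sa
        write = start + ss
    return out

# Usage: stretch a tone by 50%; phase stays continuous across frame joins.
x = [math.sin(0.1 * n) for n in range(4000)]
y = sola(x, 1.5)
```

The cross-correlation search is what keeps the overlapping regions in phase, avoiding the pitch artifacts of naive overlap-add.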


A Preprocessing Approach to Improving the Quality of the Music Produced by the EVRC (EVRC 코덱으로 재생하는 음악의 품질을 개선하기 위한 전처리 기법)

  • 남영한;하태균;전윤호;김재수;박섭형
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.5C / pp.476-485 / 2003
  • This paper proposes a preprocessing approach to improving the quality of music produced by the EVRC (Enhanced Variable Rate Codec), one of the CDMA (Code Division Multiple Access) voice codecs. Since the EVRC is optimized only for speech signals, it can deteriorate the quality of music passed through it. One of the problems with EVRC-coded music is time-clipping, which usually occurs when subsequent frames are encoded at Rate 1/8. Since the EVRC determines the bit rate for an input frame based on the long-term prediction gain, we increase the long-term prediction gain so that most of the frames are encoded at Rate 1 or Rate 1/2. Experimental results show that the approach works well on music signals and that the number of time-clipped frames is considerably reduced.
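
The quantity the preprocessing manipulates is the long-term (pitch) prediction gain, which can be sketched as the best normalized squared correlation over a lag range. This is an illustrative computation with invented names, not the codec's internal routine.

```python
# Long-term prediction gain sketch: 1.0 means perfectly periodic at some lag.
import math

def ltp_gain(frame, min_lag=20, max_lag=120):
    e0 = sum(v * v for v in frame[max_lag:])        # energy of predicted region
    best = 0.0
    for lag in range(min_lag, max_lag + 1):
        num = sum(frame[n] * frame[n - lag] for n in range(max_lag, len(frame)))
        den = sum(frame[n - lag] ** 2 for n in range(max_lag, len(frame)))
        if den > 0.0 and e0 > 0.0:
            # Normalized squared correlation of the optimal one-tap predictor.
            best = max(best, num * num / (den * e0))
    return best

# Usage: a perfectly periodic frame (period 40) has gain near 1, so the rate
# decision would not fall back to Rate 1/8.
periodic = [math.sin(2 * math.pi * n / 40.0) for n in range(320)]
gain = ltp_gain(periodic)
```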

Optimum Pattern Synthesis for a Microphone Array (마이크로폰 어레이를 위한 최적 패턴 형성)

  • Chang, Byoung-Kun;Kwon, Tae-Neung;Byun, Youn-Shik
    • The Journal of the Acoustical Society of Korea / v.16 no.1 / pp.47-53 / 1997
  • This paper concerns an efficient approach to forming the beam pattern of a microphone array that must handle broadband signals such as speech in a teleconference. A numerical method is proposed to find the updated locations of sidelobes for equalizing the sidelobes via perturbation of array parameters such as the array weights or the microphone spacing. The microphone array is thus optimized in a Dolph-Chebyshev sense, so that directional or background noises incident in the array's visible range are eliminated efficiently. It is shown that perturbation of the microphone spacing yields an optimum pattern more appropriate for broadband signals than perturbation of the array weights. A novel method is also proposed to find a beam pattern that is robust with respect to sidelobes in a scanning situation. Computer simulation results are presented.
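
The quantity being shaped is the array factor. The sketch below evaluates it for a uniform linear array with uniform weights and assumed geometry; the paper instead perturbs weights/spacings toward a Dolph-Chebyshev pattern, which is not reproduced here.

```python
# Array-factor sketch for a uniform linear microphone array.
import cmath
import math

def array_factor(weights, positions, freq, angle_deg, c=343.0):
    # positions[i]: location of microphone i along the array axis (meters).
    theta = math.radians(angle_deg)
    k = 2 * math.pi * freq / c                      # acoustic wavenumber
    af = sum(w * cmath.exp(1j * k * d * math.sin(theta))
             for w, d in zip(weights, positions))
    return abs(af)

# Usage: 8 microphones at 4 cm spacing; the broadside response is the mainlobe,
# and off-axis responses are sidelobes to be suppressed.
n, d = 8, 0.04
weights = [1.0 / n] * n
positions = [i * d for i in range(n)]
main = array_factor(weights, positions, 2000.0, 0.0)
side = array_factor(weights, positions, 2000.0, 40.0)
```

Dolph-Chebyshev weighting would make all sidelobes equal in height for a given mainlobe width, which is the equalization goal the abstract describes.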


The Design of Temporal Bone Type Implantable Microphone for Reduction of the Vibrational Noise due to Masticatory Movement (저작운동으로 인한 진동 잡음 신호의 경감을 위한 측두골 이식형 마이크로폰의 설계)

  • Woo, Seong-Tak;Jung, Eui-Sung;Lim, Hyung-Gyu;Lee, Yun-Jung;Seong, Ki-Woong;Lee, Jyung-Hyun;Cho, Jin-Ho
    • Journal of Sensor Science and Technology / v.21 no.2 / pp.144-150 / 2012
  • A microphone for a fully implantable hearing device is generally implanted under the skin over the temporal bone, so the implanted microphone's characteristics can be affected by the noise accompanying masticatory movement. In this paper, an implantable microphone with a two-channel structure is designed to reduce the noise generated by masticatory movement. An experimental model for generating this noise was developed considering the characteristics of the human temporal bone and skin. Using the model, a speech signal from a speaker and artificial noise from a vibrator were supplied simultaneously, and the electrical signals were measured at the proposed microphone. The collected signals were processed using an adaptive filter with the least-mean-square (LMS) algorithm. To confirm the performance of the proposed method, the correlation coefficient and the signal-to-noise ratio (SNR) before and after the signal processing were calculated and compared.
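
The two evaluation metrics named at the end, the correlation coefficient and the SNR, can be computed directly. The helper names and the toy signals below are assumptions for illustration, not the paper's measurement setup.

```python
# SNR (dB) and correlation coefficient between a reference and a processed signal.
import math

def snr_db(clean, noisy):
    sig = sum(c * c for c in clean)
    err = sum((n - c) ** 2 for n, c in zip(noisy, clean))
    return 10.0 * math.log10(sig / err)

def corrcoef(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

# Usage: a tone with 1% noise power sits near 20 dB SNR and correlation ~1.
speech = [math.sin(0.2 * n) for n in range(1000)]
noisy = [s + 0.1 * math.sin(3.7 * n * n) for n, s in enumerate(speech)]
```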

Context Recognition Using Environmental Sound for Client Monitoring System (피보호자 모니터링 시스템을 위한 환경음 기반 상황 인식)

  • Ji, Seung-Eun;Jo, Jun-Yeong;Lee, Chung-Keun;Oh, Siwon;Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.2 / pp.343-350 / 2015
  • This paper presents a context recognition method using environmental sound signals, applied to a mobile client-monitoring system. Seven acoustic contexts are defined, and the corresponding environmental sound signals were collected for the experiments. To evaluate context recognition performance, MFCC and LPCC are employed for feature extraction, and statistical pattern recognition is performed using GMM and HMM acoustic models. The experimental results show that LPCC and HMM are more effective at improving context recognition accuracy than MFCC and GMM, respectively; the recognition system using LPCC and HMM achieves 96.03% recognition accuracy. These results demonstrate that LPCC is effective at representing environmental sounds, which contain a wider variety of frequency components than human speech, and that HMM models time-varying environmental sounds more effectively than GMM.
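
The acoustic-model scoring step can be illustrated with single diagonal-covariance Gaussians and fixed toy parameters; real systems train mixture weights and HMM transitions on MFCC/LPCC features, none of which is reproduced here, and the context names below are invented.

```python
# Minimal diagonal-covariance Gaussian scoring sketch for context classification.
import math

def log_gauss(x, mean, var):
    # log N(x; mean, diag(var)) for one feature vector x.
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(frames, models):
    # Pick the context whose model gives the highest total log-likelihood.
    scores = {name: sum(log_gauss(f, m["mean"], m["var"]) for f in frames)
              for name, m in models.items()}
    return max(scores, key=scores.get)

models = {                              # toy single-Gaussian "GMMs"
    "doorbell": {"mean": [2.0, 0.0], "var": [1.0, 1.0]},
    "speech":   {"mean": [-2.0, 0.0], "var": [1.0, 1.0]},
}
frames = [[2.1, 0.2], [1.8, -0.1], [2.3, 0.0]]
label = classify(frames, models)
```

An HMM would additionally model how these frame distributions evolve over time, which is why it outperforms the GMM on time-varying environmental sounds.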