• Title/Summary/Keyword: VOCAL SIGNAL


Vocal Enhancement for Improving the Performance of Vocal Pitch Detection (보컬 피치 검출의 성능 향상을 위한 보컬 강화 기술)

  • Lee, Se-Won;Song, Chai-Jong;Lee, Seok-Pil;Park, Ho-Chong
    • The Journal of the Acoustical Society of Korea / v.30 no.6 / pp.353-359 / 2011
  • This paper proposes a vocal enhancement technique for improving the performance of vocal pitch detection in polyphonic music signals. The proposed technique predicts an accompaniment signal from the input signal and generates an accompaniment replica signal according to the vocal power. It then removes the accompaniment replica from the input signal, resulting in a vocal-enhanced signal. The performance of the proposed method was measured by applying the same vocal pitch extraction method to the original and the vocal-enhanced signals, and the vocal pitch detection accuracy increased by 7.1 percentage points on average.

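The following is a minimal sketch of the general accompaniment-subtraction idea described in the abstract above, not the authors' algorithm: the accompaniment spectrum is crudely estimated as a per-frequency median over time and a scaled replica is subtracted from the mixture before pitch detection. The function name, the STFT parameters and the subtraction weight alpha are assumptions made for the sketch.

```python
# Minimal sketch (not the paper's algorithm): estimate the accompaniment as
# the per-frequency median magnitude over time, subtract a scaled replica
# from the mixture spectrum, and resynthesize the "vocal-enhanced" signal.
import numpy as np
from scipy.signal import stft, istft

def enhance_vocal(x, fs, alpha=1.0, nperseg=2048):
    """x: mono polyphonic mixture; alpha: subtraction weight (assumed)."""
    _, _, X = stft(x, fs, nperseg=nperseg)
    mag, phase = np.abs(X), np.angle(X)
    # Crude accompaniment replica: the median is dominated by the more
    # stationary accompaniment, less by the time-varying vocal.
    accomp = np.median(mag, axis=1, keepdims=True)
    vocal_mag = np.maximum(mag - alpha * accomp, 0.0)   # spectral subtraction
    _, y = istft(vocal_mag * np.exp(1j * phase), fs, nperseg=nperseg)
    return y[:len(x)]                                   # vocal-enhanced signal
```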
RECOGNITION SYSTEM USING VOCAL-CORD SIGNAL (성대 신호를 이용한 인식 시스템)

  • Cho, Kwan-Hyun;Han, Mun-Sung;Park, Jun-Seok;Jeong, Young-Gyu
    • Proceedings of the KIEE Conference / 2005.10b / pp.216-218 / 2005
  • This paper presents a new approach to a noise-robust recognizer for a WPS interface. In noisy environments, the performance of speech recognition degrades rapidly. To solve this problem, we propose a recognition system that uses the vocal-cord signal instead of speech. The vocal-cord signal has low quality, but it is more robust to environmental noise than the speech signal. As a result, we obtained 75.21 % accuracy using MFCC with CMS and 83.72 % accuracy using ZCPA with RASTA.

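As a side illustration of one front-end named in the abstract above, the sketch below applies cepstral mean subtraction (CMS) to MFCC features; it is not the paper's vocal-cord-signal pipeline, and the ZCPA/RASTA front-end is not reproduced. The helper name and the choice of 13 coefficients are assumptions.

```python
# Illustration of MFCC with cepstral mean subtraction (CMS) only; the
# vocal-cord-signal capture and the ZCPA/RASTA front-end are not shown.
import librosa

def mfcc_cms(y, sr, n_mfcc=13):
    c = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, n_frames)
    # CMS: subtracting the per-coefficient mean over time removes
    # stationary convolutive (channel) effects.
    return c - c.mean(axis=1, keepdims=True)
```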

Acoustic Measures from Normal and Vocal Polyp Patients (정상인과 후두폴립환자에서의 음성학적 측정)

  • 최홍식;장미숙;이정준
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.5 no.1 / pp.38-43 / 1994
  • Although normal vocal cords show regular vibration, pathologic vocal cords show irregularity between peaks. Jitter is the fluctuation in the time interval between peaks, and shimmer is the cycle-to-cycle variation in the amplitude of the peaks. We objectively investigated the vocal vibration of normal Korean subjects. The fundamental frequency, jitter, shimmer and SNR (signal-to-noise ratio) of normal subjects were compared with those of vocal polyp patients using the CSpeech program, to assess the possibility of distinguishing pathologic vocal vibration from normal. The results were as follows: comparing the fundamental frequency of vocal polyp patients with that of normal subjects, a large change was noted only in female cases; however, the jitter and shimmer of vocal polyp patients were significantly greater than normal in both male and female cases, and the SNR of vocal polyp patients was lower than normal. In conclusion, fundamental frequency, jitter, shimmer and SNR may be meaningful parameters for distinguishing pathologic vibration from normal.

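The jitter and shimmer measures defined in the abstract above can be written down directly; the sketch below computes the common relative (percent) forms from the times and amplitudes of successive glottal-cycle peaks. How the peaks are extracted (e.g., by the CSpeech program) is outside the sketch, and the function names are placeholders.

```python
# Relative jitter and shimmer, as defined in the abstract: cycle-to-cycle
# variation of the pitch period and of the peak amplitude, respectively.
# Inputs are assumed to be peak times (seconds) and peak amplitudes of
# successive glottal cycles from a sustained vowel.
import numpy as np

def jitter_percent(peak_times):
    periods = np.diff(np.asarray(peak_times, dtype=float))
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(peak_amps):
    amps = np.asarray(peak_amps, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)
```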

Quality Improvement of Karaoke Mode in SAOC using Cross Prediction based Vocal Estimation Method (교차 예측 기반의 보컬 추정 방법을 이용한 SAOC Karaoke 모드에서의 음질 향상 기법에 대한 연구)

  • Lee, Tung Chin;Park, Young-Cheol;Youn, Dae Hee
    • The Journal of the Acoustical Society of Korea / v.32 no.3 / pp.227-236 / 2013
  • In this paper, we present a vocal suppression algorithm that can enhance the quality of a music signal coded in the Karaoke mode of Spatial Audio Object Coding (SAOC). The residual vocal component in the coded music signal is estimated by a cross prediction method in which the music signal coded in Karaoke mode is used as the primary input and the vocal signal coded in Solo mode is used as a reference. However, because the two signals are extracted from the same downmix signal and are highly correlated, the music signal can be severely damaged by the cross prediction. To prevent this, a psycho-acoustic disturbance rule is proposed, in which the level of disturbance applied to the reference input of the cross prediction filter is adapted according to the auditory masking property. Objective and subjective tests were performed, and the results confirm that the proposed algorithm offers improved quality.

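As a rough illustration of cross prediction, the sketch below uses a plain NLMS adaptive filter: the Karaoke-mode signal is the primary input, the Solo-mode vocal estimate is the reference, and the filter error is the vocal-suppressed output. The SAOC-specific decoding and the paper's psycho-acoustic disturbance rule are not modeled here; the filter order and step size are assumptions.

```python
# Generic cross-prediction sketch: an NLMS filter predicts the residual
# vocal in the primary (Karaoke-mode) signal from the reference (Solo-mode
# vocal estimate) and subtracts it. Filter order and step size are assumed.
import numpy as np

def cross_predict_suppress(primary, reference, order=64, mu=0.1, eps=1e-8):
    primary = np.asarray(primary, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]      # reference tap vector
        e = primary[n] - w @ x                # vocal-suppressed output sample
        w += mu * e * x / (x @ x + eps)       # NLMS coefficient update
        out[n] = e
    return out
```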
A Study on the Pitch Detection of Speech Harmonics by the Peak-Fitting (음성 하모닉스 스펙트럼의 피크-피팅을 이용한 피치검출에 관한 연구)

  • Kim, Jong-Kuk;Jo, Wang-Rae;Bae, Myung-Jin
    • Speech Sciences / v.10 no.2 / pp.85-95 / 2003
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis and analysis. If the pitch is detected exactly, it can be used in analysis to obtain the vocal tract parameters properly, in synthesis to change or maintain the naturalness and intelligibility of speech quality easily, and in recognition to remove speaker-dependent characteristics for speaker independence. In this paper, we propose a new pitch detection algorithm. First, positive center clipping is performed in the time domain using the slope of the speech signal, in order to emphasize the pitch period of the glottal component with the vocal tract characteristics removed. A rough formant envelope is then computed by peak-fitting the spectrum of the original speech signal in the frequency domain, and a smoothed formant envelope is obtained from it by linear interpolation. The flattened harmonic spectrum is obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope, and an inverse fast Fourier transform (IFFT) of this flattened spectrum yields a residual signal from which the vocal tract component has been removed. The performance was compared with the LPC, cepstrum and ACF methods. With this algorithm, the accuracy of pitch detection is improved and the gross error rate is reduced in voiced speech regions and in transition regions between phonemes.

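A compressed sketch of the flattening steps described above (peak-fitting envelope, subtraction from the log spectrum, inverse FFT of the flattened spectrum, peak picking in the pitch-lag range) is given below. It is a per-frame approximation of the described pipeline, not the authors' implementation; the windowing, the pitch range, and the omission of the center-clipping stage are simplifications.

```python
# Per-frame sketch of spectrum flattening for pitch detection: interpolate a
# formant envelope through the log-spectral peaks, subtract it, and take an
# inverse FFT of the flattened spectrum; the strongest peak in the allowed
# lag range gives the pitch. Frame length and pitch range are assumptions.
import numpy as np
from scipy.signal import find_peaks

def pitch_by_flattening(frame, fs, fmin=60.0, fmax=400.0):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    log_spec = np.log(spec + 1e-12)
    peaks, _ = find_peaks(log_spec)
    if len(peaks) < 2:
        return 0.0                                  # unvoiced / too short
    envelope = np.interp(np.arange(len(log_spec)), peaks, log_spec[peaks])
    flattened = log_spec - envelope                 # formant envelope removed
    residual = np.fft.irfft(flattened)              # quefrency-domain residual
    lo, hi = int(fs / fmax), min(int(fs / fmin), len(residual) - 1)
    return fs / (lo + np.argmax(residual[lo:hi]))
```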

Development of Speech Training Aids Using Vocal Tract Profile (조음도를 이용한 발음훈련기기의 개발)

  • 박상희;김동준;이재혁;윤태성
    • The Transactions of the Korean Institute of Electrical Engineers / v.41 no.2 / pp.209-216 / 1992
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Existing speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as a histogram or in pseudo-color. In this study, a speech training aid is developed that displays the subject's articulation as a cross section of the vocal organs, together with other speech parameters, in a single system, so that the subject knows where to correct. To this end, the speech production mechanism is first assumed to follow an AR model in order to estimate the articulatory motions of the vocal organs from the speech signal. Next, a vocal tract profile model based on LP analysis is constructed. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.

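The vocal tract profile display described above rests on the textbook lossless-tube interpretation of LP analysis. The sketch below derives reflection coefficients from a speech frame with the Levinson-Durbin recursion and converts them to relative section areas; the area-ratio convention and the LPC order are assumptions, and the graphical profile rendering is not shown.

```python
# Textbook lossless-tube sketch behind an LPC vocal tract profile: the
# Levinson-Durbin recursion yields reflection coefficients k_i, and relative
# section areas follow from A_{i+1} = A_i * (1 - k_i) / (1 + k_i) (one common
# convention; orientation and sign conventions differ between texts).
import numpy as np

def reflection_coeffs(frame, order=12):
    """frame: windowed (and typically pre-emphasized) speech samples."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    e = r[0] + 1e-12
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k[i - 1] = -acc / e
        a[1:i + 1] += k[i - 1] * a[i - 1::-1][:i]   # update predictor coefficients
        e *= 1.0 - k[i - 1] ** 2
    return k

def area_profile(frame, order=12, first_area=1.0):
    areas = [first_area]
    for ki in reflection_coeffs(frame, order):
        areas.append(areas[-1] * (1.0 - ki) / (1.0 + ki))
    return np.array(areas)    # relative areas of successive tube sections
```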

Detection of Glottal Closure Instant for Voiced Speech Using Wavelet Transform (웨이브렛 변환을 이용한 음성신호의 성문폐쇄시점 검출)

  • Bae, Keun-Sung
    • Speech Sciences / v.7 no.3 / pp.153-165 / 2000
  • During the phonation of voiced sounds, instants exist where the glottis is opened or closed due to the periodic vibration of the vocal cords. The instant of closure is called the glottal closure instant (GCI) or epoch. Correct detection of the GCI is one of the important problems in speech processing for pitch detection, pitch-synchronous analysis, and so on. Recently, it has been shown that the local maxima of the wavelet-transformed speech signal correspond to the GCIs of the speech signal. In this paper, we investigate the accuracy of GCIs estimated from the wavelet-transformed speech signal. For this purpose, we compare them with the negative peaks of the differentiated EGG signal, which represent the actual GCIs of the speech signal.

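A much-simplified sketch of the idea above is given below: a single-scale derivative-of-Gaussian kernel stands in for the dyadic wavelet transform, and GCI candidates are taken as strong local maxima of the filtered signal. The scale, threshold and peak-spacing constraint are assumptions, and no comparison against the differentiated EGG signal is included.

```python
# Simplified single-scale stand-in for the wavelet-based GCI detector: the
# speech is filtered with a derivative-of-Gaussian kernel and strong local
# maxima of the output are taken as GCI candidates. Scale, threshold and
# minimum peak spacing are assumptions; no EGG comparison is included.
import numpy as np
from scipy.signal import fftconvolve, find_peaks

def detect_gci(x, fs, scale_ms=2.0, f0_max=400.0):
    sigma = scale_ms * 1e-3 * fs
    t = np.arange(-4.0 * sigma, 4.0 * sigma + 1.0)
    kernel = -np.gradient(np.exp(-0.5 * (t / sigma) ** 2))  # DoG "wavelet"
    w = fftconvolve(np.asarray(x, dtype=float), kernel, mode="same")
    peaks, _ = find_peaks(w,
                          distance=max(1, int(fs / f0_max)),
                          height=0.3 * np.max(w))
    return peaks / fs      # candidate GCI times in seconds
```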

Flattening Techniques for Pitch Detection (피치 검출을 위한 스펙트럼 평탄화 기법)

  • 김종국;조왕래;배명진
    • Proceedings of the IEEK Conference / 2002.06d / pp.381-384 / 2002
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis and analysis, but pitch detection from the speech signal is difficult because of the effects of formants and transition amplitudes. Therefore, in this paper, we propose a pitch detection method using spectrum flattening techniques, which eliminate the formant and transition amplitude effects. In the time domain, positive center clipping is performed in order to emphasize the pitch period of the glottal component with the vocal tract characteristics removed, and a rough formant envelope is computed by peak-fitting the spectrum of the original speech signal in the frequency domain. The flattened harmonic spectrum is obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope, yielding a residual signal from which the vocal tract component has been removed. The performance was compared with the LPC, cepstrum and ACF methods. With this algorithm, the accuracy of pitch detection is improved and the gross error rate is reduced in voiced speech regions and in transition regions between phonemes.

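The positive center-clipping step named in both this and the earlier pitch-detection abstract can be sketched as below, using one common formulation in which samples under a threshold proportional to the frame maximum are zeroed; the threshold ratio is an assumption rather than the authors' choice.

```python
# One common formulation of positive center clipping: samples below a
# threshold proportional to the frame maximum are zeroed, which suppresses
# formant detail and leaves the glottal pulses that carry the pitch period.
# The threshold ratio is an assumption.
import numpy as np

def positive_center_clip(frame, ratio=0.3):
    frame = np.asarray(frame, dtype=float)
    threshold = ratio * np.max(np.abs(frame))
    return np.where(frame > threshold, frame - threshold, 0.0)
```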

A Study on Speech Recognition using Vocal Tract Area Function (성도 면적 함수를 이용한 음성 인식에 관한 연구)

  • 송제혁;김동준
    • Journal of Biomedical Engineering Research / v.16 no.3 / pp.345-352 / 1995
  • The LPC cepstrum coefficients, which are acoustic features of the speech signal, have been widely used as feature parameters in various speech recognition systems and have shown good performance. The vocal tract area function is an articulatory feature related to the physiological mechanism of speech production. This paper proposes the vocal tract area function as an alternative feature parameter for speech recognition. Linear predictive analysis using the Burg algorithm and vector quantization are performed, and recognition experiments for 5 Korean vowels and 10 digits are carried out using the conventional LPC cepstrum coefficients and the vocal tract area function. Recognition using the area function showed slightly better results than that using the conventional LPC cepstrum coefficients.

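The VQ-based recognition scheme described above can be sketched as follows: one codebook per class is trained with k-means, and a test utterance is assigned to the class whose codebook gives the smallest average quantization distortion. Feature extraction (Burg LP analysis and the area-function computation) is not shown, and the codebook size and function names are assumptions.

```python
# VQ recognition sketch: one k-means codebook per class, classification by
# minimum average quantization distortion. The feature vectors (e.g. vocal
# tract area functions or LPC cepstra) are assumed to be computed elsewhere;
# codebook size and function names are placeholders.
import numpy as np
from scipy.cluster.vq import kmeans, vq

def train_codebooks(features_per_class, codebook_size=16):
    """features_per_class: dict label -> (N, D) array of training vectors."""
    return {label: kmeans(np.asarray(feats, dtype=float), codebook_size)[0]
            for label, feats in features_per_class.items()}

def classify(utterance_features, codebooks):
    """utterance_features: (T, D) vectors from one test utterance."""
    feats = np.asarray(utterance_features, dtype=float)
    def avg_distortion(cb):
        _, dists = vq(feats, cb)
        return float(dists.mean())
    return min(codebooks, key=lambda label: avg_distortion(codebooks[label]))
```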

Speech training aids for deafs (청각 장애자용 발음 훈련 기기의 개발)

  • 김동준;윤태성;박상희
    • Institute of Control, Robotics and Systems (제어로봇시스템학회): Conference Proceedings / 1991.10a / pp.746-751 / 1991
  • Deaf people train articulation by observing a tutor's mouth, by tactually sensing the motions of the vocal organs, or by using speech training aids. Existing speech training aids for the deaf can measure only a single speech parameter, or display only frequency spectra as a histogram or in pseudo-color. This study aims to develop a speech training aid that displays the subject's articulation as a cross section of the vocal organs, together with other speech parameters, in a single system, so that the subject knows where to correct. To this end, the speech production mechanism is first assumed to follow an AR model in order to estimate the articulatory motions of the vocal tract from the speech signal. Next, a vocal tract profile model based on LPC analysis is constructed. Using this model, articulatory motions for Korean vowels are estimated and displayed as vocal tract profile graphics.
