• Title/Summary/Keyword: formant synthesis

Improvement of Synthetic Speech Quality using a New Spectral Smoothing Technique (새로운 스펙트럼 완만화에 의한 합성 음질 개선)

  • 장효종;최형일
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.11
    • /
    • pp.1037-1043
    • /
    • 2003
  • This paper describes a speech synthesis technique using the diphone as the unit phoneme. Speech synthesis is basically accomplished by concatenating unit phonemes, and its major problem is discontinuity at the connection between unit phonemes. To solve this problem, this paper proposes a new spectral smoothing technique that reflects not only formant trajectories but also the distribution characteristics of the spectrum and human acoustic characteristics. That is, the proposed technique decides the quantity and extent of smoothing by considering human acoustic characteristics at the connection between unit phonemes, and then performs spectral smoothing using weights calculated along the time axis at the border of the two diphones. The proposed technique reduces the discontinuity and minimizes the distortion caused by spectral smoothing. For performance evaluation, we tested five hundred diphones extracted from twenty sentences, using ETRI Voice DB samples and individually self-recorded samples.
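
A minimal sketch of the general boundary-smoothing idea described in the abstract above, assuming magnitude spectra are already computed per frame; the weighting scheme, the smooth_junction helper, and the reach parameter are illustrative stand-ins, not the paper's actual rules:

```python
# Weighted spectral smoothing at a diphone junction: frames near the boundary
# are pulled toward the junction-average spectrum, with weights that decay
# along the time axis so frames far from the boundary are left untouched.
import numpy as np

def smooth_junction(frames_left, frames_right, reach=4):
    """frames_*: arrays of shape (n_frames, n_bins) magnitude spectra.
    reach: how many frames on each side of the junction are smoothed."""
    target = 0.5 * (frames_left[-1] + frames_right[0])   # spectrum at the join
    left, right = frames_left.copy(), frames_right.copy()
    for k in range(min(reach, len(left), len(right))):
        w = 1.0 - k / reach                              # weight decays with distance
        left[-1 - k] = (1 - w) * left[-1 - k] + w * target
        right[k] = (1 - w) * right[k] + w * target
    return left, right

# toy usage: two random "diphones" of 10 frames x 129 bins
rng = np.random.default_rng(0)
a, b = rng.random((10, 129)), rng.random((10, 129))
a_s, b_s = smooth_junction(a, b)
print(np.abs(a_s[-1] - b_s[0]).max())   # junction discontinuity is now zero
```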

Preliminary Study on Synthesis of Emotional Speech (정서음성 합성을 위한 예비연구)

  • Han Youngho;Yi Sopae;Lee Jung-Chul;Kim Hyung Soon
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.181-184
    • /
    • 2003
  • This paper explores the perceptual relevance of acoustic correlates of emotional speech using a formant synthesizer. The focus is on the role of mean pitch, pitch range, speech rate, and phonation type in synthesizing emotional speech. The results of this research back up the traditional impressionistic observations; however, they suggest that some phonation types need further refinement to be synthesized convincingly.

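The paper above varies mean pitch, pitch range, speech rate, and phonation type in a formant synthesizer. A minimal sketch of the first three controls applied to an F0 contour, with purely illustrative parameter values (phonation type is not modeled here):

```python
import numpy as np

def apply_prosody(f0, mean_shift=1.0, range_scale=1.0, rate=1.0):
    """f0: F0 contour in Hz for a voiced stretch. Returns a modified contour."""
    mean_f0 = f0.mean()
    out = (f0 - mean_f0) * range_scale + mean_f0 * mean_shift    # scale range, shift mean
    n_new = max(2, int(round(len(out) / rate)))                  # rate > 1 compresses time
    return np.interp(np.linspace(0, 1, n_new), np.linspace(0, 1, len(out)), out)

f0 = 120.0 + 20.0 * np.sin(np.linspace(0, 6, 200))               # toy F0 contour
excited = apply_prosody(f0, mean_shift=1.3, range_scale=1.6, rate=1.2)
print(len(excited), round(float(excited.mean()), 1), "Hz mean")
```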

A Study on Voice Color Control Rules for Speech Synthesis System (음성합성시스템을 위한 음색제어규칙 연구)

  • Kim, Jin-Young;Eom, Ki-Wan
    • Speech Sciences
    • /
    • v.2
    • /
    • pp.25-44
    • /
    • 1997
  • When listening to the various speech synthesis systems developed and used in our country, we find that although their quality has improved, they lack naturalness. Moreover, since the voice color of these systems is limited to a single recorded speech DB, another speech DB must be recorded to create a different voice color. 'Voice color' is an abstract concept that characterizes voice personality, so speech synthesis systems need a voice color control function to create various voices. The aim of this study is to examine several factors of voice color control rules for a text-to-speech system that produces natural and varied voice types for synthetic speech. In order to find such rules from natural speech, glottal source parameters and frequency characteristics of the vocal tract were studied for several voice colors. In this paper voice colors were catalogued as: deep, sonorous, thick, soft, harsh, high tone, shrill, and weak. The LF-model was used for the voice source, and the formant frequencies, bandwidths, and amplitudes were used for the frequency characteristics of the vocal tract. These acoustic parameters were tested through multiple regression analysis to obtain the general relation between the parameters and the voice colors.

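A minimal sketch of the multiple-regression step mentioned above, relating acoustic parameters to a voice-color rating with ordinary least squares; the data and predictor columns are random stand-ins, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# columns: e.g. a glottal-source measure, spectral tilt, F1, B1, A1 (illustrative only)
X = rng.normal(size=(40, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.2]) + 0.1 * rng.normal(size=40)

X1 = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)     # ordinary least squares fit
y_hat = X1 @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 3), "R^2:", round(float(r2), 3))
```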

A Study on the Technique of Spectrum Flattening for Improved Pitch Detection (개선된 피치검출을 위한 스펙트럼 평탄화 기법에 관한 연구)

  • 강은영;배명진;민소연
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.310-314
    • /
    • 2002
  • Exact pitch (fundamental frequency) extraction is important in speech signal processing such as speech recognition, speech analysis, and synthesis. However, exact pitch extraction from the speech signal is very difficult due to the effect of formants and transitional amplitude. In this paper, the pitch is therefore detected after eliminating the formant components by flattening the spectrum in the frequency domain, where the effect of transitions and phoneme changes is small. We propose a new method of flattening the log spectrum and compare its performance with the LPC and cepstrum methods. The results show that the proposed method is better than the conventional methods.
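
A minimal sketch of the general idea above, assuming a generic flattening step that subtracts a smoothed envelope from the log spectrum (the paper's specific flattening method may differ):

```python
import numpy as np

fs, n, f0_true = 16000, 2048, 150.0
t = np.arange(n) / fs
# toy voiced frame: harmonics with 1/k roll-off plus a formant-like bump near 600 Hz
x = sum((1.0 / k + np.exp(-((k * f0_true - 600.0) / 400.0) ** 2))
        * np.sin(2 * np.pi * k * f0_true * t) for k in range(1, 40))

log_mag = np.log(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)

# flatten: subtract a smoothed envelope (the formant structure) from the log spectrum
envelope = np.convolve(log_mag, np.ones(31) / 31.0, mode="same")
flat = log_mag - envelope

# pitch from the flattened spectrum: strongest quefrency peak in the 60-400 Hz range
ceps = np.fft.irfft(flat)
lo, hi = int(fs / 400), int(fs / 60)
period = lo + int(np.argmax(ceps[lo:hi]))
print("estimated F0:", round(fs / period, 1), "Hz")   # should be near 150 Hz
```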

A Study on the Synthesis of Korean Speech by Formant VOCODER (포르만트 VOCODER에 의한 한국어 음성합성에 관한 연구)

  • 허강인;이대영
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.14 no.6
    • /
    • pp.699-712
    • /
    • 1989
  • This paper describes a method of Korean speech synthesis using a formant VOCODER. The speech synthesis parameters are as follows: 1) formants F1, F2, and F3 obtained by the spectrum moment method, and F4, F5 from the length of the vocal tract; 2) pitch frequencies obtained by the optimum comb method using the AMDF; 3) short-time average energy and short-time mean amplitude; 4) the bandwidth decision method reported by Fant; 5) voiced/unvoiced discrimination using zero crossings; 6) the excitation wave reported by Rosenberg; 7) Gaussian white noise. Synthesis results are in fairly good agreement with the original speech.

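A minimal source-filter sketch in the spirit of items 5)-7) above: a Rosenberg-style glottal pulse train passed through cascaded second-order formant resonators. All formant, bandwidth, and pulse-shape values are illustrative rather than taken from the paper:

```python
import numpy as np
from scipy.signal import lfilter

fs = 10000

def resonator(freq, bw):
    """Second-order IIR formant resonator; gain normalized to 1 at DC."""
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r * r]
    b = [1 - 2 * r * np.cos(theta) + r * r]
    return b, a

def rosenberg_pulse(n_open, n_close):
    """One cycle of a Rosenberg-type glottal pulse (rising then falling phase)."""
    t1 = np.arange(n_open) / n_open
    t2 = np.arange(n_close) / n_close
    return np.concatenate([0.5 * (1 - np.cos(np.pi * t1)), np.cos(0.5 * np.pi * t2)])

f0, dur = 120.0, 0.5
period = int(fs / f0)
pulse = rosenberg_pulse(int(0.4 * period), int(0.2 * period))
source = np.zeros(int(fs * dur))
for start in range(0, len(source) - period, period):
    source[start:start + len(pulse)] += pulse       # voiced excitation train

# cascade formant resonators for an /a/-like vowel (illustrative values)
speech = source
for f, bw in [(700, 80), (1200, 90), (2600, 120)]:
    b, a = resonator(f, bw)
    speech = lfilter(b, a, speech)
print(speech.shape, float(np.abs(speech).max()))
```

For unvoiced segments, the same resonator cascade would be driven by Gaussian white noise instead of the pulse train, as in item 7) above.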

Spectral Subtraction Using a Whitening Filter for Reducing Residual Noise (잔류잡음 감소를 위한 백색화 스펙트럼 차감법)

  • 오태호
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.411-414
    • /
    • 1998
  • Among the various methods for speech enhancement, spectral subtraction is currently the most suitable for real-time use because of its low computational cost. However, it has the serious drawback of creating new noise that was not present in the original input speech, and much research has gone into removing it. Most of this work reduces the newly created spiky noise by dulling peak values through averaging over neighboring frames or neighboring frequency components. Such methods introduce a new drawback of their own: the speech information itself is also averaged away, which becomes especially severe in unvoiced segments. In this paper we propose a method that builds a whitening filter from an LPC analysis of the input speech, applies spectral subtraction to the residual signal obtained by passing the speech through this filter, and then obtains the enhanced speech by passing the new residual through the synthesis (inverse) filter. Because formant information is better preserved during the subtraction, the proposed algorithm reduces residual noise. Listening tests confirmed that the proposed method reduces residual noise more than the conventional method.

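A minimal single-frame sketch of the pipeline in the (translated) abstract above: LPC whitening filter, spectral subtraction on the residual, then synthesis filtering. The noise frame is assumed known here purely for illustration; a real system would estimate it from speech pauses:

```python
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def lpc(frame, order=12):
    """Autocorrelation-method LPC; returns A(z) coefficients [1, -a1, ..., -ap]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate([[1.0], -a])

def spectral_subtract(x, noise_mag, floor=0.02):
    """Magnitude spectral subtraction with a small spectral floor; keeps the phase."""
    spec = np.fft.rfft(x)
    mag = np.maximum(np.abs(spec) - noise_mag, floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(x))

rng = np.random.default_rng(2)
fs, n = 8000, 512
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 620 * t)
noise = 0.3 * rng.normal(size=n)
noisy = clean + noise

a = lpc(noisy)                                    # whitening filter A(z)
residual = lfilter(a, [1.0], noisy)               # pass speech through A(z)
noise_res = lfilter(a, [1.0], noise)              # whitened noise (known here for illustration)
enhanced_res = spectral_subtract(residual, np.abs(np.fft.rfft(noise_res)))
enhanced = lfilter([1.0], a, enhanced_res)        # synthesis filter 1/A(z)
print("MSE noisy vs clean:", round(float(np.mean((noisy - clean) ** 2)), 4),
      "| enhanced vs clean:", round(float(np.mean((enhanced - clean) ** 2)), 4))
```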

Study on the Korean Consonants by Frequency Analysis (한글 자음의 주파수 분석적인 연구)

  • 신용철;최진태
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.10 no.3
    • /
    • pp.61-68
    • /
    • 1973
  • It was found that the rapidly changing consonant components of Korean speech cannot be analyzed precisely by frequency analysis alone, but only by frequency synthesis based on the speech patterns obtained from a sona-graph. Two methods were mainly used: one was to extract the frequency regions of the consonants, and the other was to observe how the formant pattern of the Korean vowel following certain consonants changes with both the manner and the place of articulation.


Study on formant transition for improvement of speech synthesis (음성 합성의 개선을 위한 포만트 변경에 관한 연구)

  • Lee Sang-hyun;Yang Sung-il;Kwon Y.
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.41-44
    • /
    • 2001
  • In this paper, to address the problem of unnatural synthesized speech caused by formant mismatch where vowels meet when speech units are concatenated, we propose a method for modifying the formants of the preceding and following speech units. In the corpus-based approach studied recently, units are selected on the basis of energy, pitch, and phoneme duration and then concatenated, but a spectral mismatch remains, and this mismatch degrades speech quality. We therefore improve quality by shifting the formants over a short region at the end of the preceding unit and at the beginning of the following unit so that they match. The formants are modified by taking the FFT of the speech signal, separating magnitude and phase, moving the magnitudes toward a target obtained by linear interpolation between the junction magnitudes of the preceding and following units, and then recombining magnitude and phase to reconstruct the signal.

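A minimal sketch of the junction-matching idea in the (translated) abstract above: FFT magnitudes of the two junction frames are pulled to a linearly interpolated target while their phases are kept; the blend_junction helper and its alpha parameter are illustrative, not the paper's exact procedure:

```python
import numpy as np

def blend_junction(frame_a, frame_b, alpha=0.5):
    """Pull the last frame of unit A and the first frame of unit B toward a
    common interpolated magnitude; phases are left untouched."""
    A, B = np.fft.rfft(frame_a), np.fft.rfft(frame_b)
    target = (1 - alpha) * np.abs(A) + alpha * np.abs(B)   # interpolated magnitude
    A2 = target * np.exp(1j * np.angle(A))
    B2 = target * np.exp(1j * np.angle(B))
    return np.fft.irfft(A2, n=len(frame_a)), np.fft.irfft(B2, n=len(frame_b))

rng = np.random.default_rng(3)
fa, fb = rng.normal(size=512), rng.normal(size=512)
fa2, fb2 = blend_junction(fa, fb)
# the two junction frames now share the same magnitude spectrum
print(float(np.abs(np.abs(np.fft.rfft(fa2)) - np.abs(np.fft.rfft(fb2))).max()))
```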

Spectral Modeling of Haegeum Using Cepstral Analysis (캡스트럼 분석을 이용한 해금의 스펙트럼 모델링)

  • Hong, Yeon-Woo;Kang, Myeong-Su;Cho, Sang-Jin;Kim, Jong-Myon;Lee, Jung-Chul;Chong, Ui-Pil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.243-250
    • /
    • 2010
  • This paper proposes spectral modeling of the Korean traditional instrument Haegeum, using cepstral analysis to naturally describe Haegeum sounds that vary with time. To obtain a precise cepstral analysis, the frame size is set to three periods of the input signal, and more cepstral coefficients are used to extract formants. Performance is further improved by flexibly controlling the cutoff frequency of the bandpass filter depending on the resonances during synthesis of the sinusoidal components, and by deleting the peaks remaining in the residual signal. To detect pitch changes, input frames are divided into silence, attack, and sustain regions, and the region of the current frame is determined. The proposed method then readjusts the frame size according to the fundamental frequency when the current frame is in the attack region, and corrects fundamental-frequency extraction errors for frames in the sustain region. With these processes, the synthesized sounds are much more similar to the originals. A listening test by a Haegeum player indicated that the synthesized sounds are almost identical to the originals (96~100 % similarity).
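
A minimal sketch of the cepstral-analysis step described above: a frame of roughly three periods is analyzed, and a low-quefrency liftered envelope yields rough resonance candidates. The Haegeum-specific refinements (adaptive bandpass cutoff, residual peak removal, attack/sustain handling) are not reproduced, and all signal values are toy examples:

```python
import numpy as np

fs, f0 = 22050, 220.0
n = int(3 * fs / f0)                                # frame of roughly 3 periods
t = np.arange(n) / fs

def amp(f):                                         # toy resonances near 800 and 2500 Hz
    return np.exp(-((f - 800.0) / 300.0) ** 2) + 0.7 * np.exp(-((f - 2500.0) / 400.0) ** 2)

x = sum(amp(k * f0) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 40))

log_mag = np.log(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
ceps = np.fft.irfft(log_mag)

n_lifter = 20                                       # keep only low-quefrency coefficients
lifter = np.zeros_like(ceps)
lifter[:n_lifter] = 1.0
lifter[-(n_lifter - 1):] = 1.0                      # symmetric counterpart
envelope = np.fft.rfft(ceps * lifter).real          # cepstrally smoothed log envelope

peaks = [i for i in range(1, len(envelope) - 1)
         if envelope[i - 1] < envelope[i] > envelope[i + 1]]
print("resonance candidates (Hz):", [round(p * fs / n) for p in peaks[:5]])
```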

A comparison of CPP analysis among breathiness ranks (기식 등급에 따른 CPP (Cepstral Peak Prominence) 분석 비교)

  • Kang, Youngae;Koo, Bonseok;Jo, Cheolwoo
    • Phonetics and Speech Sciences
    • /
    • v.7 no.1
    • /
    • pp.21-26
    • /
    • 2015
  • The aim of this study is to synthesize pathological breathy voice and to build a cepstral peak prominence (CPP) table by breathiness rank, using cepstral analysis to supplement the reliability of the perceptual auditory judgment task. The KlattGrid synthesizer included in Praat was used. Synthesis parameters consist of two groups, constants and variables. Constant parameters are pitch, amplitude, flutter, open phase, oral formants, and bandwidths; variable parameters are breathiness (BR), aspiration amplitude (AH), and spectral tilt (TL). Five hundred sixty synthetic breathy samples of the male vowel /a/ were created, and three raters ranked their breathiness. From the perceptual judgment and the cepstral analysis, 217 samples proved inadequate, leaving 343 samples. The CPP values and other related parameters from the cepstral analysis were classified under four breathiness ranks (B0~B3). The mean and standard deviation of CPP are 16.10 ± 1.15 dB (B0), 13.68 ± 1.34 dB (B1), 10.97 ± 1.41 dB (B2), and 3.03 ± 4.07 dB (B3). The CPP value decreases toward the severe breathiness group because of increased noise and a smaller quantity of harmonics.
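
A minimal sketch of a CPP-style measure as used above: the cepstral peak in a plausible pitch range is compared with a regression line fitted to the cepstrum, in dB. Windowing, frame averaging, and the exact quefrency range differ between implementations, so these numbers will not match Praat or the paper's table:

```python
import numpy as np

def cpp(frame, fs, f0_range=(60.0, 330.0)):
    """CPP-style measure: cepstral peak height above a regression line, in dB."""
    log_mag = np.log10(np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12)
    ceps_db = 20 * np.log10(np.abs(np.fft.irfft(log_mag)) + 1e-12)
    q = np.arange(len(ceps_db)) / fs                       # quefrency axis in seconds
    lo, hi = int(fs / f0_range[1]), int(fs / f0_range[0])  # search window for the peak
    peak = lo + int(np.argmax(ceps_db[lo:hi]))
    slope, intercept = np.polyfit(q[lo:hi], ceps_db[lo:hi], 1)
    return float(ceps_db[peak] - (slope * q[peak] + intercept))

fs = 16000
t = np.arange(4096) / fs
periodic = sum(np.sin(2 * np.pi * k * 150.0 * t) for k in range(1, 15))
breathy = periodic + 5.0 * np.random.default_rng(4).normal(size=len(t))
print("CPP, periodic frame:", round(cpp(periodic, fs), 2), "dB")   # higher
print("CPP, noisy frame   :", round(cpp(breathy, fs), 2), "dB")    # lower
```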