• Title/Summary/Keyword: Voice Synthesis


Physiologic Phonetics for Korean Stop Production (한국어 자음생성의 생리음성학적 특성)

  • Hong, Ki-Hwan; Yang, Yoon-Soo
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics / v.17 no.2 / pp.89-97 / 2006
  • The stop consonants in Korean are classified into three types according to the manner of articulation: unaspirated (UA), slightly aspirated (SA), and heavily aspirated (HA) stops. Both the UA and the HA types are always voiceless in any environment. Generally, voice onset time (VOT) can be measured spectrographically from the release of the consonant burst to the onset of the following vowel. The VOT of the UA type is within 20 msec of the burst, about 40-50 msec for the SA type, and 50-70 msec for the HA type. There have been many efforts to clarify the properties that differentiate these manner categories. Umeda et al. [1] found that the fundamental frequency at voice onset after both the UA and HA consonants was higher than that after the SA consonants, and that voice onset times were longest for the HA, followed by the SA and UA. Han et al. [2] reported in their speech synthesis and perception studies that the SA and UA stops differed primarily in terms of a gradual versus a relatively rapid intensity build-up of the following vowel after the stop release. Lee et al. [3] measured both intraoral and subglottal air pressure and found that subglottal pressure was higher for the HA stop than for the other two stops. They also compared the dynamic pattern of the subglottal pressure slope for the three categories and found that the HA stop showed the most rapid increase in subglottal pressure in the period immediately before the stop release. Kagaya [4] reported fiberscopic and acoustic studies of the Korean stops. He noted that the UA type may be characterized by a completely adducted state of the vocal folds, stiffened vocal folds, and an abrupt decrease of stiffness near voice onset, while the HA type may be characterized by an extensively abducted state of the vocal folds and a heightened subglottal pressure. None of these positive gestures are observed for the SA type.
Hong et al. [5] studied electromyographic activity of the thyroarytenoid and posterior cricoarytenoid (PCA) muscles during stop production. They reported a marked and early activation of the PCA muscle, associated with a steep reactivation of the thyroarytenoid muscle before voice onset, in the production of the HA consonants. For the UA consonants, little or no activation of the PCA muscle and the earliest and most marked reactivation of the thyroarytenoid muscle were characteristic. For the SA consonants, they reported a more moderate activation of the PCA muscle than for the UA consonants, and the least and latest reactivation of the thyroarytenoid muscle. Hong et al. [6] observed the vibratory movements of the vocal fold edges in terms of laryngeal gestures for the different types of stop consonants. The movements of the vocal fold edges were evaluated using high-speed digital images. EGG signals and acoustic waveforms were also evaluated and related to the vibratory movements of the vocal fold edges during stop production.
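The VOT ranges quoted above can be read as a rough manner classifier. A minimal sketch, with thresholds taken directly from the abstract (illustrative only, since the SA and HA ranges overlap near 50 msec):

```python
def classify_stop_by_vot(vot_ms):
    """Rough Korean stop manner classification from voice onset time (ms).
    Thresholds follow the ranges quoted in the abstract; the SA/HA
    boundary at 50 ms is an arbitrary cut within the overlapping ranges."""
    if vot_ms <= 20:
        return "UA"   # unaspirated: within 20 ms of the burst
    elif vot_ms <= 50:
        return "SA"   # slightly aspirated: about 40-50 ms
    else:
        return "HA"   # heavily aspirated: about 50-70 ms
```

In practice the categories are separated by several cues jointly (VOT, F0 at onset, intensity build-up), not by VOT alone, as the studies cited above show.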

A Study on PCFBD-MPC in 8kbps (8kbps에 있어서 PCFBD-MPC에 관한 연구)

  • Lee, See-woo
    • Journal of Internet Computing and Services / v.18 no.5 / pp.17-22 / 2017
  • In MPC coding that uses voiced and unvoiced excitation sources, the speech waveform can be distorted. This is caused by the normalization of the synthesized voiced waveform in the process of restoring the multi-pulses of the representative section. This paper presents PCFBD-MPC (Position Compensation Frequency Band Division-Multi Pulse Coding), which uses V/UV/S (Voiced/Unvoiced/Silence) switching, position compensation of the multi-pulses in each pitch interval, and approximate synthesis of unvoiced segments using specific frequency bands, in order to reduce distortion of the synthesized waveform. The PCFBD-MPC system was implemented and its SNRseg was evaluated under 8 kbps coding conditions. As a result, the SNRseg of PCFBD-MPC was 13.4 dB for female voices and 13.8 dB for male voices, respectively. Future work will evaluate the sound quality of an 8 kbps speech coding method that simultaneously compensates the amplitude and position of the multi-pulse source. These methods are expected to be applicable to speech coding at low bit rates, such as in cellular phones or smartphones.
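The SNRseg figures above are segmental SNRs: the per-frame signal-to-noise ratio averaged over frames. A minimal sketch of how such a measure is computed (the 20 ms frame at 8 kHz, i.e. 160 samples, is an assumption; the paper does not state its analysis parameters):

```python
import numpy as np

def snr_seg(clean, synth, frame_len=160):
    """Segmental SNR in dB: per-frame SNR averaged over frames.
    frame_len=160 corresponds to 20 ms at 8 kHz (an assumption)."""
    n_frames = len(clean) // frame_len
    snrs = []
    for i in range(n_frames):
        s = clean[i * frame_len:(i + 1) * frame_len]
        e = s - synth[i * frame_len:(i + 1) * frame_len]
        num, den = np.sum(s ** 2), np.sum(e ** 2)
        if num > 0 and den > 0:          # skip silent or error-free frames
            snrs.append(10.0 * np.log10(num / den))
    return float(np.mean(snrs)) if snrs else 0.0
```

Averaging per-frame ratios (rather than one global SNR) weights quiet frames equally with loud ones, which correlates better with perceived quality.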

A Study on the Pitch Detection of Speech Harmonics by the Peak-Fitting (음성 하모닉스 스펙트럼의 피크-피팅을 이용한 피치검출에 관한 연구)

  • Kim, Jong-Kuk; Jo, Wang-Rae; Bae, Myung-Jin
    • Speech Sciences / v.10 no.2 / pp.85-95 / 2003
  • In speech signal processing, it is very important to detect the pitch exactly for speech recognition, synthesis, and analysis. If the pitch is detected exactly, it can be used in analysis to obtain the vocal tract parameters properly, in synthesis to change or maintain the naturalness and intelligibility of speech quality easily, and in recognition to eliminate speaker-dependent characteristics for speaker independence. In this paper, we propose a new pitch detection algorithm. First, positive center clipping is performed using the slope of the speech signal, in order to emphasize the pitch period of the glottal component with the vocal tract characteristics removed in the time domain. A rough formant envelope is then computed through peak-fitting of the spectrum of the original speech signal in the frequency domain. From this rough formant envelope, a smoothed formant envelope is obtained by linear interpolation. A flattened harmonics waveform is then obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope. An inverse fast Fourier transform (IFFT) of these flattened harmonics yields the residual signal, from which the vocal tract component has been removed. The performance was compared with the LPC, Cepstrum, and ACF methods. With this algorithm, we obtained improved pitch detection accuracy, and the gross error rate was reduced in voiced speech regions and in transition regions between phonemes.
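The clipping-plus-correlation core of such a detector can be sketched as follows. This is a simplified illustration that keeps only symmetric center clipping and autocorrelation, omitting the paper's peak-fitted formant-envelope flattening and residual step:

```python
import numpy as np

def detect_pitch(frame, fs, clip_ratio=0.3, fmin=60, fmax=400):
    """Simplified pitch detector: center clipping followed by
    autocorrelation peak picking within a plausible pitch range.
    clip_ratio, fmin, and fmax are illustrative assumptions."""
    cl = clip_ratio * np.max(np.abs(frame))
    # Center clipping suppresses low-level (formant-dominated) samples,
    # emphasizing the glottal periodicity.
    clipped = np.where(frame > cl, frame - cl,
              np.where(frame < -cl, frame + cl, 0.0))
    ac = np.correlate(clipped, clipped, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```

The paper's spectral-flattening stage serves the same end as the clipping here (removing vocal tract influence), but operates in the frequency domain on the harmonics.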

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo; Chung, Kyeong-Chae; Kang, Sun-Mee
    • Speech Sciences / v.13 no.4 / pp.187-200 / 2006
  • Speech synthesis technology is widely used, and its application area is broadening to automatic response services, learning systems for handicapped persons, etc. However, the sound quality of speech synthesizers has not yet reached a satisfactory level for users. To produce synthesized speech, existing synthesizers generate rhythms only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so it is not easy to generate the natural rhythms of human speech even with a large speech database. To make up for this problem, one approach is to select rhythms after processing the language from higher-level information. This paper proposes a method for generating tags that control rhythms by analyzing the meaning of a sentence together with speech situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes the meaning of a sentence with speech situation information, considering the preceding sentence, the situation of the conversation, the relationships among the people in the conversation, etc. In this study, we generate the Semantic Speech Control Tag (SSCT) from the results of the SFG meaning analysis and voice wave analysis.

Implementation of Korean TTS Service on Android OS (안드로이드 OS 기반 한국어 TTS 서비스의 설계 및 구현)

  • Kim, Tae-Guon; Kim, Bong-Wan; Choi, Dae-Lim; Lee, Yong-Ju
    • The Journal of the Korea Contents Association / v.12 no.1 / pp.9-16 / 2012
  • Although Android-based smartphones are being released in Korea, a Korean TTS engine is not built into them, and Google has not officially announced a service or software developer's kit for Korean TTS. Thus, application developers who want to include Korean TTS capability in their applications face difficulties. In this paper, we design and implement an Android OS-based Korean TTS system and service. For speed, the text preprocessing and synthesis libraries are implemented using the Android NDK. By using Java's thread mechanism and the AudioTrack class, the response time of the TTS is minimized. To test the implemented service, an application that reads incoming SMS messages aloud was developed. The test shows that synthesized speech is generated in real time for arbitrary sentences. Using the implemented Korean TTS service, Android application developers can easily deliver information through voice. The Korean TTS service proposed and implemented in this paper overcomes the shortcomings of existing restrictive synthesis methods and benefits both application developers and users.

Speech synthesis using acoustic Doppler signal (초음파 도플러 신호를 이용한 음성 합성)

  • Lee, Ki-Seung
    • The Journal of the Acoustical Society of Korea / v.35 no.2 / pp.134-142 / 2016
  • In this paper, a method for synthesizing speech signals using 40 kHz ultrasonic signals reflected from the articulatory muscles is introduced, and its performance is evaluated. When ultrasonic signals are radiated toward the articulating face, Doppler effects caused by movements of the lips, jaw, and chin are observed: signals whose frequencies differ from that of the transmitted signal appear in the received signal. These ADS (Acoustic Doppler Signals) were used to estimate speech parameters in this study. Prior to synthesizing the speech signal, a quantitative correlation analysis between ADS and speech signals was carried out for each frequency bin. The results validated the feasibility of ADS-based speech synthesis. ADS-to-speech transformation was achieved by conversion rules based on a joint Gaussian mixture model. The experimental results from 5 subjects showed that filter bank energies and LPC (Linear Predictive Coefficient) cepstrum coefficients are the optimal features for ADS and speech, respectively. In a subjective evaluation in which synthesized speech signals were obtained using excitation sources extracted from the original speech signals, the ADS-to-speech conversion method yielded an average recognition rate of 72.2 %.
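The frequency shifts in such a setup follow the standard two-way Doppler relation for a reflector moving relative to a co-located transmitter/receiver. A minimal sketch (the speed of sound and the example velocity are assumptions for illustration, not values from the paper):

```python
def doppler_shift(v, f0=40_000.0, c=343.0):
    """Frequency shift (Hz) of an ultrasonic tone reflected from a
    surface moving toward the transceiver at velocity v (m/s).
    The factor 2 accounts for the round trip; c is the speed of
    sound in air at roughly room temperature (assumption)."""
    return 2.0 * v * f0 / c
```

For example, an articulator moving at about 0.1 m/s shifts a 40 kHz carrier by roughly 23 Hz, which is why the useful information concentrates in narrow bands around the carrier.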

Improvement of Synthetic Speech Quality using a New Spectral Smoothing Technique (새로운 스펙트럼 완만화에 의한 합성 음질 개선)

  • 장효종; 최형일
    • Journal of KIISE: Software and Applications / v.30 no.11 / pp.1037-1043 / 2003
  • This paper describes a speech synthesis technique using the diphone as the unit phoneme. Speech synthesis is basically accomplished by concatenating unit phonemes, and its major problem is discontinuity at the junction between unit phonemes. To solve this problem, this paper proposes a new spectral smoothing technique that reflects not only formant trajectories but also the distribution characteristics of the spectrum and human auditory characteristics. That is, the proposed technique decides the amount and extent of smoothing by considering human auditory characteristics at the junction of unit phonemes, and then performs spectral smoothing using weights calculated along the time axis at the boundary between two diphones. The proposed technique reduces the discontinuity and minimizes the distortion caused by spectral smoothing. For performance evaluation, we tested five hundred diphones extracted from twenty sentences, using ETRI Voice DB samples and individually self-recorded samples.
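Boundary smoothing of this general kind can be illustrated with a plain linear cross-fade between the spectra on either side of the diphone junction. This is a deliberate simplification: the paper derives its weights from formant trajectories and auditory characteristics rather than a fixed linear ramp.

```python
import numpy as np

def smooth_boundary(spec_a, spec_b, n_frames):
    """Cross-fade between the last magnitude spectrum of diphone A
    and the first of diphone B over n_frames frames, using linear
    weights along the time axis (a simplifying assumption)."""
    w = np.linspace(0.0, 1.0, n_frames)[:, None]   # one weight per frame
    return (1.0 - w) * spec_a + w * spec_b          # (n_frames, n_bins)
```

Perceptually weighted schemes like the paper's vary the amount and extent of this interpolation per boundary instead of applying the same ramp everywhere.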

The Development of the Internet Web Browser for the Blind (시각장애을 위한 인터넷 웹 브라우저 개발)

  • 박찬용; 장병태; 김동현
    • Proceedings of the IEEK Conference / 1998.10a / pp.829-832 / 1998
  • In this paper, we have developed an Internet web browser for blind and visually impaired persons. The system consists of a personal computer connected to the Internet, a braille display, voice synthesis devices for character information, a tactile display for representing web images, and a braille printer for printing web pages. Characters in the web page are converted to braille and output to the braille display. Images in the web page are rendered on the tactile display, which is actuated by solenoids. With this web browser system, the blind can access Internet web sites and understand the information they contain.

Style Selection for Korean Generation under the Pivot MT System (피봇 기계번역시스템에서의 한국어생성을 위한 문제선정)

  • 이종혁
    • Korean Journal of Cognitive Science / v.1 no.2 / pp.279-291 / 1989
  • Major difficulties in style selection, which guarantees the synthesis of well-styled natural expressions under the PIVOT MT system, are the absence of surface-level extra information in the language-independent intermediate representation, and the language-specific style of expressions arising from cultural differences. This paper describes an approach to style selection capable of guaranteeing more natural Korean expressions, which includes pragmatic and stylistic decisions on target voice generation under heavy passive constraints, stylistic changes of sentence structures, and supplementation of the meaning of function words with content words.

Formation of A Phonetic-Value Look-up Table for Korean Voice Synthesis (한국어 음성 합성을 위한 음가 변환 테이블 생성)

  • 이계영; 임재걸; 이태경
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.181-184 / 2001
  • To synthesize grammatically correct Korean speech, the 'Standard Pronunciation Rules' of the standard language regulations must be followed. Therefore, the rules used in a Korean speech synthesis system to convert Hangul into phonetic values must fully and consistently reflect the Standard Pronunciation Rules. Previous studies applied the Standard Pronunciation Rules without verification, and there had been no systematic attempt to analyze whether the rules themselves contain contradictions. In this paper, we model the Standard Pronunciation Rules, adopted as the basic rules for generating Korean phonetic values, with Petri nets, and verify the consistency of the rules. Furthermore, to solve the problems of existing phonetic-value generation methods, which apply phonological change phenomena in several successive stages or re-run the conversion from the beginning for changed words, we implement a phonetic-value conversion table for Korean speech synthesis in which all phonological changes are completed with a single table lookup.
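The single-lookup idea can be illustrated with a toy table keyed by adjacent consonants. The entries below model a few nasalization rules from the Standard Pronunciation Rules (e.g. 국물 pronounced [궁물], 밥만 [밤만], 닫는 [단는]) and are illustrative only, not the paper's actual table:

```python
# Toy lookup: (syllable-final consonant, next syllable-initial consonant)
# -> adjusted final consonant. Keys use romanized consonants for clarity.
SOUND_TABLE = {
    ("k", "m"): "ng",   # 국물 -> [궁물]
    ("p", "m"): "m",    # 밥만 -> [밤만]
    ("t", "n"): "n",    # 닫는 -> [단는]
}

def apply_rule(coda, onset):
    """Resolve a sound change with one table lookup; no iterative
    re-application is needed, matching the paper's design goal."""
    return SOUND_TABLE.get((coda, onset), coda)
```

A full table would have to cover every rule interaction in a single pass, which is exactly why the paper first verifies the rule set for consistency with Petri nets.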
