• Title/Abstract/Keyword: Voice synthesis

Search results: 102 items

음성합성을 이용한 병적 음성의 치료 결과에 대한 예측 (Prediction of Post-Treatment Outcome of Pathologic Voice Using Voice Synthesis)

  • 이주환;최홍식;김영호;김한수;최현승;김광문
    • 대한후두음성언어의학회지 / Vol. 14, No. 1 / pp.30-39 / 2003
  • Background and Objectives : Patients with pathologic voice are often concerned about how their voice will recover after surgery. In this investigation we assigned controlled values to three parameters of the Dr. Speech Science voice synthesis program, namely jitter, shimmer, and NNE (normalized noise energy), which characterize one speaker's voice relative to others, and devised a method to synthesize the voice predicted after surgery. Subjects and Method : Vocal jitter, vocal shimmer, and glottal noise were measured from the voices of 10 vocal cord paralysis patients and 10 vocal polyp patients 1 week before and 1 month after surgery. Using the Dr. Speech Science voice synthesis program, we synthesized /ae/ vowels closely matching the patients' preoperative and postoperative voices by controlling the values of jitter, shimmer, and glottal noise; we then analyzed the synthesized voices and compared them with the pre- and postoperative voices. Results : 1) After entering the preoperative and corrected values of jitter, shimmer, and glottal noise into the voice synthesis program, voices identical, within statistical significance, to the vocal polyp patients' pre- and postoperative voices were synthesized. 2) After eliminating synergistic effects among the three parameters, we were able to synthesize voices identical to the vocal cord paralysis patients' preoperative voices. 3) After entering only slightly increased jitter and shimmer values into the synthesis program, we were able to synthesize voices identical to the vocal cord paralysis patients' postoperative voices. Conclusion : Voices synthesized with the Dr. Speech Science program were identical to the patients' actual pre- and postoperative voices, so clinicians will be able to give patients more information, and improved patient cooperation can be expected.
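The jitter and shimmer values used in this study are standard cycle-to-cycle perturbation measures. As a rough illustration (not the Dr. Speech Science implementation), local jitter and shimmer can be computed from extracted pitch periods and cycle peak amplitudes as in the Python sketch below; the function names and the percentage formulation are assumptions for illustration only.

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    pitch periods, divided by the mean period."""
    p = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(p))) / np.mean(p)

def local_shimmer(peak_amplitudes):
    """Local shimmer (%): mean absolute difference between consecutive
    cycle peak amplitudes, divided by the mean amplitude."""
    a = np.asarray(peak_amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(a))) / np.mean(a)

# Example: periods (ms) and peak amplitudes measured from a sustained /ae/
periods = [7.9, 8.1, 8.0, 8.2, 7.8, 8.1]
amps = [0.82, 0.79, 0.84, 0.80, 0.83, 0.81]
print(local_jitter(periods), local_shimmer(amps))
```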


음성합성시스템을 위한 음색제어규칙 연구 (A Study on Voice Color Control Rules for Speech Synthesis System)

  • 김진영;엄기완
    • 음성과학 / Vol. 2 / pp.25-44 / 1997
  • When listening to the various speech synthesis systems developed and in use in Korea, we find that although their quality has improved, they lack naturalness. Moreover, since the voice color of each system is limited to its single recorded speech DB, another speech DB must be recorded to create a different voice color. 'Voice color' is an abstract concept that characterizes the personality of a voice, so speech synthesis systems need a voice color control function to create various voices. The aim of this study is to examine several factors underlying voice color control rules for a text-to-speech system that can produce natural and varied voice types in its synthetic speech. In order to derive such rules from natural speech, glottal source parameters and frequency characteristics of the vocal tract were studied for several voice colors. In this paper the voice colors were catalogued as deep, sonorous, thick, soft, harsh, high-tone, shrill, and weak. The LF model was used as the voice source model, and the formant frequencies, bandwidths, and amplitudes were used for the frequency characteristics of the vocal tract. These acoustic parameters were tested through multiple regression analysis to obtain the general relation between the parameters and the voice colors.
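The final step described in this abstract is a multiple regression of perceptual voice-color ratings on acoustic parameters. The sketch below illustrates that kind of analysis with ordinary least squares in Python/NumPy; the data shapes, parameter set, and variable names are hypothetical and do not come from the paper.

```python
import numpy as np

# Hypothetical data: one row per utterance. Columns might be LF-model and
# vocal-tract measures, e.g. [open quotient, spectral tilt, F1, B1, F2, B2];
# the values here are placeholders.
rng = np.random.default_rng(0)
X = rng.random((40, 6))          # acoustic parameters for 40 utterances
y = rng.random(40)               # listener ratings for one voice color, e.g. "soft"

# Ordinary least squares: y ~ X @ w + c
Xa = np.hstack([X, np.ones((X.shape[0], 1))])      # add intercept column
coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)
w, c = coef[:-1], coef[-1]

# Coefficient of determination R^2 as a crude measure of fit
pred = Xa @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("weights:", w, "intercept:", c, "R^2:", r2)
```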


기본주파수와 성도길이의 상관관계를 이용한 HTS 음성합성기에서의 목소리 변환 (Voice transformation for HTS using correlation between fundamental frequency and vocal tract length)

  • 유효근;김영관;서영주;김회린
    • 말소리와 음성과학 / Vol. 9, No. 1 / pp.41-47 / 2017
  • The main advantage of statistical parametric speech synthesis is its flexibility in changing voice characteristics. A personalized text-to-speech (TTS) system can be implemented by combining a speech synthesis system with a voice transformation system, and such systems are widely used in many application areas. It is known that the fundamental frequency and the spectral envelope of a speech signal can be modified independently to convert the voice characteristics, and it is also important to maintain the naturalness of the transformed speech. In this paper, a hidden Markov model based speech synthesis system (HTS) using the STRAIGHT vocoder is constructed, and voice transformation is carried out by modifying the fundamental frequency and the spectral envelope. The fundamental frequency is transformed by a scaling method, and the spectral envelope is transformed through frequency warping to control the speaker's vocal tract length. In particular, this study proposes a voice transformation method that uses the correlation between fundamental frequency and vocal tract length. Subjective evaluations were conducted to assess preference and mean opinion scores (MOS) for the naturalness of the synthetic speech. Experimental results showed that the proposed voice transformation method achieved higher preference than the baseline systems while maintaining the naturalness of the speech quality.
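F0 scaling and frequency warping of the spectral envelope are the two operations named in this abstract. The following Python sketch shows one plausible per-frame implementation; the linear warping by interpolation, the function name, and the power-law coupling of the two factors are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def transform_frame(env, f0, alpha, beta, fs=16000):
    """Scale F0 by alpha and shift formants by a warping factor beta.
    env : spectral-envelope samples on a linear frequency axis (e.g., from STRAIGHT)
    beta > 1 moves formants up (shorter effective vocal tract), beta < 1 moves them down."""
    k = len(env)
    f = np.linspace(0.0, fs / 2.0, k)          # original frequency axis
    warped_env = np.interp(f / beta, f, env)   # new_env(f) = old_env(f / beta)
    return warped_env, f0 * alpha

# Coupling the two factors: the paper exploits a correlation between F0 and
# vocal tract length; the exponent below is only a placeholder for illustration.
alpha = 1.3                 # raise F0 by 30 %
beta = alpha ** 0.4         # derive a matching formant-shift factor
```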

Dr. Speech Science의 음성합성프로그램을 이용하여 합성한 정상음성과 병적음성(Pathologic Voice)의 음향학적 분석 (Acoustic Analysis of Normal and Pathologic Voice Synthesized with Voice Synthesis Program of Dr. Speech Science)

  • 최홍식;김성수
    • 대한후두음성언어의학회지 / Vol. 12, No. 2 / pp.115-120 / 2001
  • In this paper, we synthesized the vowel /ae/ with the voice synthesis program of Dr. Speech Science, and we also synthesized pathologic /ae/ vowels by varying parameters such as high frequency gain (HFG), low frequency gain (LFG), pitch flutter (PF), which represents the jitter value, and flutter of amplitude (FA), which represents the shimmer value, each graded as mild, moderate, or severe. We then analyzed all the pathologic voices with the analysis program of Dr. Speech Science. We expect that these synthesized pathologic voices will be useful for understanding parameters such as noise, jitter, and shimmer, and for providing feedback to patients with voice disorders.


HMM 기반 TTS와 MusicXML을 이용한 노래음 합성 (Singing Voice Synthesis Using HMM Based TTS and MusicXML)

  • 칸 나지브 울라;이정철
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 5 / pp.53-63 / 2015
  • Singing voice synthesis is the generation of singing on a computer from given lyrics and a musical score. HMM-based speech synthesizers, widely used in text-to-speech systems, have recently been applied to singing voice synthesis as well. However, existing implementations require collecting and training on a large singing voice database, which makes them difficult to build. In addition, existing commercial singing synthesis systems use a piano-roll score representation that is unfamiliar to ordinary users, so a user interface based on easy-to-read standard musical notation is needed to make learning songs more convenient. To address these problems, this paper proposes a method that generates singing by reusing the HMM models of an existing read-speech synthesizer and changing the HMM parameter values with pitch and duration control methods suited to singing. It also proposes implementing a singing voice synthesis system with a MusicXML-based score editor as the front end for entering notes and lyrics and an HMM-based text-to-speech synthesizer as the back end. Singing synthesized with the proposed method was evaluated, and the results confirmed its potential for practical use.
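A MusicXML front end of the kind described here has to turn each note into a pitch target and a duration for the synthesizer. The sketch below extracts those values from a simple, un-namespaced, single-part MusicXML file using Python's standard library; the function name, the fixed tempo argument, and the handling of rests, grace notes, and lyrics are assumptions for illustration, not the paper's implementation.

```python
import xml.etree.ElementTree as ET

STEP_TO_SEMITONE = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def notes_from_musicxml(path, tempo_bpm=120.0):
    """Return a list of (f0_hz, duration_sec, lyric) tuples, one per note.
    Assumes a plain (un-namespaced) single-part score; rests get f0_hz = 0."""
    root = ET.parse(path).getroot()
    divisions = int(root.find('.//divisions').text)   # divisions per quarter note
    sec_per_division = 60.0 / tempo_bpm / divisions
    notes = []
    for note in root.iter('note'):
        dur_el = note.find('duration')
        if dur_el is None:                             # e.g., grace notes: skip
            continue
        duration = int(dur_el.text) * sec_per_division
        pitch = note.find('pitch')
        if pitch is None:                              # rest
            notes.append((0.0, duration, None))
            continue
        step = pitch.find('step').text
        octave = int(pitch.find('octave').text)
        alter_el = pitch.find('alter')
        alter = int(alter_el.text) if alter_el is not None else 0
        midi = (octave + 1) * 12 + STEP_TO_SEMITONE[step] + alter
        f0_hz = 440.0 * 2.0 ** ((midi - 69) / 12.0)    # A4 = 440 Hz reference
        lyric_el = note.find('lyric/text')
        lyric = lyric_el.text if lyric_el is not None else None
        notes.append((f0_hz, duration, lyric))
    return notes
```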

HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구 (A Study on the Voice Conversion with HMM-based Korean Speech Synthesis)

  • 김일환;배건성
    • 대한음성학회지:말소리 / Vol. 68 / pp.65-74 / 2008
  • A statistical parametric speech synthesis system based on hidden Markov models (HMMs) has grown in popularity over the last few years because it needs less memory and lower computational complexity, and is thus better suited to embedded systems than a corpus-based unit-concatenation text-to-speech (TTS) system. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristics conversion using an HMM-based Korean speech synthesis system. The results show that conversion of voice characteristics can be achieved using only a few sentences uttered by a target speaker. Synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.
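The adaptation step described in this abstract pulls the average-voice HMM parameters toward a target speaker from a handful of utterances. The following sketch shows a heavily simplified idea in the spirit of linear-transform (MLLR-style) adaptation: estimate one global affine transform of the Gaussian means by least squares from state-aligned adaptation data. The function names, data shapes, and single-transform simplification are illustrative assumptions and not the adaptation procedure used in the paper.

```python
import numpy as np

def estimate_global_transform(state_means, target_means):
    """Estimate one affine transform (W, b) so that W @ mu + b approximates the
    target speaker's mean observation for each HMM state.
    state_means  : (N, D) Gaussian means of the average-voice model
    target_means : (N, D) mean of target-speaker frames aligned to each state"""
    n, d = state_means.shape
    X = np.hstack([state_means, np.ones((n, 1))])     # augment with a bias column
    Wb, *_ = np.linalg.lstsq(X, target_means, rcond=None)
    W, b = Wb[:d].T, Wb[d]
    return W, b

def adapt_means(state_means, W, b):
    """Apply the transform to every state mean of the average-voice model."""
    return state_means @ W.T + b
```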


음악제작을 위한 음성합성엔진의 활용과 기술 (Application and Technology of Voice Synthesis Engine for Music Production)

  • 박병규
    • 디지털콘텐츠학회 논문지 / Vol. 11, No. 2 / pp.235-242 / 2010
  • Unlike earlier synthesizers, which were limited to synthesizing instrument sounds and timbres, voice synthesis engines for music production carry samples of the human voice for each phoneme and join the phonemes smoothly in the frequency domain, reaching a level comparable to an actual person singing. Users do not restrict these engines to music production; through derivative works such as character concerts, video production, records, and mobile services, they are creating new forms of music and changing the cultural paradigm. Current voice synthesis engine technology lets the user enter the desired notes, lyrics, and musical expression parameters through a score editor, after which the engine retrieves recorded singing-voice samples from a database and combines and connects the pronunciations so that they are sung. The new musical forms derived from these advances in computer music technology are having a considerable cultural impact. Accordingly, by examining concrete use cases and exploring the synthesis technology, this paper aims to help users understand and master voice synthesis engines and to support their diverse music production.

음성의 준주기적 현상 분석 및 구현에 관한 연구 (Analysis and synthesis of pseudo-periodicity on voice using source model approach)

  • 조철우
    • 말소리와 음성과학 / Vol. 8, No. 4 / pp.89-95 / 2016
  • The purpose of this work is to analyze and synthesize the pseudo-periodicity of voice using a source model. A speech signal has periodic characteristics, but it is not completely periodic. While periodicity contributes significantly to the production of prosody, emotional status, and so on, pseudo-periodicity contributes to the distinction between normal and abnormal status, to the naturalness of normal speech, and so on. Pseudo-periodicity is typically measured through parameters such as jitter and shimmer. When studying the pseudo-periodic nature of voice from collected natural speech, we can only observe the distributions of these parameters, which are limited by the size of the collected data; if voice samples can be generated in a controlled manner, more diverse experiments become possible. In this study, the probability distributions of vowel pitch variation are obtained from the speech signal. Based on that probability distribution, vocal fold pulse trains with a designated jitter value are synthesized, and the target and re-analyzed jitter values are compared to check the validity of the method. The jitter synthesis method was found to be useful for normal voice synthesis.
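The central loop in this abstract is: pick a target jitter value, synthesize a pulse train whose period lengths are perturbed accordingly, then re-measure the jitter. The Python sketch below does this with a Gaussian period perturbation; the closed-form link between the target jitter and the noise standard deviation, and the unit-impulse excitation, are simplifying assumptions rather than the paper's source model.

```python
import numpy as np

def synth_jittered_pulses(f0=120.0, jitter_pct=1.0, n_periods=200, fs=16000, seed=0):
    """Build an impulse train whose periods are perturbed by Gaussian noise sized
    for a target local jitter, then re-measure the jitter from the periods."""
    rng = np.random.default_rng(seed)
    t0 = fs / f0                                       # nominal period in samples
    # For i.i.d. Gaussian period noise, E|p(n) - p(n+1)| = 2*sigma/sqrt(pi),
    # so sigma = jitter * T0 * sqrt(pi) / 2 yields the requested local jitter.
    sigma = (jitter_pct / 100.0) * t0 * np.sqrt(np.pi) / 2.0
    periods = t0 + rng.normal(0.0, sigma, n_periods)
    marks = np.cumsum(periods).astype(int)             # pulse positions (samples)
    x = np.zeros(marks[-1] + 1)
    x[marks] = 1.0                                      # unit glottal pulses
    measured = 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)
    return x, measured

x, measured_jitter = synth_jittered_pulses(jitter_pct=1.5)
print(f"target 1.5 %, re-measured {measured_jitter:.2f} %")
```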

대화식 휴대용 영어학습기 개발 (Development of Portable Conversation-Type English Learner)

  • 유재택;윤태섭
    • 대한전기학회:학술대회논문집 / 대한전기학회 2004년도 심포지엄 논문집 정보 및 제어부문 / pp.147-149 / 2004
  • Although most people have studied English for a long time, their English conversation ability is low. If portable conversation-type English learning devices built with computer and information processing technology are provided, they can be used to improve conversation ability through routine conversation practice. The core technology for such a learner is a voice recognition and synthesis module running in an embedded environment. This paper deals with voice recognition and synthesis, a prototype learner module using a DSP (digital signal processing) chip for voice processing, voice playback, a flash memory file system, PC download over USB, English conversation texts stored on SMC (SmartMedia Card) flash memory, an LCD display, MP3 music playback, and related functions. The application areas of a prototype equipped with these functions are broad, e.g., portable language learners, entertainment devices, children's toys, voice control, and voice-based security.


Jitter 합성에 의한 음질변환에 관한 연구 (Voice quality transform using jitter synthesis)

  • 조철우
    • 말소리와 음성과학 / Vol. 10, No. 4 / pp.121-125 / 2018
  • This paper describes procedures for changing and measuring voice quality in terms of jitter. The jitter synthesis method was applied to the TD-PSOLA analysis and re-synthesis framework of the Praat software. The jitter component is synthesized based on a Gaussian random noise model, and the TD-PSOLA re-synthesis process is used to generate the modified voice with artificial jitter. Various vocal jitter parameters are used to measure the change in quality caused by the systematic artificial changes in jitter. Synthetic vowels, natural vowels, and short sentences are used to check the change in voice quality through the synthesizer model. The results show that the suggested method is useful, within limits, for voice quality control and can be used to alter the jitter component of a voice.
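TD-PSOLA makes this kind of manipulation straightforward: each pitch period is excised with a two-period window centred on a pitch mark and re-placed at a perturbed mark position. The sketch below shows that idea in plain NumPy, assuming the pitch marks have already been extracted (for example, from a Praat PointProcess); the function name, the Hann window choice, and the small-jitter assumption are illustrative and not the paper's exact procedure.

```python
import numpy as np

def add_jitter_psola(x, marks, jitter_pct, seed=0):
    """Overlap-add re-synthesis of x with Gaussian-jittered pitch-mark spacing.
    x     : 1-D speech signal (a sustained vowel works best)
    marks : sorted pitch-mark sample indices covering the voiced portion,
            starting a little after sample 0.
    Assumes jitter_pct is small enough that the perturbed periods stay positive."""
    rng = np.random.default_rng(seed)
    periods = np.diff(marks).astype(float)
    sigma = (jitter_pct / 100.0) * periods.mean() * np.sqrt(np.pi) / 2.0
    new_periods = periods + rng.normal(0.0, sigma, len(periods))
    new_marks = np.concatenate(([marks[0]], marks[0] + np.cumsum(new_periods)))
    new_marks = new_marks.astype(int)
    y = np.zeros(new_marks[-1] + int(periods.max()) + 1)
    for i in range(1, len(marks) - 1):
        left = marks[i] - marks[i - 1]                 # samples to previous mark
        right = marks[i + 1] - marks[i]                # samples to next mark
        segment = x[marks[i] - left : marks[i] + right]
        window = np.hanning(len(segment))              # two-period Hann window
        c = new_marks[i]                               # jittered centre position
        y[c - left : c + right] += segment * window
    return y
```

The pitch marks themselves are not computed here; in a Praat-based workflow they would come from a PointProcess analysis of the recording, which is the part the paper delegates to the TD-PSOLA machinery of Praat.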