• Title/Summary/Keyword: Korean speech

Prosodic Contour Generation for Korean Text-To-Speech System Using Artificial Neural Networks

  • Lim, Un-Cheon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.2E
    • /
    • pp.43-50
    • /
    • 2009
  • To get more natural synthetic speech from a Korean TTS (Text-To-Speech) system, we have to know all the possible prosodic rules of spoken Korean. We should find these rules from linguistic and phonetic information or from real speech. In general, all of these rules are integrated into a prosody-generation algorithm in a TTS system. But such an algorithm cannot cover all the possible prosodic rules of a language and is not perfect, so the naturalness of the synthesized speech is not as good as we would expect. ANNs (Artificial Neural Networks) can instead be trained to learn the prosodic rules of spoken Korean. To train and test the ANNs, we need to prepare the prosodic patterns of all the phonemic segments in a prosodic corpus. The prosodic corpus includes meaningful sentences that represent all the possible prosodic rules. Sentences in the corpus were constructed by selecting series of words from a list of PB (Phonetically Balanced) isolated words. These sentences were read by speakers, recorded, and collected as a speech database. By analyzing the recorded speech, we can extract the prosodic pattern of each phoneme and assign these patterns as training targets and test patterns for the ANNs. The ANNs learn prosody from natural speech and generate the prosodic pattern of the central phonemic segment of a phoneme string as their output response when the phoneme strings of a sentence are given as input stimuli.
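
The abstract does not specify the network topology or feature encoding, so the following is only a minimal sketch of the idea: a small feed-forward network (scikit-learn's MLPRegressor, an assumption) mapping a one-hot-encoded window of phoneme identities to duration and F0 targets for the centre phoneme. The phoneme inventory, window size, and training values are toy placeholders.

```python
# Minimal sketch of the ANN-based prosody mapping described above (assumptions:
# 5-phoneme context window, one-hot phoneme encoding, (duration, F0) targets).
import numpy as np
from sklearn.neural_network import MLPRegressor

PHONES = ["a", "i", "u", "k", "t", "n", "s", "sil"]   # toy phoneme inventory
P2I = {p: i for i, p in enumerate(PHONES)}

def encode_window(window):
    """One-hot encode a context window of phonemes into a single input vector."""
    vec = np.zeros(len(window) * len(PHONES))
    for slot, ph in enumerate(window):
        vec[slot * len(PHONES) + P2I[ph]] = 1.0
    return vec

# Toy training data: phoneme windows and (duration_ms, mean_f0_hz) of the centre phone.
windows = [["sil", "k", "a", "n", "i"],
           ["a", "n", "i", "t", "a"],
           ["n", "i", "t", "a", "sil"]]
X = np.stack([encode_window(w) for w in windows])
y = np.array([[85.0, 180.0],
              [60.0, 210.0],
              [70.0, 150.0]])

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(X, y)                                          # learn phoneme context -> prosody
print(net.predict(encode_window(["a", "n", "i", "t", "a"]).reshape(1, -1)))
```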

Analysis of Acoustic Characteristics of Vowel and Consonants Production Study on Speech Proficiency in Esophageal Speech (식도발성의 숙련 정도에 따른 모음의 음향학적 특징과 자음 산출에 대한 연구)

  • Choi, Seong-Hee;Choi, Hong-Shik;Kim, Han-Soo;Lim, Sung-Eun;Lee, Sung-Eun;Pyo, Hwa-Young
    • Speech Sciences
    • /
    • v.10 no.3
    • /
    • pp.7-27
    • /
    • 2003
  • Esophageal speech uses esophageal air during phonation. Fluent esophageal speakers take in air easily during oral communication, whereas unskilled esophageal speakers have difficulty and swallow a lot of air. The purpose of this study was to investigate differences in the acoustic characteristics of vowel and consonant production according to speech proficiency level in esophageal speech. Subjects were 13 normal male speakers and 13 male esophageal speakers (5 unskilled, 8 skilled), aged 50 to 70 years. The stimuli were a sustained /a/ vowel and 36 meaningless two-syllable words. The vowel used was /a/ and there were 18 consonants: /k, n, t, m, p, s, c, cʰ, kʰ, tʰ, pʰ, h, l, k', t', p', s', c'/. Fundamental frequency (Fx), jitter, shimmer, HNR, and MPT were measured by electroglottography using Lx Speech Studio (Laryngograph Ltd, London, UK). The 36 meaningless words produced by the esophageal speakers were presented to 3 speech-language pathologists, who phonetically transcribed the responses. The Fx, jitter, and HNR parameters differed significantly between skilled and unskilled esophageal speakers (p<.05). Considering manner of articulation, ANOVA showed significant differences between the two esophageal speech groups according to speech proficiency: in the unskilled group, glides were confused most often with other phoneme classes and affricates were the most intelligible, whereas in the skilled group fricatives produced the most confusions and nasals were the most intelligible. For place of articulation, glottal /h/ was the most confused consonant in both groups; bilabials were the most intelligible in the skilled group and velars in the unskilled group. For syllable structure, 'CV+V' was more often confused in the skilled group, while the unskilled group showed similar confusion rates for both structures. In unskilled esophageal speech, the significantly different Fx, jitter, and HNR parameters of the vowel and the frequent confusions of liquid and nasal consonants can be attributed to unstable, improper contact of the neoglottis as a vibratory source, an insufficient phonatory air supply, and the higher motoric demands on the remaining articulators due to the morphological characteristics of the vocal tract after laryngectomy.
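
As a rough illustration of the perturbation measures reported above, local jitter and shimmer can be computed from cycle-to-cycle period and amplitude sequences once glottal cycles have been extracted (the study obtained them with Lx Speech Studio from the electroglottographic signal; the extraction step is assumed here and the cycle values below are toy numbers).

```python
# Rough illustration of the reported perturbation measures (jitter, shimmer).
# Assumption: glottal cycle periods (s) and peak amplitudes are already extracted.
import numpy as np

def local_jitter(periods):
    """Mean absolute cycle-to-cycle period difference relative to mean period (%)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amps):
    """Mean absolute cycle-to-cycle amplitude difference relative to mean amplitude (%)."""
    amps = np.asarray(amps, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)

periods = [0.0082, 0.0080, 0.0083, 0.0079, 0.0084]   # toy cycle lengths in seconds
amps    = [0.61, 0.58, 0.63, 0.57, 0.60]             # toy peak amplitudes
print(f"Fx ~ {1.0 / np.mean(periods):.1f} Hz, "
      f"jitter ~ {local_jitter(periods):.2f} %, "
      f"shimmer ~ {local_shimmer(amps):.2f} %")
```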

Detection of Laryngeal Pathology in Speech Using Multilayer Perceptron Neural Networks (다층 퍼셉트론 신경회로망을 이용한 후두 질환 음성 식별)

  • Kang Hyun Min;Kim Yoo Shin;Kim Hyung Soon
    • Proceedings of the KSPS conference
    • /
    • 2002.11a
    • /
    • pp.115-118
    • /
    • 2002
  • Neural networks are known to have great discriminative power in pattern classification problems. In this paper, multilayer perceptron neural networks are employed to automatically detect laryngeal pathology in speech. In addition, new feature parameters are introduced that reflect the periodicity of speech and its perturbation. These parameters, together with cepstral coefficients, are used as input to the multilayer perceptron. In experiments on a Korean disordered-speech database, incorporating the new parameters alongside cepstral coefficients outperforms using cepstral coefficients alone.
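
A minimal sketch of this kind of detector, assuming scikit-learn and synthetic stand-in features rather than the Korean disordered-speech database used in the paper: cepstral coefficients are concatenated with periodicity/perturbation-style parameters and fed to a multilayer perceptron classifier.

```python
# Sketch: cepstral + perturbation features into an MLP pathology classifier.
# Feature values are random placeholders, not real speech measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, n_cep = 40, 12
cepstra = rng.normal(size=(n, n_cep))        # stand-in cepstral coefficients
perturb = rng.normal(size=(n, 2))            # stand-in periodicity/perturbation features
X = np.hstack([cepstra, perturb])
y = rng.integers(0, 2, size=n)               # 0 = normal, 1 = pathological (toy labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```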

Performance Analysis of Automatic Mispronunciation Detection Using Speech Recognizer (음성인식기를 이용한 발음오류 자동분류 결과 분석)

  • Kang Hyowon;Lee Sangpil;Bae Minyoung;Lee Jaekang;Kwon Chulhong
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.29-32
    • /
    • 2003
  • This paper proposes an automatic pronunciation correction system that provides users with correction guidelines for each pronunciation error. For this purpose, we develop an HMM speech recognizer that automatically classifies pronunciation errors made when Koreans speak a foreign language, and we collect a speech database of native and nonnative speakers using phonetically balanced word lists. We then analyze the mispronunciation types found in an automatic mispronunciation detection experiment using the speech recognizer.
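
The HMM recognizer itself is not reproduced here; the sketch below only illustrates the error-classification step, under the assumption that the recognizer's phone output is already available, by aligning it against the canonical phone string and labelling substitutions, deletions, and insertions.

```python
# Sketch of the error-classification step: align the expected (canonical) phone
# string against the recognizer output and label each mismatch.
from difflib import SequenceMatcher

def classify_errors(expected, recognized):
    """Return (type, expected phones, recognized phones) tuples for each mismatch."""
    errors = []
    sm = SequenceMatcher(a=expected, b=recognized, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":
            errors.append(("substitution", expected[i1:i2], recognized[j1:j2]))
        elif op == "delete":
            errors.append(("deletion", expected[i1:i2], []))
        elif op == "insert":
            errors.append(("insertion", [], recognized[j1:j2]))
    return errors

expected   = ["r", "ai", "s"]          # toy canonical phones for "rice"
recognized = ["r", "ai", "s", "eu"]    # toy recognizer output with vowel epenthesis
print(classify_errors(expected, recognized))
```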

Prosody of cerebral palsic adults' speech (뇌성마비 성인 발화의 운율 특징)

  • Lee, Sook-Hyang;Ko, Hyun-Ju;Kim, Soo-Jin
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.49-51
    • /
    • 2007
  • The purpose of this study is to investigate the prosodic characteristics of the speech of adults with cerebral palsy. The results showed correlations between articulation scores and prosodic properties: speakers with low articulation scores showed a slower speech rate, a larger number of IPs and pauses, and longer pause durations. They also showed steeper [L+H] slopes in their APs.
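
A sketch of how the reported measures (speech rate, number of pauses, pause duration) can be computed, assuming utterances have already been segmented into labelled intervals, e.g. from a Praat TextGrid; the interval values below are invented.

```python
# Sketch of the pause/speech-rate measurements underlying the comparison above.
def prosody_stats(intervals):
    """intervals: list of (label, start_s, end_s); 'pause' marks silent intervals."""
    pauses = [(e - s) for lab, s, e in intervals if lab == "pause"]
    syllables = [lab for lab, s, e in intervals if lab != "pause"]
    total = intervals[-1][2] - intervals[0][1]
    return {
        "speech_rate_syll_per_s": len(syllables) / total,
        "n_pauses": len(pauses),
        "mean_pause_s": sum(pauses) / len(pauses) if pauses else 0.0,
    }

utt = [("ne", 0.0, 0.22), ("pause", 0.22, 0.55), ("il", 0.55, 0.80), ("do", 0.80, 1.05)]
print(prosody_stats(utt))
```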

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.10
    • /
    • pp.1364-1369
    • /
    • 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare the approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
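
By way of illustration, an utterance-level feature vector of the kind surveyed in the paper can combine spectral (MFCC), prosodic (pitch), and energy statistics. librosa is used here only as a convenient stand-in toolkit (the paper does not prescribe one), and the toy sine "utterance" is obviously not emotional speech.

```python
# Illustrative utterance-level feature extraction for speech emotion recognition.
import numpy as np
import librosa

def emotion_features(y, sr):
    """Concatenate utterance-level MFCC, pitch, and energy statistics."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral envelope
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)          # pitch track
    rms = librosa.feature.rms(y=y)                         # frame energy
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [np.nanmean(f0), np.nanstd(f0)],
        [rms.mean(), rms.std()],
    ])

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = (0.1 * np.sin(2 * np.pi * 180 * t)).astype(np.float32)  # toy 180 Hz "utterance"
print(emotion_features(y, sr).shape)                        # e.g. (30,)
```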

A Study on Korean Isolated Word Speech Detection and Recognition using Wavelet Feature Parameter (Wavelet 특징 파라미터를 이용한 한국어 고립 단어 음성 검출 및 인식에 관한 연구)

  • Lee, Jun-Hwan;Lee, Sang-Beom
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.7
    • /
    • pp.2238-2245
    • /
    • 2000
  • In this paper, feature parameters extracted with the Wavelet transform from Korean isolated-word speech are used as features for speech detection and recognition. For speech detection, the Wavelet-based features are shown to locate speech boundaries more accurately than the method using energy and zero-crossing rate. For recognition, Wavelet-based feature parameters corresponding to MFCC give results equal to those of MFCC feature parameters computed with the FFT. This verifies the usefulness of Wavelet-transform feature parameters for speech analysis and recognition.
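
A sketch of wavelet-based endpoint detection in the spirit of the paper: per-frame energy of the wavelet detail coefficients is thresholded instead of raw energy and zero-crossing rate. PyWavelets with a db4 wavelet and 3 decomposition levels, plus the frame length and threshold, are assumptions, not details taken from the paper.

```python
# Sketch of wavelet-based speech endpoint detection (assumed parameters).
import numpy as np
import pywt

def wavelet_frame_energy(frame, wavelet="db4", level=3):
    """Total energy of the detail coefficients of one frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return sum(float(np.sum(c ** 2)) for c in coeffs[1:])   # skip approximation band

def detect_speech(signal, frame_len=256, threshold=1e-3):
    """Return (start, end) sample indices spanning frames above the energy threshold."""
    flags = [wavelet_frame_energy(signal[i:i + frame_len]) > threshold
             for i in range(0, len(signal) - frame_len, frame_len)]
    if not any(flags):
        return None
    first = flags.index(True)
    last = len(flags) - 1 - flags[::-1].index(True)
    return first * frame_len, (last + 1) * frame_len

sr = 16000
sig = np.concatenate([np.zeros(4000),
                      0.3 * np.sin(2 * np.pi * 1500 * np.arange(8000) / sr),
                      np.zeros(4000)])
print(detect_speech(sig))   # approximate sample indices of the tone burst
```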

Phonetic Contrasts of One-syllable Words and Speech Intelligibility in Adults with Hearing Impairments (청각장애 성인의 일음절 낱말대조 명료도 특성)

  • Kim Soo-Jin;Do Yeon-Ji
    • MALSORI
    • /
    • no.56
    • /
    • pp.1-13
    • /
    • 2005
  • This study examined the speech intelligibility of one-syllable words with phonetic contrasts and analyzed the segmental factors that predict overall speech intelligibility in hearing-impaired adults. To identify the speech error characteristics, a Korean word list was audio-recorded by 7 hearing-impaired adults, and 35 listeners selected the heard word from 5 choices. Based in part on previous studies of the speech of the hearing impaired, the word list consisted of monosyllabic consonant-vowel-consonant (CVC) real-word pairs, and the stimulus words included 77 phonetic contrast pairs. The results showed that the percentage of errors for the final-position (coda) contrast was higher than for any other position in the syllable. The phonetic-contrast factors underlying the intelligibility deficit were then analyzed through stepwise regression: overall intelligibility was predicted by the error rates of the manner contrast at the coda, the voicing contrast (homorganic triplets) at the onset, and the high-low contrast at the nucleus.
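
Illustrative only: a regression relating overall intelligibility to per-contrast error rates can be set up as below. The study used stepwise selection over the 77 contrast pairs, whereas this sketch simply fits an ordinary least-squares model on three synthetic predictors named after the contrasts found significant.

```python
# Toy regression of overall intelligibility on per-contrast error rates.
# All numbers are synthetic; only the choice of predictors follows the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

# rows = speakers; columns = error rates for manner@coda, voicing@onset, high-low@nucleus
X = np.array([[0.42, 0.30, 0.18],
              [0.25, 0.22, 0.10],
              [0.55, 0.41, 0.25],
              [0.10, 0.08, 0.05],
              [0.33, 0.27, 0.15],
              [0.48, 0.36, 0.22],
              [0.20, 0.15, 0.09]])
overall_intelligibility = np.array([0.48, 0.66, 0.35, 0.85, 0.58, 0.42, 0.72])

model = LinearRegression().fit(X, overall_intelligibility)
print("R^2:", model.score(X, overall_intelligibility))
print("coefficients:", model.coef_)
```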

Development of Ambulatory Speech Audiometric System (휴대용 어음청력검사 시스템 구현)

  • Shin, Seung-Won;Kim, Kyeong-Seop;Lee, Sang-Min;Im, Won-Jin;Lee, Jeong-Whan;Kim, Dong-Jun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.58 no.3
    • /
    • pp.645-654
    • /
    • 2009
  • In this study, we present an efficient ambulatory speech audiometric system to detect hearing problems as early as possible, without a visit to an audiometric testing facility such as a hospital or clinic. To estimate a person's hearing threshold level for speech sounds in his or her local environment, a personal digital assistant (PDA) is used to generate the speech sounds, with an audiometric graphical user interface (GUI) implemented on the device. Furthermore, a supra-aural earphone is used to measure the subject's speech hearing threshold level, with the transducer's gain compensated by a speech-sound calibration system.
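
The abstract does not give the threshold-estimation logic, so the following is a hypothetical sketch of a simple staircase procedure such a portable speech audiometer might run; the GUI, calibrated playback, and earphone compensation are assumed to be in place.

```python
# Hypothetical staircase (down-10/up-5 dB) for estimating a speech threshold.
def estimate_srt(respond, start_db=40, floor_db=-10, ceiling_db=90):
    """respond(level_db) -> True if the listener repeats the word correctly."""
    level, reversals, last_correct = start_db, [], None
    while len(reversals) < 6:
        correct = respond(level)
        if last_correct is not None and correct != last_correct:
            reversals.append(level)                       # record direction change
        last_correct = correct
        level += -10 if correct else 5                    # easier when wrong, harder when right
        level = max(floor_db, min(ceiling_db, level))
    return sum(reversals) / len(reversals)                # rough threshold estimate (dB)

# Toy listener whose true threshold is 25 dB HL.
print(estimate_srt(lambda db: db >= 25))
```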

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal
    • /
    • v.28 no.6
    • /
    • pp.807-810
    • /
    • 2006
  • This letter presents a prediction model for sentence-final intonations for Korean conversational-style text-to-speech systems, in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify the tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model with a tree-structured classification algorithm. In order to show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality makes a higher contribution to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the feature of modality is introduced.
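
A minimal sketch of a tree-structured classifier of the kind described, using scikit-learn as an assumed stand-in: (modality, sentence type, speech act) features are mapped to a sentence-final tone type. The category labels and tone symbols are illustrative, not those of the corpus used in the letter.

```python
# Sketch: decision tree predicting sentence-final tone from categorical features.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

samples = [
    # (modality, sentence_type, speech_act) -> final tone (all labels illustrative)
    (["declarative-assertion", "decl", "inform"],  "L%"),
    (["interrogative-request", "int",  "request"], "H%"),
    (["interrogative-wh",      "int",  "ask"],     "LH%"),
    (["imperative-command",    "imp",  "order"],   "HL%"),
    (["declarative-promise",   "decl", "commit"],  "L%"),
]
X_raw = [feats for feats, tone in samples]
y = [tone for feats, tone in samples]

enc = OrdinalEncoder()
X = enc.fit_transform(X_raw)                     # encode categorical features as integers
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.predict(enc.transform([["interrogative-request", "int", "request"]])))
```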
