• Title/Summary/Keyword: speech corpora

Search results: 39

Weighted Finite State Transducer-Based Endpoint Detection Using Probabilistic Decision Logic

  • Chung, Hoon;Lee, Sung Joo;Lee, Yun Keun
    • ETRI Journal / v.36 no.5 / pp.714-720 / 2014
  • In this paper, we propose the use of data-driven probabilistic utterance-level decision logic to improve Weighted Finite State Transducer (WFST)-based endpoint detection. In general, endpoint detection is dealt with using two cascaded decision processes. The first process is frame-level speech/non-speech classification based on statistical hypothesis testing, and the second process is a heuristic-knowledge-based utterance-level speech boundary decision. To handle these two processes within a unified framework, we propose a WFST-based approach. However, a WFST-based approach has the same limitations as conventional approaches in that the utterance-level decision is based on heuristic knowledge and the decision parameters are tuned sequentially. Therefore, to obtain decision knowledge from a speech corpus and optimize the parameters at the same time, we propose the use of data-driven probabilistic utterance-level decision logic. The proposed method reduces the average detection failure rate by about 14% for various noisy-speech corpora collected for an endpoint detection evaluation.
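
The abstract above describes two cascaded decision processes: frame-level speech/non-speech hypothesis testing followed by an utterance-level boundary decision. Purely as an illustration of that cascaded structure, and not of the paper's WFST formulation, a minimal Python sketch might look as follows; the Gaussian energy models, the logistic utterance-level model, and every threshold are invented for the example.

```python
import math

# Minimal sketch of a cascaded endpoint detector. NOT the paper's WFST
# formulation: the Gaussian energy models, the logistic utterance-level
# model, and all thresholds below are illustrative assumptions.

def frame_llr(frame_energy, speech_mean=8.0, noise_mean=2.0, var=4.0):
    """First decision process: frame-level speech/non-speech log-likelihood ratio."""
    log_p_speech = -0.5 * (frame_energy - speech_mean) ** 2 / var
    log_p_noise = -0.5 * (frame_energy - noise_mean) ** 2 / var
    return log_p_speech - log_p_noise

def endpoint_posterior(nonspeech_run, w=0.9, b=-5.0):
    """Second decision process: probability that the utterance has ended, as a
    logistic function of the trailing non-speech run length. In the paper this
    logic is learned from a corpus; w and b here are placeholders."""
    return 1.0 / (1.0 + math.exp(-(w * nonspeech_run + b)))

def detect_endpoint(energies, llr_threshold=0.0, posterior_threshold=0.5):
    """Cascade the two processes over a sequence of frame energies."""
    nonspeech_run = 0
    for t, e in enumerate(energies):
        is_speech = frame_llr(e) > llr_threshold
        nonspeech_run = 0 if is_speech else nonspeech_run + 1
        if endpoint_posterior(nonspeech_run) > posterior_threshold:
            return t  # frame index at which the endpoint is declared
    return None

print(detect_endpoint([9, 8, 7, 8, 2, 1, 2, 1, 1, 2, 1, 1]))  # speech then silence
```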

Basic consideration for assessment of Korean TTS system (한국어 TTS 시스템의 객관적인 성능평가를 위한 기초검토)

  • Ko, Lag-Hwan;Kim, Young-Il;Kim, Bong-Wan;Lee, Yong-Ju
    • Proceedings of the KSPS conference / 2005.04a / pp.37-40 / 2005
  • Recently, owing to the rapid development of corpus-based speech synthesis, the performance of TTS systems, which convert text into speech, has improved, and such systems are being applied in various fields. However, a procedure for the objective assessment of system performance is not yet well established in Korea. Establishing such a procedure is essential both for developers evaluating the systems they are building and as a standard by which users can choose suitable systems. In this paper we report the results of basic research toward a systematic standard for the objective performance assessment of Korean TTS systems, with reference to related efforts in Korea and other countries.

Maximum mutual information estimation linear spectral transform based adaptation (Maximum mutual information estimation을 이용한 linear spectral transformation 기반의 adaptation)

  • Yoo, Bong-Soo;Kim, Dong-Hyun;Yook, Dong-Suk
    • Proceedings of the KSPS conference / 2005.04a / pp.53-56 / 2005
  • In this paper, we propose a transformation-based robust adaptation technique that uses maximum mutual information (MMI) estimation as the objective function and linear spectral transformation (LST) for adaptation. LST is an adaptation method that handles environmental noise in the linear spectral domain, so a small number of parameters can be used for fast adaptation. The proposed technique, called MMI-LST, is evaluated on the TIMIT and FFMTIMIT corpora and shown to be advantageous when only a small amount of adaptation speech is available.
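
As a loose illustration of the linear spectral transformation idea mentioned above, the sketch below applies a small affine transform to feature vectors; the near-identity matrix, bias, and feature dimensionality are assumptions, and the MMI estimation of the transform parameters is not reproduced.

```python
import numpy as np

# Loose illustration of a linear spectral transformation (LST): a small affine
# transform applied to every feature frame so that adaptation needs few
# parameters. The MMI estimation of A and b used in the paper is not shown;
# the dimensionality and values below are assumptions.

def apply_lst(features, A, b):
    """Apply y = A x + b to every row (frame) of `features`."""
    return features @ A.T + b

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))                 # e.g. 13-dim spectral features
A = np.eye(13) + 0.01 * rng.normal(size=(13, 13))   # near-identity transform
b = 0.1 * rng.normal(size=13)                       # small bias term
print(apply_lst(frames, A, b).shape)                # (100, 13)
```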

Reduction and Frequency Analyses of Vowels and Consonants in the Buckeye Speech Corpus

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences / v.4 no.3 / pp.75-83 / 2012
  • This study had three aims: first, to examine the degree of deviation between dictionary-prescribed symbols and the actual speech of American English speakers; second, to measure the frequency of vowel and consonant production by American English speakers; and third, to investigate gender differences in the segmental sounds of a speech corpus. The Buckeye Speech Corpus was recorded by forty American male and female subjects, one hour per subject. The vowels and consonants in both the phonemic and phonetic transcriptions were extracted from the original corpus files, and their frequencies were obtained using scripts written in the free software R. Results were as follows. First, the American English speakers produced a reduced number of vowels and consonants in daily conversation; the reduction rate from the dictionary transcriptions to the actual transcriptions was around 38.2%. Second, the speakers used more front high and back low vowels, while stops, fricatives, and nasals accounted for three-fourths of the consonants, indicating that the segmental inventory has a nonlinear frequency distribution in the speech corpus. Third, the two gender groups produced vowels and consonants similarly, even though there were a few noticeable differences in their speech. From these results we propose that English teachers consider pronunciation education that reflects actual speech sounds and that linguists find a way to establish unmarked segmentals from speech corpora.
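
The study obtained its frequency counts with R scripts over the Buckeye label files; purely as an illustration of that kind of tabulation, the Python sketch below compares toy phonemic and phonetic forms (both invented) and counts segment frequencies.

```python
from collections import Counter

# Rough Python analogue of the tabulation the study carried out in R: compare
# dictionary (phonemic) forms with actual (phonetic) forms, compute a segment
# reduction rate, and count segment frequencies. The two word entries below
# are invented toy data, not Buckeye transcriptions.

dictionary_forms = {"probably": ["p", "r", "aa", "b", "ah", "b", "l", "iy"],
                    "and":      ["ae", "n", "d"]}
actual_forms     = {"probably": ["p", "r", "aa", "l", "iy"],
                    "and":      ["n"]}

dict_total = sum(len(p) for p in dictionary_forms.values())
actual_total = sum(len(p) for p in actual_forms.values())
print(f"segment reduction rate: {1 - actual_total / dict_total:.1%}")

# Frequency table of segments actually produced
freq = Counter(seg for phones in actual_forms.values() for seg in phones)
print(freq.most_common())
```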

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language and equivalent text data in the target language. Most publicly available speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. We therefore created EnKoST-C, centered on TED Talks. In this process, we enhance the sentence alignment approach using subtitle time information and bilingual sentence embedding information. As a result, we built a 559-h English-Korean speech translation corpus. The proposed sentence alignment approach showed excellent performance, with an f-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
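
As a hedged sketch of the alignment idea described above (combining subtitle time information with bilingual sentence embeddings), the snippet below scores a candidate sentence pair with a time-overlap term and an embedding-similarity term; the stub embed() function and the equal weighting are assumptions, not the paper's configuration.

```python
import numpy as np

# Sketch of a sentence-alignment score combining subtitle time overlap with
# bilingual sentence-embedding similarity. embed() is a stub standing in for
# any multilingual sentence encoder, and the 0.5/0.5 weights are assumptions.

def time_overlap(span_a, span_b):
    """Overlap ratio of two (start, end) subtitle intervals in seconds."""
    inter = max(0.0, min(span_a[1], span_b[1]) - max(span_a[0], span_b[0]))
    union = max(span_a[1], span_b[1]) - min(span_a[0], span_b[0])
    return inter / union if union > 0 else 0.0

def embed(sentence):
    """Stub: replace with a real multilingual sentence encoder."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2 ** 32))
    return rng.normal(size=16)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def alignment_score(en, ko, w_time=0.5, w_emb=0.5):
    """en and ko are dicts with 'text' and 'span' (start, end) entries."""
    return (w_time * time_overlap(en["span"], ko["span"])
            + w_emb * cosine(embed(en["text"]), embed(ko["text"])))

en = {"text": "Thank you so much.", "span": (12.0, 13.5)}
ko = {"text": "정말 감사합니다.", "span": (12.1, 13.6)}
print(alignment_score(en, ko))
```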

COMPUTER AND INTERNET RESOURCES FOR PRONUNCIATION AND PHONETICS TEACHING

  • Makarova, Veronika
    • Proceedings of the KSPS conference / 2000.07a / pp.338-349 / 2000
  • Pronunciation teaching is once again coming into the foreground of ELT. Japan is, however, lagging far behind many countries in the development of pronunciation curricula and in the actual speech performance of Japanese learners of English. The reasons for this can be found in the prevalence of communicative methodologies unfavorable to pronunciation teaching, in the lack of trained professionals, and in the large numbers of students in Japanese foreign language classes. This paper offers a way to promote foreign language pronunciation teaching in Japan and other countries by employing computer and internet facilities. The paper outlines the major directions of using modern speech technologies in pronunciation classes, such as EVF (electronic visual feedback) training at the segmental and prosodic levels, and automated error detection, testing, grading, and fluency assessment. The author discusses the applicability of some specific software packages (CSLU, SUGIspeech, Multispeech, Wavesurfer, etc.) to the needs of pronunciation teaching. Finally, the author discusses the globalization of pronunciation education via internet resources such as computer corpora and web pages related to speech and pronunciation training.

Language Model Adaptation for Conversational Speech Recognition (대화체 연속음성 인식을 위한 언어모델 적응)

  • Park Young-Hee;Chung Minhwa
    • Proceedings of the KSPS conference / 2003.05a / pp.83-86 / 2003
  • This paper presents our style-based language model adaptation for Korean conversational speech recognition. Korean conversational speech exhibits various characteristics of content and style, such as filled pauses, word omission, and contraction, compared with written text corpora. For style-based language model adaptation, we report two approaches. Our approaches focus on improving the estimation of domain-dependent n-gram models by relevance-weighting out-of-domain text data, where style is represented by n-gram-based tf*idf similarity. In addition to relevance weighting, we use disfluencies as predictors of the neighboring words. The best result reduces the word error rate by 6.5% absolute, showing that n-gram-based relevance weighting reflects style differences well and that disfluencies are good predictors.
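
To illustrate the n-gram tf*idf relevance weighting mentioned above, the sketch below scores out-of-domain documents against an in-domain sample; the toy texts, the bigram order, and the smoothed idf are illustrative choices, and the paper's disfluency-based prediction step is not shown.

```python
import math
from collections import Counter

# Illustration of n-gram tf*idf relevance weighting: score how similar each
# out-of-domain document is to an in-domain conversational sample, then use the
# score as a weight for its n-gram counts. All texts are toy examples.

def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf_vector(doc, docs, n=2):
    vec = {}
    for gram, count in Counter(ngrams(doc, n)).items():
        df = sum(1 for d in docs if gram in set(ngrams(d, n)))
        vec[gram] = count * math.log((1 + len(docs)) / (1 + df))  # smoothed idf
    return vec

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

in_domain = "uh I mean we could uh meet tomorrow".split()
out_of_domain = ["we could meet tomorrow to discuss the report".split(),
                 "the quarterly report was finalized yesterday".split()]

query = tfidf_vector(in_domain, out_of_domain)
weights = [cosine(query, tfidf_vector(d, out_of_domain)) for d in out_of_domain]
print(weights)  # relevance weight per out-of-domain document
```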

Fillers in the Hong Kong Corpus of Spoken English (HKCSE)

  • Seto, Andy
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.13-22 / 2021
  • The present study employed an analytical framework characterised by a synthesis of quantitative and qualitative analyses with specially designed computer software, SpeechActConc, to examine speech acts in business communication. The naturally occurring data from the audio recordings and prosodic transcriptions of the business sub-corpora of the HKCSE (prosodic) are manually annotated with a speech act taxonomy to find the frequency of fillers, the co-occurring patterns of fillers with other speech acts, and the linguistic realisations of fillers. The discoursal function of fillers, to sustain the discourse or hold the floor, has diverse linguistic realisations, ranging from a sound (e.g. 'uhuh') or a word (e.g. 'well') to sounds (e.g. 'um er') and words, namely a phrase ('sort of') or a clause (e.g. 'you know'). Some are even combinations of sound(s) and word(s) (e.g. 'and um', 'yes er um', 'sort of erm'). Among the top five frequent linguistic realisations of fillers, 'er' and 'um' are the most common, found in all six genres with relatively higher percentages of occurrence. The remaining more frequent realisations consist of a clause ('you know'), a word ('yeah') and a sound ('erm'). These common forms are syntactically simpler than the less frequent realisations found in the genres. The co-occurring patterns of fillers and other speech acts are diverse; the more common speech acts co-occurring with fillers include informing and answering. The findings show that fillers are not only frequently used by speakers in spontaneous conversation but are also mostly represented as sounds or non-linguistic realisations.
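
Purely as an illustration of the two tabulations reported above (filler frequency and co-occurrence with other speech acts), the sketch below counts a handful of invented annotated turns; the real analysis used SpeechActConc over the HKCSE (prosodic) business sub-corpora.

```python
from collections import Counter

# Toy sketch: frequency of filler realisations and their co-occurrence with
# other speech acts. The annotated turns below are invented placeholders.

annotated_turns = [
    {"filler": "er", "co_act": "informing"},
    {"filler": "um", "co_act": "answering"},
    {"filler": "you know", "co_act": "informing"},
    {"filler": "er", "co_act": "answering"},
    {"filler": "erm", "co_act": "informing"},
]

print(Counter(t["filler"] for t in annotated_turns).most_common())
print(Counter((t["filler"], t["co_act"]) for t in annotated_turns).most_common())
```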

A Hybrid Sentence Alignment Method for Building a Korean-English Parallel Corpus (한영 병렬 코퍼스 구축을 위한 하이브리드 기반 문장 자동 정렬 방법)

  • Park, Jung-Yeul;Cha, Jeong-Won
    • MALSORI / v.68 / pp.95-114 / 2008
  • The recent growing popularity of statistical methods in machine translation requires much larger parallel corpora. Korean-English parallel corpora, however, are not yet sufficiently available, and little research on this subject is being conducted. In this paper we present a hybrid method of aligning sentences for Korean-English parallel corpora. We use bilingual news wire web pages, reading comprehension materials for English learners, computer-related technical documents, and help files of localized software to build a Korean-English parallel corpus. Our hybrid method combines sentence-length-based and word-correspondence-based methods. We present and evaluate the experimental results. Alignment results obtained using a full translation model are very encouraging, especially when the alignment results are applied to an SMT system: improvements of 0.66% in BLEU score and 9.94% in NIST score compared to the previous method.
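
As an illustrative sketch of the hybrid idea, the snippet below combines a Gale-Church-style length-ratio term with a word-correspondence term from a tiny bilingual lexicon; the lexicon, the length-ratio parameters, and the equal weights are assumptions rather than the paper's actual model.

```python
import math

# Sketch of a hybrid sentence-alignment score: a Gale-Church-style length term
# plus a word-correspondence term from a small bilingual lexicon. All parameters
# and the lexicon are illustrative assumptions.

LEXICON = {"학생": "student", "책": "book", "학교": "school"}

def length_score(ko_sent, en_sent, mean_ratio=1.1, std=0.4):
    """Penalise pairs whose character-length ratio deviates from an assumed mean."""
    z = (len(en_sent) / max(len(ko_sent), 1) - mean_ratio) / std
    return math.exp(-0.5 * z * z)

def correspondence_score(ko_sent, en_sent):
    """Fraction of lexicon-covered Korean words whose translation appears in English."""
    covered = [w for w in ko_sent.split() if w in LEXICON]
    hits = sum(1 for w in covered if LEXICON[w] in en_sent.lower())
    return hits / len(covered) if covered else 0.0

def hybrid_score(ko_sent, en_sent, w_len=0.5, w_corr=0.5):
    return w_len * length_score(ko_sent, en_sent) + w_corr * correspondence_score(ko_sent, en_sent)

print(hybrid_score("학생 이 책 을 읽는다", "The student reads a book"))
```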

Phonological processes of vowels from orthographic to pronounced words in the Buckeye Corpus by sex and age groups

  • Yang, Byunggon
    • Phonetics and Speech Sciences / v.10 no.2 / pp.25-31 / 2018
  • This paper investigated the phonological processes of monophthongs and diphthongs in the pronounced words of the Buckeye Corpus and compared the frequency distribution of these processes across sex and age groups to provide linguists and phoneticians with a clearer understanding of spoken English. Both orthographic and pronounced words were extracted from the transcribed label scripts of the Buckeye Corpus using R. Next, the phonological processes of monophthongs and diphthongs in the orthographic and pronounced labels were tabulated with R scripts, and a frequency distribution by vowel process type, as well as by sex and age group, was created. The results revealed that 95% of the orthographic words contained the same number of syllables, whereas 5% had different numbers of vowels, showing that speakers tend to preserve vowels in spontaneous speech. In addition, deletion processes were preferred in natural speech, and most vowel deletions occurred in unstressed syllables. Chi-square tests were performed to assess dependence in the distribution of phonological process types between the male and female groups and between the young and old groups, and the results showed a very strong correlation between the groups' distributions. This finding indicates that vowel processes occurred in approximately the same pattern in natural, spontaneous speech regardless of sex and age. Based on these results, the author concludes that an analysis of phonological processes in spontaneous speech corpora can greatly enhance practical understanding of spoken English.
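
As a minimal sketch of the final statistical step described above, the snippet below runs a chi-square test of independence between speaker group and vowel process type; the contingency counts are invented placeholders, not values from the Buckeye Corpus.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Chi-square test of whether the distribution of vowel process types depends on
# speaker group. The counts below are invented placeholders.

#                   preserved  deleted
counts = np.array([[4800,      240],    # male speakers
                   [5100,      270]])   # female speakers

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```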