• Title/Summary/Keyword: speech corpora

Search results: 39

The phoneme segmentation with MLP-based postprocessor on speech synthesis corpora (합성용 운율 DB 구축에서의 MLP 기반 후처리가 포함된 음소분할)

  • Park Eun-Young
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.344-349
    • /
    • 1998
  • Building a large database segmented and labeled at the phoneme level is essential for phonetic, linguistic, and scientific speech research. As part of work on constructing a speech-synthesis DB and automatically generating synthesis units, this paper proposes a phoneme segmentation method with an MLP-based postprocessor that compensates for the boundary errors of an automatic phoneme segmenter. Thanks to recent improvements in automatic phoneme segmentation, prosodic DBs for speech synthesis are now built from automatic segmentation results, but without correcting boundary errors these results are still difficult to use directly as synthesis units, so improved automatic segmentation techniques are required. This paper therefore trains a multilayer perceptron on the acoustic features inherent in speech and proposes a method that exploits the statistical characteristics of the segmenter's errors to correct automatic segmentation boundaries conveniently. On a synthesis database of isolated words, introducing the proposed postprocessor improved the segmentation rate by about 25% over the existing automatic segmentation system, and the absolute error improved by about 39%.


Current States and Future Plans for Speech Corpora at SITEC (음성정보기술산업지원센터의 음성 코퍼스 구축 현황 및 계획)

  • Kim Bong-Wan;Lee Yong-Ju
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.49-52
    • /
    • 2002
  • With recent advances in speech information technology, which uses speech as a means of human-computer interaction, research is under way to advance large-vocabulary continuous speech recognition and unlimited-vocabulary speech synthesis. In speech synthesis, good quality has recently been achieved by selecting speech segments of arbitrary length from a large speech database and concatenating them, so demand for and interest in speech corpora for such research is growing. This paper reports the current status of, and future plans for, the speech corpora under construction at the Speech Information Technology & Industry Promotion Center (SITEC), including corpora for recognition and synthesis research recorded in a soundproof room, a children's speech corpus, a dictation corpus, and a corpus of in-car noise and speech.


A Corpus-based Analysis of EFL Learners' Use of Hedges in Cross-cultural Communication

  • Min, Su-Jung
    • English Language & Literature Teaching
    • /
    • v.16 no.4
    • /
    • pp.91-106
    • /
    • 2010
  • This study examines the use of hedges in cross-cultural communication between EFL learners in an e-learning environment. The study analyzes the use of hedges in a corpus of an interactive web bulletin board system through which college students of English at Japanese and Korean universities discussed local and global issues with each other, and compares it with a native English speakers' corpus. The results show that the EFL learners tend to use relatively fewer hedges than the native speakers in terms of total token frequency. They further reveal that the learners' overuse of a single versatile, high-frequency hedging item, I think, results in relative underuse of other hedging devices. This indicates that, owing to their small repertoire of hedges, EFL learners' overuse of a limited number of hedging items may make their speech or writing less competent. Based on the results and interviews with the learners, the study also argues that hedging should be understood in its social contexts, not merely as a lack of conviction or a mark of low proficiency. Suggestions are made for using computer corpora to understand EFL learners' language difficulties and to help them develop communicative and pragmatic competence.

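The token-frequency comparison described in the abstract above can be sketched in a few lines of Python. The hedge list and the two example "corpora" here are invented for illustration; the study's actual inventory and data are not given in the abstract:

```python
import re
from collections import Counter

# Illustrative hedge inventory; the study's actual list is larger.
HEDGES = ["i think", "maybe", "perhaps", "probably", "kind of", "sort of"]

def hedge_counts(text):
    """Count hedge occurrences, case-insensitively and phrase-aware."""
    cleaned = re.sub(r"[^\w\s]", " ", text.lower())
    padded = " " + re.sub(r"\s+", " ", cleaned).strip() + " "
    return Counter({h: padded.count(f" {h} ") for h in HEDGES})

def per_thousand_tokens(text):
    """Hedge tokens per 1,000 running words, for cross-corpus comparison."""
    n = len(text.split())
    return 1000.0 * sum(hedge_counts(text).values()) / n if n else 0.0

learner = "I think it is good. I think we should go. Maybe yes."
native = "Perhaps we could consider it. It is probably fine, sort of."
print(hedge_counts(learner).most_common(1), per_thousand_tokens(learner))
print(hedge_counts(native).most_common(1), per_thousand_tokens(native))
```

Normalizing to a rate per 1,000 tokens is what makes corpora of different sizes comparable, which is how the overall learner/native frequency difference would be stated.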

Building an Annotated English-Vietnamese Parallel Corpus for Training Vietnamese-related NLPs

  • Dien Dinh;Kiem Hoang
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.103-109
    • /
    • 2004
  • In natural language processing (NLP), the greatest difficulty computers face is the inherent ambiguity of natural language. To resolve it, systems formerly relied on human-devised rules. Building such a complete rule set is a time-consuming and labor-intensive task, yet it still does not cover all cases, and as a system grows the rule set becomes very difficult to maintain. Recently, therefore, many NLP tasks have moved from rule-based approaches to corpus-based approaches using large annotated corpora. Corpus-based NLP for widely studied languages such as English and French has achieved satisfactory results. In contrast, corpus-based NLP for Vietnamese has been at a standstill owing to the absence of annotated training data; furthermore, hand-annotation of even reasonably well-determined features such as part-of-speech (POS) tags has proved labor-intensive and costly. In this paper, we present the construction of an annotated English-Vietnamese parallel aligned corpus, named EVC, for training Vietnamese-related NLP tasks such as word segmentation, POS tagging, word-order transfer, word-sense disambiguation, and English-to-Vietnamese machine translation.


A study on the voiceless plosives from the English and Korean spontaneous speech corpus (영어와 한국어 자연발화 음성 코퍼스에서의 무성 파열음 연구)

  • Yoon, Kyuchul
    • Phonetics and Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.45-53
    • /
    • 2019
  • The purpose of this work was to examine the factors affecting the identities of the voiceless plosives, i.e. English [p, t, k] and Korean [ph, th, kh], from the spontaneous speech corpora. The factors were automatically extracted by a Praat script and the percent correctness of the discriminant analyses was incrementally assessed by increasing the number of factors used in predicting the identities of the plosives. The factors included the spectral moments and tilts of the plosive release bursts, the post-burst aspirations and the vowel onsets, the durations such as the closure durations and the voice onset times (VOTs), the locations within words and utterances and the identities of the following vowels. The results showed that as the number of factors increased up to five, so did the percent correctness of the analyses, resulting in 74.6% for English and 66.4% for Korean. However, the optimal number of factors for the maximum percent correctness was four, i.e. the spectral moments and tilts of the release bursts and the following vowels, the closure durations and the VOTs. This suggests that the identities of the voiceless plosives are mostly determined by their internal and vowel onset cues.
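The incremental assessment described above, where cues are added one at a time and classification accuracy is re-measured, can be illustrated with a toy sketch. This uses a leave-one-out nearest-centroid classifier as a simple stand-in for the paper's discriminant analysis; the cue values, their scales, and the labels are all invented for illustration, not the study's Praat-extracted data:

```python
import statistics

def loo_accuracy(samples, labels, feature_idx):
    """Leave-one-out accuracy of a nearest-centroid classifier restricted
    to the given feature indices (a stand-in for discriminant analysis)."""
    correct = 0
    for i, (x, y) in enumerate(zip(samples, labels)):
        # Build per-class centroids from all samples except the held-out one.
        groups = {}
        for j, (xj, yj) in enumerate(zip(samples, labels)):
            if j != i:
                groups.setdefault(yj, []).append(xj)
        centroids = {
            lab: [statistics.fmean(v[k] for v in vecs) for k in feature_idx]
            for lab, vecs in groups.items()
        }
        # Predict the class whose centroid is nearest in the chosen features.
        pred = min(centroids, key=lambda lab: sum(
            (x[k] - c) ** 2 for k, c in zip(feature_idx, centroids[lab])))
        correct += (pred == y)
    return correct / len(samples)

# Toy, roughly scale-matched cues per token:
# [VOT (ms), burst spectral measure (arbitrary units), closure duration (ms)]
samples = [[60, 35, 70], [65, 34, 75],   # /t/-like
           [62, 18, 90], [64, 17, 95],   # /p/-like
           [90, 26, 60], [92, 25, 62]]   # /k/-like
labels = ["t", "t", "p", "p", "k", "k"]

for n in (1, 2, 3):  # add cues incrementally, as the paper does with factors
    print(n, round(loo_accuracy(samples, labels, list(range(n))), 3))
```

On this toy data, accuracy rises as cues are added and then plateaus, mirroring the paper's finding that accuracy improves with more factors up to an optimum beyond which extra factors add nothing.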

Implementation of Iconic Language for the Language Support System of the Language Disorders (언어 장애인의 언어보조 시스템을 위한 아이콘 언어의 구현)

  • Choo Kyo-Nam;Woo Yo-Seob;Min Hong-Ki
    • The KIPS Transactions:PartB
    • /
    • v.13B no.4 s.107
    • /
    • pp.479-488
    • /
    • 2006
  • The iconic language interface is designed to provide a more convenient communication environment for the target system than a keyboard-based interface. For this work, vocabulary tendencies and features are analyzed in conversation corpora built from highly used domains, and the semantic and vocabulary system of the iconic language is constructed by applying natural language processing techniques such as morphological, syntactic, and semantic analysis. The parts of speech and grammatical rules of the iconic language are defined so that icons correspond to Korean vocabulary and meanings and communication can proceed through icon sequences. To resolve the linguistic ambiguity that can arise in the iconic language and to support effective semantic processing, situation-oriented semantic data for the iconic language are constructed from a general-purpose Korean semantic dictionary and a subcategorization dictionary. Based on these, Korean language generation from the iconic interface in the semantic domain is proposed.

Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • Kwon Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.274-284
    • /
    • 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and degrade language models for Korean large-vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain, with spaces generated on self-transitions and non-space characters on the other transitions. The word segmentation of the sentence is then obtained by finding the maximum-likelihood path using syllable n-gram scores. In experiments, the algorithm achieved 91.58% word accuracy and 96.69% syllable accuracy when segmenting 254 newspaper-column sentences with all spaces removed. It improved word accuracy from 91.00% to 96.27% when correcting word segmentation at line breaks, and yielded 96.22% accuracy for compound-noun decomposition.
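The space/non-space decision described in the abstract above can be sketched for the bigram case, where each gap between syllables can be scored independently; the bigram table below uses invented probabilities, not a trained model:

```python
import math

# Toy syllable-bigram log-probabilities; "<sp>" is the space symbol.
# (Illustrative numbers -- a real model is trained on a large text corpus.)
LOGP = {
    ("나", "는"): math.log(0.6),
    ("는", "<sp>"): math.log(0.7), ("<sp>", "학"): math.log(0.5),
    ("학", "교"): math.log(0.8), ("교", "에"): math.log(0.6),
    ("에", "<sp>"): math.log(0.4), ("<sp>", "간"): math.log(0.5),
    ("간", "다"): math.log(0.7),
}
DEFAULT = math.log(1e-4)  # back-off score for unseen bigrams

def lp(a, b):
    return LOGP.get((a, b), DEFAULT)

def segment(syllables):
    """Insert a space into each gap where the spaced reading scores higher
    under the bigram model."""
    out = [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        no_space = lp(a, b)
        with_space = lp(a, "<sp>") + lp("<sp>", b)
        out.append(" " + b if with_space > no_space else b)
    return "".join(out)

print(segment(list("나는학교에간다")))  # → 나는 학교에 간다
```

With a bigram model the per-gap decision is already optimal; with longer-span n-gram scores the decisions interact, which is why the paper searches for the maximum-likelihood path (a Viterbi-style search) instead.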

Pivot Discrimination Approach for Paraphrase Extraction from Bilingual Corpus (이중 언어 기반 패러프레이즈 추출을 위한 피봇 차별화 방법)

  • Park, Esther;Lee, Hyoung-Gyu;Kim, Min-Jeong;Rim, Hae-Chang
    • Korean Journal of Cognitive Science
    • /
    • v.22 no.1
    • /
    • pp.57-78
    • /
    • 2011
  • Paraphrasing is the act of expressing a text in other words without altering its meaning. Paraphrases can be used in many fields of natural language processing; in particular, they can be incorporated into machine translation to improve its coverage and quality. Recent approaches to paraphrase extraction utilize bilingual parallel corpora consisting of aligned sentence pairs. In these approaches, paraphrases are identified from the word alignment result by pivot phrases, i.e., phrases in one language to which two or more phrases in the other language are connected. However, word alignment is itself a very difficult task and can produce many errors, which in turn lead to the selection of incorrect pivot phrases. In this study, we propose a paraphrase extraction method that discriminates good pivot phrases from bad ones: each pivot phrase is weighted according to its reliability, which is scored from lexical and part-of-speech information. Experimental results show that the proposed method achieves higher precision and recall in paraphrase extraction than the baseline, and that the extracted paraphrases can increase the coverage of Korean-English machine translation.

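The pivoting-and-weighting idea in the abstract above can be sketched as follows. The phrase pairs, alignment probabilities, and reliability weights are all hypothetical; the paper derives the weights from lexical and part-of-speech cues, whereas here they are simply assumed:

```python
from collections import defaultdict

# Toy aligned phrase pairs: (English phrase, pivot phrase, alignment prob).
pairs = [
    ("in charge of", "담당하는", 0.6),
    ("responsible for", "담당하는", 0.5),
    ("in charge of", "책임지는", 0.3),
    ("responsible for", "책임지는", 0.4),
    ("burden", "부담", 0.9),
]

# Assumed pivot reliability weights (the paper scores these from
# lexical and part-of-speech information).
pivot_weight = {"담당하는": 1.0, "책임지는": 0.9, "부담": 0.2}

def paraphrase_scores(pairs, weights):
    """Pivot-based paraphrase scoring: two phrases aligned to the same
    pivot are paraphrase candidates, weighted by pivot reliability."""
    by_pivot = defaultdict(list)
    for e, f, p in pairs:
        by_pivot[f].append((e, p))
    scores = defaultdict(float)
    for f, eps in by_pivot.items():
        w = weights.get(f, 0.0)
        for e1, p1 in eps:
            for e2, p2 in eps:
                if e1 != e2:
                    scores[(e1, e2)] += w * p1 * p2
    return dict(scores)

scores = paraphrase_scores(pairs, pivot_weight)
print(scores[("in charge of", "responsible for")])  # 0.6*0.5*1.0 + 0.3*0.4*0.9
```

Down-weighting unreliable pivots directly addresses the problem the abstract raises: alignment errors produce bad pivots, and an unweighted sum would let them inject spurious paraphrase pairs.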

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim Jae-Hoon;Park Eun-Jin
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.63-70
    • /
    • 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in natural language processing and computational linguistics. Creating such a corpus, however, is expensive, labor-intensive, and time-consuming. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. Ideally the corpus would be created fully automatically, without annotator intervention, but in practice this is impossible. Like most other annotation tools, the proposed tool is therefore semi-automatic: it is designed for editing the errors produced by basic analyzers such as a part-of-speech tagger and a (partial) parser, for avoiding repetitive work while editing those errors, and for easy, user-friendly operation. Using the proposed tool, 10,000 Korean sentences, each containing over 20 words, were annotated with dependency structures by eight annotators working four hours a day for two months. We are confident that this yields accurate and consistent annotations as well as reduced labor and time.