• Title/Summary/Keyword: spoken language corpus


Development of Korean dataset for joint intent classification and slot filling (발화 의도 예측 및 슬롯 채우기 복합 처리를 위한 한국어 데이터셋 개발)

  • Han, Seunggyu;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.57-63 / 2021
  • Spoken language understanding, which aims to understand utterances as naturally as a human would, has mostly been studied for English. In this paper, we construct a Korean dataset for spoken language understanding based on a conversational corpus between a reservation system and its users. The conversation domain is limited to restaurant reservation. The dataset contains 6,857 sentences annotated with 7 types of slot tags and 5 types of intent tags. When a model proposed in English-based research is trained on our dataset, intent classification accuracy decreases only slightly, while the slot filling F1 score decreases significantly.
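
The abstract above describes a dataset for joint intent classification and slot filling. As a minimal sketch of how such examples are commonly represented (per-token BIO slot tags plus one utterance-level intent label), the snippet below uses invented field names, tags, and an invented example sentence; it is not drawn from the paper's data.

```python
from dataclasses import dataclass

@dataclass
class Example:
    tokens: list[str]        # whitespace-tokenized utterance
    slot_tags: list[str]     # one BIO slot tag per token
    intent: str              # single utterance-level intent label

# Hypothetical restaurant-reservation example (not from the paper's data).
ex = Example(
    tokens=["내일", "저녁", "7시에", "4명", "예약해", "주세요"],
    slot_tags=["B-date", "B-time", "I-time", "B-party_size", "O", "O"],
    intent="make_reservation",
)

# A joint model predicts the intent and the slot tag sequence together;
# intent accuracy and slot F1 are then evaluated separately.
assert len(ex.tokens) == len(ex.slot_tags)
```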

Teaching Grammar for Spoken Korean to English-speaking Learners: Reported Speech Marker '-dae'. (영어권 학습자를 위한 한국어 구어 문법 교육 - 보고 표지 '-대'를 중심으로 -)

  • Kim, Young A;Cho, In Jung
    • Journal of Korean language education / v.23 no.1 / pp.1-23 / 2012
  • The development of corpora in recent years has led to increased research on spoken Korean. Nevertheless, these research outcomes have yet to be meaningfully and adequately reflected in Korean language textbooks. The reported speech marker '-dae' is one of the areas that needs more attention. This study investigates whether textbooks explain '-dae' clearly enough to prevent confusion and misuse by English-speaking learners. Based on a contrastive analysis of Korean and English, this study argues three points. Firstly, '-dae' should be introduced to Korean learners as an independent sentence ender rather than as a contracted form of '-dago hae'. Secondly, it is necessary to teach English-speaking learners that '-dae' is not equivalent to English reported speech; it functions more or less as a third-person marker in Korean. Learners should be informed that '-dae' marks a statement as hearsay without requiring the source of information to be specified. This is a very distinctive difference between Korean and English and should be emphasized in class when '-dae' is taught. Thirdly, '-dae' should be introduced before indirect speech constructions, because it is mainly used in simple statements and its frequency in spoken Korean is very high.

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language paired with equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. Thus, we created EnKoST-C centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle time information and bilingual sentence embedding information. As a result, we built a 559-hour English-Korean speech translation corpus. The proposed sentence alignment approach achieved an excellent F-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
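
The alignment step described above combines subtitle timing with bilingual sentence embeddings. A rough sketch of how such a combined score might be computed is shown below; the scoring formula, the weights, and the embed() function are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def time_overlap(a_start, a_end, b_start, b_end):
    """Fraction of the shorter subtitle segment covered by the temporal overlap."""
    overlap = max(0.0, min(a_end, b_end) - max(a_start, b_start))
    shorter = max(1e-9, min(a_end - a_start, b_end - b_start))
    return overlap / shorter

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def alignment_score(en_seg, ko_seg, embed, w_time=0.5, w_sem=0.5):
    """Score an English/Korean subtitle pair by timing overlap plus embedding similarity.

    en_seg / ko_seg: dicts with 'text', 'start', 'end' (seconds).
    embed: any multilingual sentence-embedding function (assumed to exist here).
    """
    t = time_overlap(en_seg["start"], en_seg["end"], ko_seg["start"], ko_seg["end"])
    s = cosine(embed(en_seg["text"]), embed(ko_seg["text"]))
    return w_time * t + w_sem * s

# Pairs whose score exceeds a chosen threshold would be kept as parallel sentences.
```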

Addressing Low-Resource Problems in Statistical Machine Translation of Manual Signals in Sign Language (말뭉치 자원 희소성에 따른 통계적 수지 신호 번역 문제의 해결)

  • Park, Hancheol;Kim, Jung-Ho;Park, Jong C.
    • Journal of KIISE / v.44 no.2 / pp.163-170 / 2017
  • Despite the rise of studies on spoken-to-sign language translation, the low-resource problems of sign language corpora have rarely been addressed. As a first step towards translating from spoken to sign language, we addressed the problems arising from resource scarcity when translating spoken language into manual signals using statistical machine translation techniques. More specifically, we proposed three preprocessing methods: 1) paraphrase generation, which increases the size of the corpora, 2) lemmatization, which increases the frequency of each word in the corpora and the translatability of new input words in spoken language, and 3) elimination of function words that are not glossed into manual signals, which better matches the corresponding constituents of the bilingual sentence pairs. In our experiments, we used different types of English-American Sign Language parallel corpora. The experimental results showed that each method, and the combination of the methods, improved the quality of manual signal translation, regardless of the type of corpus.
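
The second and third preprocessing methods above (lemmatization, and removal of function words that receive no manual-sign gloss) could look roughly like the following; NLTK is used here only as a stand-in toolkit, and the function-word list is an ad-hoc assumption rather than the paper's actual setup.

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)   # lemmatizer lexicon
nltk.download("omw-1.4", quiet=True)

lemmatizer = WordNetLemmatizer()

# Ad-hoc list of English function words that typically receive no manual-sign gloss.
FUNCTION_WORDS = {"a", "an", "the", "is", "are", "was", "were", "of", "to", "be"}

def preprocess(sentence: str) -> list[str]:
    """Lemmatize tokens and drop unglossed function words before SMT training."""
    tokens = sentence.lower().split()
    lemmas = [lemmatizer.lemmatize(tok) for tok in tokens]
    return [tok for tok in lemmas if tok not in FUNCTION_WORDS]

print(preprocess("The children are playing in the parks"))
# ['child', 'playing', 'in', 'park']  (verbs need pos='v' for full lemmatization)
```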

Robust Syntactic Annotation of Corpora and Memory-Based Parsing

  • Hinrichs, Erhard W.
    • Proceedings of the Korean Society for Language and Information Conference / 2002.02a / pp.1-1 / 2002
  • This talk provides an overview of current work in my research group on the syntactic annotation of the Tübingen corpus of spoken German and of the German Reference Corpus (Deutsches Referenzkorpus: DEREKO) of written texts. Morpho-syntactic and syntactic annotation, as well as annotation of function-argument structure, for these corpora is performed automatically by a hybrid architecture that combines robust symbolic parsing using finite-state methods ("chunk parsing" in the sense of Abney) with memory-based parsing (in the sense of Daelemans). The resulting robust annotations can be used by theoretical linguists, who are interested in large-scale empirical data, and by computational linguists, who need training material for a wide range of language technology applications. To aid retrieval of annotated trees from the treebank, a query tool, VIQTORYA, with a graphical user interface and a logic-based query language has been developed. VIQTORYA allows users to query the treebanks for linguistic structures at the word level, at the level of individual phrases, and at the clausal level.
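
As a small, self-contained illustration of finite-state chunk parsing in the sense of Abney, which the abstract names as one half of the hybrid architecture, the sketch below uses NLTK's RegexpParser with a toy grammar; it is unrelated to the actual Tübingen treebank pipeline.

```python
import nltk

# Toy cascaded chunk grammar (illustrative only).
grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}   # determiner + adjectives + nouns
  PP: {<IN><NP>}            # preposition + noun phrase
"""
chunker = nltk.RegexpParser(grammar)

# Chunk parsing operates on part-of-speech tagged tokens.
tagged = [("the", "DT"), ("annotated", "JJ"), ("corpus", "NN"),
          ("of", "IN"), ("spoken", "JJ"), ("German", "NNP")]
print(chunker.parse(tagged))
# (S (NP the/DT annotated/JJ corpus/NN) (PP of/IN (NP spoken/JJ German/NNP)))
```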


Modal Auxiliary Verbs in Japanese EFL Learners' Conversation: A Corpus-based Study

  • Nakayama, Shusaku
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.23-34 / 2021
  • This research examines Japanese non-native speakers' (JNNSs') use of modal auxiliary verbs from two perspectives: frequency of use and preferences for modalities. Additionally, error analysis is carried out to identify errors in modal use common among JNNSs. Their modal use is compared to that of native English speakers within a spoken dialogue corpus that is part of the International Corpus Network of Asian Learners' English. The findings show, at a statistically significant level, that compared to native speakers, JNNSs underuse past forms of modals and infrequently convey epistemic modality, indicating that JNNSs may fail to express their opinions or thoughts indirectly when needed, or to convey politeness appropriately. Error analysis identifies three common types of errors: (1) the use of incorrect tenses in modal verb phrases, (2) the use of inflected verb forms after modals, and (3) the omission of main verbs after modals. The first type of error arises largely because JNNSs have not mastered how to express past meanings of modals. The second and third types appear to be due to first-language transfer into second-language acquisition and to JNNSs' overgeneralization of subject-verb agreement rules to modals, respectively.
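
Comparing how often a modal occurs in a learner corpus versus a native-speaker corpus, as the study above does, is typically done with a corpus-comparison statistic. The sketch below uses Dunning's log-likelihood with invented counts, purely to illustrate the kind of test involved; the figures are not from the paper.

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Dunning log-likelihood for comparing one word's frequency in two corpora."""
    e_a = total_a * (freq_a + freq_b) / (total_a + total_b)  # expected count, corpus A
    e_b = total_b * (freq_a + freq_b) / (total_a + total_b)  # expected count, corpus B
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / e_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / e_b)
    return 2 * ll

# Hypothetical counts: 'might' used 40 times per 100k learner tokens
# vs. 120 times per 100k native-speaker tokens.
print(round(log_likelihood(40, 100_000, 120, 100_000), 2))
# ≈ 41.86, well above the 3.84 critical value for p < .05 (1 df)
```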

An Intonation Study of Predicate Endings in Current Korean - Focusing on the final endings 「-a/e, -tʃijo」 and 「-p/simnida」 - (현대 서울말 평서문에 나타나는 억양 연구 - 어말어미 "-아/어, -지요" 와 "-ㅂ/습니다" 를 중심으로 -)

  • Yu, Ki-Won
    • Proceedings of the KSPS conference / 2005.04a / pp.3-7 / 2005
  • This research aims to find prototypes and characteristics of the intonation of the final endings 「-a/e, -tʃijo」 and 「-p/simnida」 in modern Korean declarative sentences, by constructing a spoken corpus based on current radio broadcasts. The results of the study are as follows: (1) A balanced spoken corpus and a standard for determining rhythm boundaries are needed for an intonation model for speech synthesis. (2) Korean intonation units have a split word tone that includes the nuclear tone, and the pre-nuclear tone makes the nuclear tone more detailed. (3) Separate male and female intonation models were built through t-tests in SPSS. (4) The standard intonation model is divided into an '-ajo' type and a '-nida' type.
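
The abstract mentions building separate male and female intonation models via t-tests in SPSS. A comparable check in Python might look like the sketch below; the F0 values are invented for illustration only.

```python
from scipy import stats

# Hypothetical mean F0 (Hz) values at the nuclear tone for two speaker groups;
# the actual study ran its t-tests in SPSS on a radio-broadcast corpus.
male_f0   = [118, 124, 110, 132, 121, 127, 115, 129]
female_f0 = [205, 198, 221, 214, 230, 209, 217, 224]

t_stat, p_value = stats.ttest_ind(male_f0, female_f0, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p supports separate models
```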


Formulaic Language Development in Asian Learners of English: A Comparative Study of Phrase-frames in Written and Oral Production

  • Yoon Namkung;Ute Romer
    • Asia Pacific Journal of Corpus Research / v.4 no.2 / pp.1-39 / 2023
  • Recent research in usage-based Second Language Acquisition has provided new insights into second language (L2) learners' development of formulaic language (Wulff, 2019). The current study examines the use of phrase-frames, which are recurring sequences of words including one or more variable slots (e.g., it is * that), in written and oral production data from Asian learners of English across four proficiency levels (beginner, low-intermediate, high-intermediate, advanced) and from native English speakers. The variability, predictability, and discourse functions of the most frequent 4-word phrase-frames from the written essay and spoken dialogue sub-corpora of the International Corpus Network of Asian Learners of English (ICNALE) were analyzed and then compared across groups and modes. The results revealed that while learners' phrase-frames in writing became more variable and unpredictable as proficiency increased, no clear developmental patterns were found in speaking, although all groups used more fixed and predictable phrase-frames than the reference group. Further, no developmental trajectories in the functions of the most frequent phrase-frames were found in either mode. Additionally, lower-level learners and the reference group used more variable phrase-frames in speaking, whereas advanced-level learners showed more variability in writing. This study contributes to a better understanding of the development of L2 phraseological competence.
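
A phrase-frame, as used above, is a recurring word sequence with one or more open slots (e.g., 'it is * that'). A rough sketch of extracting 4-word frames with a single wildcard slot is shown below; the tokenization and the toy input are illustrative assumptions, not the ICNALE procedure.

```python
from collections import Counter

def phrase_frames(tokens, n=4):
    """Count n-grams with exactly one position replaced by a '*' slot."""
    frames = Counter()
    for i in range(len(tokens) - n + 1):
        gram = tokens[i:i + n]
        for slot in range(n):
            frame = tuple(gram[:slot] + ["*"] + gram[slot + 1:])
            frames[frame] += 1
    return frames

text = "it is clear that it is likely that it is possible that"
counts = phrase_frames(text.split())
print(counts[("it", "is", "*", "that")])   # 3
```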

A Study in Design and Construction of Structured Documents for Dialogue Corpus (대화형 코퍼스의 설계 및 구조적 문서화에 관한 연구)

  • Kang Chang-Qui;Nam Myung-Woo;Yang Ok-Yul
    • The Journal of the Korea Contents Association / v.4 no.4 / pp.1-10 / 2004
  • Dialogue speech corpora that contain sufficient dialogue speech features are needed for performance assessment of a spoken language dialogue system, and the labeling information of such corpora plays an important role in improving the recognition rate of acoustic and language models. In this paper, we examine methods by which the labeling information of dialogue speech corpora can be structured. More specifically, we examine how to represent features of dialogue speech in XML-based structured documents and how to design a repository system for that information.
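
As a rough illustration of the kind of XML-based labeling the abstract describes, the sketch below builds one minimally annotated utterance with Python's standard library; the element and attribute names are invented and do not reflect the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure for one labeled utterance in a dialogue speech corpus.
utt = ET.Element("utterance", id="u001", speaker="caller", start="3.21", end="5.87")
ET.SubElement(utt, "transcript").text = "두 명 자리 예약하고 싶어요"
ET.SubElement(utt, "dialogue_act", type="request")
ET.SubElement(utt, "noise", kind="breath", start="3.21", end="3.40")

print(ET.tostring(utt, encoding="unicode"))
```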


Vowel Context Effect on the Perception of Stop Consonants in Malayalam and Its Role in Determining Syllable Frequency

  • Mohan, Dhanya;Maruthy, Sandeep
    • Journal of Audiology & Otology / v.25 no.3 / pp.124-130 / 2021
  • Background and Objectives: The study investigated vowel context effects on the perception of stop consonants in Malayalam. It also probed the role of vowel context effects in determining the frequency of occurrence of various consonant-vowel (CV) syllables in Malayalam. Subjects and Methods: The study used a cross-sectional, pre-experimental, post-test-only research design on 30 individuals with normal hearing who were native speakers of Malayalam. The stimuli included three stop consonants, each spoken in three different vowel contexts. The resulting nine syllables were presented in their original form and in five gating conditions, and the participants' consonant recognition in the different vowel contexts was assessed. The frequency of occurrence of the nine target syllables in a spoken corpus of Malayalam was also systematically derived. Results: Consonant recognition scores were better in the /u/ vowel context than in the /i/ and /a/ contexts. The frequency of occurrence of the target syllables derived from the spoken corpus of Malayalam showed that the three stop consonants occurred most frequently with the vowel /a/, compared with /u/ and /i/. Conclusions: The findings show a definite vowel context effect on the perception of Malayalam stop consonants, and the effect observed differs from that reported for other languages. Stop consonants are perceived better in the context of /u/ than in the /a/ and /i/ contexts. Furthermore, vowel context effects do not appear to determine the frequency of occurrence of different CV syllables in Malayalam.
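
Deriving the frequency of occurrence of target CV syllables from a spoken corpus, as described above, amounts to counting syllable tokens. The sketch below assumes a pre-syllabified corpus and romanized placeholder syllables rather than actual Malayalam data.

```python
from collections import Counter

# Assumed: each corpus line is already segmented into syllables, space-separated.
corpus_lines = [
    "ka ti pa ka",
    "ku ti pa ka",
    "ki pu ta ka",
]
targets = {"ka", "ki", "ku", "ta", "ti", "tu", "pa", "pi", "pu"}

counts = Counter(syl for line in corpus_lines for syl in line.split() if syl in targets)
total = sum(counts.values())
for syllable, n in counts.most_common():
    print(f"{syllable}: {n} ({100 * n / total:.1f}%)")
```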