• Title/Summary/Keyword: Corpus construction


-eullanjira Construction of the Southwestern Dialect in Korea (서남방언의 '-을란지라' 구문 연구)

  • KIM, Ji-eun
    • Korean Linguistics / v.74 / pp.1-24 / 2017
  • This paper investigated the -eullanjira sentence as a construction of the Southwestern dialect of Korea. Five informants were selected to build the main -eullanjira corpus, and its semantic, syntactic, and morphological characteristics were identified through corpus analysis. First, a construction grammar perspective was adopted to capture the semantic and syntactic characteristics of -eullanjira, and the construction was established as "X-do Y-eullanjira Z". Syntactically, -do was found to be a common auxiliary particle: nouns, adverbs, verbs, and adjectives can appear in the X position, while only verbs and adjectives can appear in the Y position. Subject-honorific, causative, and passive prefinal endings can co-occur with Y, whereas tense and modal prefinal endings cannot. Z is an embedded clause with the semantic feature [-DOUBT], meaning 'it should be done undoubtedly'. The formation of -eullanjira was then examined both diachronically and synchronically, and a Middle Korean conjunctive ending corresponding to -eullanjira, namely -landai, was identified. Finally, -eullanjira was newly analyzed as [[-eulla-]+[-n-ji-ra]].

Compilation of the Yonsei English Learner Corpus (YELC) 2011 and Its Use for Understanding Current Usage of English by Korean Pre-university Students (한국 예비 대학생의 영어 사용 특성 파악을 위한 대규모 공개 영어 학습자 코퍼스 구축 및 분석)

  • Rhee, Seok-Chae;Jung, Chae Kwan
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.1019-1029 / 2014
  • In recent years, researchers have become increasingly interested in the creation and pedagogical use of English learner corpora. Many studies have shown that learner corpora can not only make a significant contribution to second language acquisition research but also contribute to the construction and evaluation of language tests by advancing our understanding of English learners. So far, however, little attention has been paid to corpora of Korean EFL (English as a foreign language) learners. The Yonsei English Learner Corpus (YELC 2011) is a specialized, monolingual, synchronic Korean EFL learner corpus developed by Yonsei University from 2011 to 2012. Over 3,000 Korean high school graduates (or equivalents) who had been accepted by Yonsei University for further study participated in the project. The corpus consists of 6,572 written texts (1,085,828 words) at nine English proficiency levels. In this paper, we describe its compilation, and more specifically how we turned a text archive into a corpus. After introducing this corpusization process, we report striking insights into the specific linguistic features exhibited by Korean learners of English at different proficiency levels. The study also discusses potential uses of the YELC 2011, which is now freely available for research purposes.

Named Entity Recognition Using Distant Supervision and Active Bagging (원거리 감독과 능동 배깅을 이용한 개체명 인식)

  • Lee, Seong-hee;Song, Yeong-kil;Kim, Hark-soo
    • Journal of KIISE / v.43 no.2 / pp.269-274 / 2016
  • Named entity recognition is the process of extracting named entities from sentences and determining their categories. Previous studies on named entity recognition have relied primarily on supervised learning, which requires a large training corpus manually annotated with named entity categories; constructing such a corpus by hand is a time-consuming and labor-intensive job. We propose a semi-supervised learning method that minimizes the cost of training corpus construction and rapidly improves the performance of named entity recognition. The proposed method uses distant supervision to construct the initial training corpus and then effectively removes noisy sentences from it using active bagging, an ensemble method combining bagging and active learning. In the experiments, the proposed method improved the F1-score of named entity recognition from 67.36% to 76.42% after 15 rounds of active bagging.
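The noise-filtering idea can be pictured with a small, self-contained sketch. This is not the authors' implementation: the toy corpus, the feature set, and the use of scikit-learn's BaggingClassifier over per-token logistic regressions are illustrative assumptions; the sketch only shows how ensemble disagreement can flag suspect sentences in a distantly supervised NER corpus.

```python
# Sketch: flag noisy sentences in a distantly supervised NER corpus
# by measuring disagreement between the distant labels and a bagged ensemble.
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Distantly supervised corpus: (token, BIO tag) pairs per sentence (toy data).
corpus = [
    [("Kim", "B-PER"), ("visited", "O"), ("Seoul", "B-LOC")],
    [("Seoul", "B-LOC"), ("is", "O"), ("large", "O")],
    [("Kim", "O"), ("Seoul", "O"), ("met", "O")],   # likely noisy labels
] * 10

def token_features(sent, i):
    """Very small feature set: the token itself and its neighbours."""
    return {
        "w": sent[i][0],
        "prev": sent[i - 1][0] if i > 0 else "<s>",
        "next": sent[i + 1][0] if i < len(sent) - 1 else "</s>",
    }

X = [token_features(s, i) for s in corpus for i in range(len(s))]
y = [tag for s in corpus for _, tag in s]

# Bagged ensemble of simple per-token classifiers.
model = make_pipeline(
    DictVectorizer(),
    BaggingClassifier(LogisticRegression(max_iter=200), n_estimators=10),
)
model.fit(X, y)

def disagreement(sent):
    """Fraction of tokens where the ensemble vote contradicts the distant label."""
    feats = [token_features(sent, i) for i in range(len(sent))]
    pred = model.predict(feats)
    return sum(p != t for p, (_, t) in zip(pred, sent)) / len(sent)

ranked = sorted(corpus, key=disagreement, reverse=True)
print("most suspicious sentence:", ranked[0])
```

In an active-bagging loop, the highest-disagreement sentences would be removed or handed to an annotator, the ensemble retrained, and the process repeated (15 rounds in the paper's experiments).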

Study on Difference of Wordvectors Analysis Induced by Text Preprocessing for Deep Learning (딥러닝을 위한 텍스트 전처리에 따른 단어벡터 분석의 차이 연구)

  • Ko, Kwang-Ho
    • The Journal of the Convergence on Culture Technology / v.8 no.5 / pp.489-495 / 2022
  • Changes in corpus preprocessing make a difference to the results of LSTM deep learning (D/L) for language model construction. In this study, an LSTM model was trained on a well-known body of literary poems (Ki Hyung-do's work) as the training corpus. Once training was completed, two word-vector sets were obtained, one for the original text and one for a version of the corpus with word endings erased. The similarity/analogy operation results, the positions of the word vectors on a 2D plane, and the texts generated by the language models were inspected for the two corpus versions. The words suggested by the similarity/analogy operations differ between the versions, but both sets are plausibly related given the corpus's character as a literary work. The positions of the word vectors also differ between the versions, yet the words retain their basic meanings; likewise, the generated texts differ but both keep the flavor of the original style. These results suggest that a deep-learning language model can be a useful tool for appreciating literature in objective and diverse ways.
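A toy sketch can make the preprocessing effect concrete. The paper derives word vectors from an LSTM language model; here, as a stand-in assumption, gensim Word2Vec embeddings are trained on two versions of a tiny made-up corpus (original text vs. endings stripped) and their nearest-neighbour lists are compared. The texts and the ending-stripping rule are illustrative only.

```python
# Sketch: compare embeddings built from two differently preprocessed
# versions of the same tiny corpus.
from gensim.models import Word2Vec

raw_lines = [
    "안개는 강을 따라 흐른다",
    "안개가 도시를 덮었다",
    "강이 어둠 속으로 흐른다",
] * 20

def strip_endings(token):
    # Crude illustrative rule: drop a few common particle characters.
    return token.rstrip("는가이을를")

orig_corpus = [line.split() for line in raw_lines]
stripped_corpus = [[strip_endings(t) for t in line.split()] for line in raw_lines]

m_orig = Word2Vec(orig_corpus, vector_size=32, min_count=1, epochs=100, seed=1)
m_strip = Word2Vec(stripped_corpus, vector_size=32, min_count=1, epochs=100, seed=1)

# The nearest neighbours of the "same" word can change once endings are erased.
print(m_orig.wv.most_similar("안개는", topn=3))
print(m_strip.wv.most_similar("안개", topn=3))
```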

English Predicate Inversion: Towards Data-driven Learning

  • Kim, Jong-Bok;Kim, Jin-Young
    • Journal of English Language & Literature / v.56 no.6 / pp.1047-1065 / 2010
  • English inversion constructions are hard for non-native speakers to learn and difficult to teach, mainly because of their intriguing grammatical and discourse properties. This paper addresses grammatical issues in learning and teaching the so-called 'predicate inversion (PI)' construction (e.g., Equally important in terms of forest depletion is the continuous logging of the forests). In particular, we chart the grammatical (distributional, syntactic, semantic, pragmatic) properties of the PI construction and argue for data-driven teaching of English grammar. To depart from the armchair style of grammar teaching (relying on author-made simple sentences), our teaching method introduces data-driven instruction. With a total of 25 university students in a grammar-related class, the students together analyzed the British Component of the International Corpus of English (ICE-GB), which contains about one million words distributed across a variety of textual categories. We identified a total of 290 PI sentences (206 from spoken and 87 from written texts). The preposed syntactic categories of the PI involve five main types: AdvP, PP, VP(ed/ing), NP, AP, and so, all of which function as the complement of the copula. In terms of discourse, we observed, supporting Birner and Ward's (1998) account, that these preposed phrases represent more familiar information than the postposed subject. The corpus examples yielded three possible types: the preposed element is discourse-old while the postposed one is discourse-new, as in Putting wire mesh over a few bricks is a good idea; both preposed and postposed elements are discourse-new, as in But a fly in the ointment is inflation; or both elements are discourse-old, as in Racing with him on the near-side is Rinus. The dominant occurrence of the PI in spoken texts also supports the view that balance (or scene-setting) in information structure is the main trigger for the use of the PI construction. After being exposed to the real data and to in-depth syntactic and information-structure analysis of the PI construction, the students gained a far clearer understanding of the construction in question and realized that grammar does not exist on its own but interacts tightly with other important components such as information structure. The study points toward data-driven and interactive grammar teaching.

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, that is, speech data in the source language paired with equivalent text data in the target language. Most publicly available speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. We therefore created EnKoST-C, centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle timing information and bilingual sentence embeddings. As a result, we built a 559-hour English-Korean speech translation corpus. The proposed sentence alignment approach showed excellent performance, with an f-measure of 0.96. We also report the baseline performance of an English-Korean speech translation model trained on EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
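The alignment idea (subtitle timing plus bilingual sentence embeddings) can be sketched as follows. This is not the EnKoST-C pipeline: the sentence-transformers model name, the equal weighting of the two signals, and the 0.6 acceptance threshold are assumptions made for illustration.

```python
# Sketch: align English and Korean subtitle segments by combining
# temporal overlap with bilingual sentence-embedding similarity.
from sentence_transformers import SentenceTransformer, util

# Subtitle segments: (start_sec, end_sec, text)
en_subs = [(0.0, 3.2, "Thank you for coming."),
           (3.4, 7.0, "Today I want to talk about corpora.")]
ko_subs = [(0.0, 3.1, "와 주셔서 감사합니다."),
           (3.3, 7.2, "오늘은 말뭉치에 대해 이야기하려 합니다.")]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def time_overlap(a, b):
    """Fraction of the shorter segment covered by the temporal overlap."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    shorter = min(a[1] - a[0], b[1] - b[0])
    return max(0.0, end - start) / shorter if shorter > 0 else 0.0

en_emb = model.encode([t for _, _, t in en_subs], convert_to_tensor=True)
ko_emb = model.encode([t for _, _, t in ko_subs], convert_to_tensor=True)
cos = util.cos_sim(en_emb, ko_emb)

# Combine both signals; pairs above a threshold become parallel sentences.
for i, en in enumerate(en_subs):
    for j, ko in enumerate(ko_subs):
        score = 0.5 * time_overlap(en, ko) + 0.5 * float(cos[i][j])
        if score > 0.6:  # illustrative threshold
            print(f"aligned: {en[2]!r} <-> {ko[2]!r} (score={score:.2f})")
```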

Analyzing Vocabulary Characteristics of Colloquial Style Corpus and Automatic Construction of Sentiment Lexicon (구어체 말뭉치의 어휘 사용 특징 분석 및 감정 어휘 사전의 자동 구축)

  • Kang, Seung-Shik;Won, HyeJin;Lee, Minhaeng
    • Smart Media Journal / v.9 no.4 / pp.144-151 / 2020
  • In a mobile environment, communication takes place via SMS text messages. The vocabulary used in SMS texts can be expected to differ in kind from the vocabulary used in general Korean literary-style sentences. For example, a typical literary-style sentence is properly initiated, terminated, and well constructed, whereas SMS texts often omit components or replace them with abbreviated forms. To analyze these vocabulary usage characteristics, existing colloquial-style and literary-style corpora are used. The experiment compares the vocabulary usage characteristics of two colloquial corpora, an SMS text corpus and the Naver Sentiment Movie Corpus, against a written Korean corpus. For the comparison, the part-of-speech tag for adjectives (VA) was used as the standard, and distinctive collexeme analysis was used to measure collostructional strength. As a result, it was confirmed that adjectives related to emotional expression such as 'good-', 'sorry-', and 'joy-' were preferred in the SMS text corpus, while adjectives related to evaluative expressions were preferred in the Naver Sentiment Movie Corpus. Word embeddings were then used to automatically construct a sentiment lexicon from the extracted adjectives with high collostructional strength, yielding a total of 343,603 sentiment expressions.
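A small sketch of the distinctive collexeme step, under toy counts: each adjective's frequency in the SMS corpus versus the movie-review corpus goes into a 2x2 contingency table, and Fisher's exact test (one common choice of association measure; the paper's exact statistic may differ) ranks how strongly each adjective is attracted to one corpus.

```python
# Sketch: rank adjectives by their attraction to one corpus over another
# using Fisher's exact test on 2x2 contingency tables. Counts are toy numbers.
from scipy.stats import fisher_exact

# adjective stem -> (frequency in SMS corpus, frequency in movie-review corpus)
freqs = {
    "좋-":    (420, 150),   # 'good-'
    "미안하-": (310, 40),    # 'sorry-'
    "지루하-": (90, 380),    # 'boring-' (an evaluative adjective)
}
total_sms = 10_000     # total adjective tokens in the SMS corpus (toy)
total_movie = 12_000   # total adjective tokens in the movie corpus (toy)

def strength(adj):
    a, b = freqs[adj]
    table = [[a, b], [total_sms - a, total_movie - b]]
    odds, p = fisher_exact(table)
    # Smaller p with odds > 1 means stronger attraction to the SMS corpus.
    return odds, p

for adj in sorted(freqs, key=lambda x: strength(x)[1]):
    odds, p = strength(adj)
    side = "SMS" if odds > 1 else "movie reviews"
    print(f"{adj}: attracted to {side} (odds={odds:.2f}, p={p:.2e})")
```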

A Corpus Construction System of Consistent Document Categorization and Keyword Extraction (일관성 있는 문서분류 및 키워드 추출을 위한 말뭉치 구축도구)

  • Jeong, Jae-Cheol;Park, So-Young;Chang, Ju-No;Kihl, Tae-Suk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.675-676 / 2010
  • As the number of documents rapidly increases in the web environment, efficient document classification approaches are required to retrieve the desired information from so many documents. In this paper, we propose a corpus construction tool that annotates each product description document with classification information such as category, keywords, and usage. The proposed tool helps a human annotator identify this information correctly by providing a verification step in which the input of other human annotators is checked. In addition, annotators can build the corpus anytime and anywhere through the proposed web-based system.
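A minimal sketch of the cross-annotator verification idea, using an assumed data model rather than the proposed system's actual schema: a record enters the corpus only when a second annotator independently confirms the category.

```python
# Sketch: accept an annotation into the corpus only after a second annotator verifies it.
from dataclasses import dataclass

@dataclass
class Annotation:
    doc_id: str
    category: str
    keywords: list
    usage: str
    annotator: str
    verified_by: str = ""            # filled in when a second annotator confirms

def verify(ann: Annotation, reviewer: str, reviewer_category: str) -> bool:
    """A second annotator re-labels the category; only matching labels are accepted."""
    if reviewer == ann.annotator:
        return False                 # verification must come from a different annotator
    if reviewer_category != ann.category:
        return False                 # disagreement: send the document back for re-annotation
    ann.verified_by = reviewer
    return True

ann = Annotation("doc-001", "electronics", ["battery", "charger"], "travel gear", "annotator_A")
corpus = [ann] if verify(ann, "annotator_B", "electronics") else []
print(len(corpus), "verified document(s) in the corpus")
```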


A Spelling Error Correction Model in Korean Using a Correction Dictionary and a Newspaper Corpus (교정사전과 신문기사 말뭉치를 이용한 한국어 철자 오류 교정 모델)

  • Lee, Se-Hee;Kim, Hark-Soo
    • The KIPS Transactions: Part B / v.16B no.5 / pp.427-434 / 2009
  • With the rapid evolution of the Internet and mobile environments, text containing spelling errors such as newly coined words and abbreviated forms is widely used. These spelling errors decrease the readability of texts and thus make it difficult to develop NLP (natural language processing) applications. To resolve this problem, we propose a spelling error correction model that uses a spelling error correction dictionary and a newspaper corpus. The proposed model has the advantage that the cost of data construction is low, because it uses a newspaper corpus, which is easy to obtain, as its training corpus. In addition, it does not require external modules such as a morphological analyzer or a word-spacing error correction system, because it uses a simple string-matching method based on the correction dictionary. In experiments with a newspaper corpus and a short-message corpus collected from real mobile phones, the proposed model showed good performance on various evaluation measures (a mis-correction rate of 7.3%, an F1-measure of 97.3%, and a false-positive rate of 1.1%).
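The correction strategy described above can be sketched in a few lines: look up erroneous strings in a correction dictionary and, when several standard forms are possible, let frequencies from a newspaper corpus decide. The dictionary entries and counts below are toy examples, not the paper's resources.

```python
# Sketch: dictionary-based string-matching correction with newspaper-corpus
# frequencies used to choose among candidate corrections.
from collections import Counter

# correction dictionary: erroneous form -> candidate standard forms
correction_dict = {
    "어의없다": ["어이없다"],
    "멀 해": ["뭘 해", "무얼 해"],
}

# word frequencies estimated from an (assumed) newspaper corpus
newspaper_counts = Counter({"어이없다": 120, "뭘 해": 45, "무얼 해": 3})

def correct(text):
    """Replace each dictionary match with its most frequent candidate form."""
    for wrong, candidates in correction_dict.items():
        if wrong in text:
            best = max(candidates, key=lambda c: newspaper_counts[c])
            text = text.replace(wrong, best)
    return text

print(correct("진짜 어의없다, 지금 멀 해?"))
# -> "진짜 어이없다, 지금 뭘 해?"
```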

Semi-Automatic Annotation Tool to Build Large Dependency Tree-Tagged Corpus

  • Park, Eun-Jin;Kim, Jae-Hoon;Kim, Chang-Hyun;Kim, Young-Kill
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.385-393 / 2007
  • Corpora annotated with rich linguistic information are required to develop robust statistical natural language processing systems. Building such corpora, however, is expensive, labor-intensive, and time-consuming. To support this work, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. Compared with other annotation tools, ours is characterized by the following features: application independence, error localization, powerful error checking, instant sharing of annotated information, and user-friendliness. Using the tool, we have annotated 100,904 Korean sentences with dependency structures. The number of annotators was 33, the average annotation time was about 4 minutes per sentence, and the total annotation period was 5 months. We are confident that the tool yields accurate and consistent annotations while reducing labor and time.
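The kind of error checking such a tool performs can be illustrated with an assumed representation (not the tool's actual format) of a dependency-annotated sentence: the check localizes errors by reporting the offending token index, for example when a sentence has no single root or a head chain forms a cycle.

```python
# Sketch: a dependency-annotated sentence plus a consistency check
# that localizes errors (single root, no cycles among heads).
from dataclasses import dataclass

@dataclass
class Token:
    idx: int      # 1-based position in the sentence
    form: str     # surface form (eojeol)
    head: int     # index of the governing token, 0 for the root
    label: str    # dependency label

sentence = [
    Token(1, "철수가", 3, "NP_SBJ"),
    Token(2, "밥을", 3, "NP_OBJ"),
    Token(3, "먹었다", 0, "VP"),
]

def check_tree(tokens):
    """Return a list of (token index, message) pairs for localized errors."""
    errors = []
    roots = [t.idx for t in tokens if t.head == 0]
    if len(roots) != 1:
        errors.append((0, f"expected exactly one root, found {len(roots)}"))
    heads = {t.idx: t.head for t in tokens}
    for t in tokens:
        seen, cur = set(), t.idx
        while cur != 0:
            if cur in seen:
                errors.append((t.idx, "cycle detected in head chain"))
                break
            seen.add(cur)
            cur = heads.get(cur, 0)
    return errors

print(check_tree(sentence) or "no errors")
```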
