• Title/Summary/Keyword: corpus tools


Error Correction and Praat Script Tools for the Buckeye Corpus of Conversational Speech (벅아이 코퍼스 오류 수정과 코퍼스 활용을 위한 프랏 스크립트 툴)

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences / v.4 no.1 / pp.29-47 / 2012
  • The purpose of this paper is to show how to convert the label files of the Buckeye Corpus of Spontaneous Speech [1] into Praat format and to introduce some of the Praat scripts that will enable linguists to study various aspects of spoken American English present in the corpus. During the conversion process, several types of errors were identified and corrected either manually or automatically by the use of scripts. The Praat script tools that have been developed can help extract from the corpus massive amounts of phonetic measures such as the VOT of plosives, the formants of vowels, word frequency information and speech rates that span several consecutive words. The script tools can extract additional information concerning the phonetic environment of the target words or allophones.
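
The paper's conversion scripts are written in Praat's own scripting language and are not reproduced here, but the core of the conversion step can be sketched in Python: take word intervals of the form (start, end, label), as one might parse from a Buckeye label file, and serialize them as a Praat "short" TextGrid. The input tuples below are invented examples, not corpus data.

```python
# Minimal sketch (not the paper's actual Praat scripts): serialize a list of
# (start, end, label) word intervals into Praat's "short" TextGrid format.

def to_short_textgrid(intervals, tier_name="word"):
    """Serialize one interval tier as a Praat short-format TextGrid string."""
    xmin = intervals[0][0]
    xmax = intervals[-1][1]
    lines = [
        'File type = "ooTextFile"',
        'Object class = "TextGrid"',
        '',
        str(xmin), str(xmax),
        '<exists>',
        '1',                      # one tier
        '"IntervalTier"',
        f'"{tier_name}"',
        str(xmin), str(xmax),
        str(len(intervals)),      # number of intervals
    ]
    for start, end, label in intervals:
        lines += [str(start), str(end), f'"{label}"']
    return "\n".join(lines) + "\n"

words = [(0.0, 0.32, "the"), (0.32, 0.71, "dog"), (0.71, 1.05, "barked")]
tg = to_short_textgrid(words)
print(tg.splitlines()[1])   # Object class = "TextGrid"
```

A file written this way opens directly in Praat, where measures such as VOT or vowel formants can then be queried per interval.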

Citation Practices in Academic Corpora: Implications for EAP Writing

  • Min, Su-Jung
    • English Language & Literature Teaching / v.10 no.3 / pp.113-126 / 2004
  • Explicit reference to the work of other authors is an essential feature of most academic research writing. Corpus analysis of academic text can reveal much about what writers actually do and why they do so. The application of corpus tools in language education has been well documented by many scholars (Pedersen, 1995; Swales, 1990; Thompson, 2000), who demonstrate how computer technology can assist in the effective analysis of corpus-based data. For teaching purposes, this recent research provides insights in the area of English for Academic Purposes (EAP). The need for such support is evident when students have to use appropriate citations in their writing. Using Swales' (1990) division of citation forms into integral and non-integral and Thompson and Tribble's (2001) classification scheme, this paper codifies academic texts in a corpus. The texts are academic research articles from different disciplines. The results lead into a comparison of the citation practices in different disciplines. Finally, it is argued that the information obtained in this study is useful for EAP writing courses in EFL countries.
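
The integral/non-integral distinction the abstract refers to is mechanical enough to sketch: integral citations name the author in the running text ("Swales (1990) argues..."), while non-integral ones keep author and year inside parentheses ("(Swales, 1990)"). The regexes and sample sentences below are illustrative simplifications, not the paper's actual coding scheme.

```python
import re

# Toy classifier for Swales' (1990) integral vs. non-integral citation forms.
INTEGRAL = re.compile(r"\b[A-Z][a-z]+ \(\d{4}\)")       # Name (Year)
NON_INTEGRAL = re.compile(r"\([A-Z][a-z]+, \d{4}\)")    # (Name, Year)

def classify_citations(sentence):
    """Count integral and non-integral citation forms in one sentence."""
    return {
        "integral": len(INTEGRAL.findall(sentence)),
        "non_integral": len(NON_INTEGRAL.findall(sentence)),
    }

print(classify_citations("Swales (1990) defines the move structure."))
# {'integral': 1, 'non_integral': 0}
print(classify_citations("Citation form matters (Thompson, 2000)."))
# {'integral': 0, 'non_integral': 1}
```

A real coding pass would also need to handle multi-author citations, "et al.", and page references, which is why such studies typically code semi-manually.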


A review of corpus research trends in Korean education (한국어 교육 관련 국내 코퍼스 연구 동향)

  • Shim, Eunji
    • Asia Pacific Journal of Corpus Research / v.2 no.2 / pp.43-48 / 2021
  • The aim of this study is to analyze trends in corpus-driven research in Korean education. For this purpose, a total of 14 papers were found online with keywords including Korean corpus and Korean education. The data were categorized into three areas: vocabulary education, grammar education, and corpus data construction methods. The analysis results suggest that the number of corpus studies in the field of Korean education is not large but continues to increase, especially in research on data construction tools. This suggests a significant demand for corpus-driven studies in the field of Korean education.

Automatic Generation of Code-clone Reference Corpus (코드클론 표본 집합체 자동 생성기)

  • Lee, Hyo-Sub;Doh, Kyung-Goo
    • Journal of Software Assessment and Valuation / v.7 no.1 / pp.29-39 / 2011
  • To evaluate the quality of clone detection tools, we should know how many clones a tool misses. Hence we need a standard code-clone reference corpus for a carefully chosen set of sample source codes. The reference corpora available so far have been built by manually collecting clones from the results of various existing tools. This paper presents a tree-pattern-based clone detection tool that can be used for automatic generation of a reference corpus. Our tool is compared with CloneDR for precision and with Bellon's reference corpus for recall. It finds no false positives and 2 to 3 times more clones than CloneDR. Compared to Bellon's reference corpus, our tool shows a 93%-to-100% recall rate and detects far more clones.
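
The tree-pattern idea can be illustrated independently of the paper's tool: two code fragments are clone candidates when their syntax trees have the same shape once identifiers and literal values are abstracted away (so-called type-2 clones). The sketch below uses Python's own `ast` module and invented sample functions; the paper's tool targets other languages and a richer pattern language.

```python
import ast
from collections import defaultdict

def pattern(node):
    """Structural fingerprint of an AST subtree, ignoring names and constants."""
    kids = ",".join(pattern(c) for c in ast.iter_child_nodes(node))
    return f"{type(node).__name__}({kids})"

def find_clones(source):
    """Group top-level function definitions by their abstracted tree pattern."""
    buckets = defaultdict(list)
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            buckets[pattern(node)].append(node.name)
    return [names for names in buckets.values() if len(names) > 1]

code = """
def add_fee(price): return price * 1.1
def add_tax(cost):  return cost * 1.2
def greet(name):    print("hi", name)
"""
print(find_clones(code))   # [['add_fee', 'add_tax']]
```

Because the fingerprint ignores identifiers and constants, `add_fee` and `add_tax` fall into the same bucket while `greet` does not, which is exactly the behavior a reference corpus generator needs to enumerate clone pairs exhaustively.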

Detecting and correcting errors in Korean POS-tagged corpora (한국어 품사 부착 말뭉치의 오류 검출 및 수정)

  • Choi, Myung-Gil;Seo, Hyung-Won;Kwon, Hong-Seok;Kim, Jae-Hoon
    • Journal of Advanced Marine Engineering and Technology / v.37 no.2 / pp.227-235 / 2013
  • The quality of part-of-speech (POS) annotation in a corpus plays an important role in developing POS taggers. There are, however, several kinds of errors in Korean POS-tagged corpora such as the Sejong Corpus: annotation errors, spelling errors, and insertion and/or deletion of unexpected characters. In this paper, we propose a method for detecting annotation errors using error patterns, and we also develop a tool for correcting them effectively. Based on the proposed method, we have hand-corrected the annotation errors in the Sejong POS-tagged corpus using the developed tool. As a result, correction is at least 9 times faster than without the tool. We therefore observe that the proposed method is effective for correcting annotation errors in POS-tagged corpora.
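
Pattern-based error detection of this kind can be sketched as a lookup over implausible (word, tag) pairs and tag bigrams. The patterns and example sentence below are hypothetical illustrations, not the paper's actual error patterns for the Sejong tagset.

```python
# Hedged sketch of pattern-based annotation-error detection. An "error
# pattern" maps an implausible (surface form, tag) pair or tag bigram to a
# diagnostic message; all patterns here are invented for illustration.

WORD_TAG_ERRORS = {          # hypothetical (word, wrong tag) -> suggested tag
    ("하다", "NNG"): "VV",   # "hada" tagged as common noun instead of verb
}
TAG_BIGRAM_ERRORS = {        # hypothetical implausible tag sequences
    ("JKS", "JKS"): "two consecutive subject particles",
}

def detect_errors(tagged_sentence):
    """Return (index, message) pairs for matches against the error patterns."""
    errors = []
    for i, (word, tag) in enumerate(tagged_sentence):
        if (word, tag) in WORD_TAG_ERRORS:
            errors.append((i, f"retag {word}/{tag} as {WORD_TAG_ERRORS[(word, tag)]}"))
        if i > 0 and (tagged_sentence[i - 1][1], tag) in TAG_BIGRAM_ERRORS:
            errors.append((i, TAG_BIGRAM_ERRORS[(tagged_sentence[i - 1][1], tag)]))
    return errors

sent = [("학생", "NNG"), ("이", "JKS"), ("이", "JKS"), ("하다", "NNG")]
print(detect_errors(sent))
```

A correction tool then jumps the annotator directly to each flagged index, which is where the reported 9x speed-up over unassisted correction plausibly comes from.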

Semi-Automatic Annotation Tool to Build Large Dependency Tree-Tagged Corpus

  • Park, Eun-Jin;Kim, Jae-Hoon;Kim, Chang-Hyun;Kim, Young-Kill
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.385-393 / 2007
  • Corpora annotated with rich linguistic information are required to develop robust statistical natural language processing systems. Building such corpora, however, is expensive, labor-intensive, and time-consuming. To support this work, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. Compared with other annotation tools, ours is characterized by the following features: independence from applications, localization of errors, powerful error checking, instant sharing of annotated information, and user-friendliness. Using our tool, we have annotated 100,904 Korean sentences with dependency structures. The number of annotators was 33, the average annotation time was about 4 minutes per sentence, and the total annotation period was 5 months. We are confident that we can obtain accurate and consistent annotations as well as reduced labor and time.


PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim Jae-Hoon;Park Eun-Jin
    • The KIPS Transactions:PartB / v.13B no.1 s.104 / pp.63-70 / 2006
  • In general, a corpus contains a large amount of linguistic information and is widely used in the fields of natural language processing and computational linguistics. The creation of such a corpus, however, is expensive, labor-intensive, and time-consuming. To alleviate this problem, annotation tools for building corpora with rich linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency tree-tagged corpus. The ideal would be to create the corpus fully automatically, without annotators' intervention, but in practice this is impossible. The proposed tool is semi-automatic, like most other annotation tools, and is designed to edit errors generated by basic analyzers such as a part-of-speech tagger and a (partial) parser. It is also designed to avoid repetitive work while editing errors and to be easy and convenient to use. Using the proposed annotation tool, 10,000 Korean sentences containing over 20 words each were annotated with dependency structures. Eight annotators worked 4 hours a day for 2 months. We are confident that we can obtain accurate and consistent annotations as well as reduced labor and time.
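
The "powerful error checking" such annotation tools advertise typically includes structural validation of each annotated tree. Neither tool in these two papers is publicly specified, so the following is only a generic sketch of what a dependency-tree consistency check might verify: every word has an in-range head, there is exactly one root, and the head chains contain no cycles. `heads[i]` encodes the head index of word `i`, with `-1` marking the root.

```python
# Illustrative consistency check for a dependency annotation (not the actual
# tool from the paper). heads[i] is the head index of word i, or -1 for root.

def check_dependency_tree(heads):
    """Return a list of problems: out-of-range heads, wrong root count, cycles."""
    problems = []
    n = len(heads)
    roots = [i for i, h in enumerate(heads) if h == -1]
    if len(roots) != 1:
        problems.append(f"expected exactly one root, found {len(roots)}")
    for i, h in enumerate(heads):
        if h != -1 and not 0 <= h < n:
            problems.append(f"word {i}: head {h} out of range")
    for i in range(n):                 # follow each head chain, detect cycles
        seen, j = set(), i
        while j != -1 and j not in seen:
            seen.add(j)
            j = heads[j] if (0 <= heads[j] < n or heads[j] == -1) else -1
        if j != -1:
            problems.append(f"cycle involving word {i}")
    return problems

print(check_dependency_tree([1, -1, 1]))   # [] : a valid tree
print(check_dependency_tree([1, 0, 1]))    # no root, plus cycle reports
```

Running such checks immediately after each edited sentence is what lets a semi-automatic tool localize errors before they propagate through the corpus.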

A Corpus-Based Study on Korean EFL Learners' Use of English Logical Connectors

  • Ha, Myung-Jeong
    • International Journal of Contents / v.10 no.4 / pp.48-52 / 2014
  • The purpose of this study was to examine 30 logical connectors in the essay writing of Korean university students for comparison with their use in similar types of native English writing. The main questions addressed were as follows: Do Korean EFL students tend to over- or underuse logical connectors? What types of connectors differentiate Korean learners from native use? To answer these questions, EFL learner data were compared with data from native speakers using computerized corpora and linguistic software tools to speed up the initial stage of the linguistic analysis. The analysis revealed that Korean EFL learners tend to overuse logical connectors in the initial position of the sentence, and that they tend to overuse additive connectors such as 'moreover', 'besides', and 'furthermore', whereas they underuse contrastive connectors such as 'yet' and 'instead'. On the basis of the results, pedagogical implications are drawn concerning the need to teach the semantic, stylistic, and syntactic behavior of logical connectors.
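
The comparison behind such over-/underuse claims can be sketched concretely: connector frequencies are normalized (here per 1,000 words) in each corpus, and sentence-initial placement is counted separately. The connector list is taken from the abstract; the sample "corpus" is a toy string, not the study's data.

```python
import re

CONNECTORS = {"moreover", "besides", "furthermore", "yet", "instead"}

def connector_stats(text):
    """Per-1,000-word connector rates and count of connector-initial sentences."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    per_1000 = {c: 1000 * words.count(c) / len(words) for c in CONNECTORS}
    initial = sum(1 for s in sentences
                  if s.split()[0].lower().strip(",") in CONNECTORS)
    return per_1000, initial

learner = "Moreover, the test is hard. Besides, we study. Moreover, we try."
per_1000, initial = connector_stats(learner)
print(initial)   # 3 : every sentence starts with a connector
```

Computing the same statistics over a native reference corpus and comparing the normalized rates is what licenses claims like "overuse of 'moreover'" or "underuse of 'yet'".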

Analysis of the English Textbooks in North Korean First Middle School (북한 제1중학교 영어교과서 분석)

  • Hwang, Seo-yeon;Kim, Jeong-ryeol
    • The Journal of the Korea Contents Association / v.17 no.11 / pp.242-251 / 2017
  • For the purposes of this research, a corpus was created from the English textbooks of the "First Middle School" for the gifted in North Korea, and its linguistic characteristics were analyzed. Although many studies have identified the traits of English textbooks in North Korea's general middle schools, little focus has been placed on the English textbooks used at North Korea's First Middle School. First, the structure of the English textbooks for the first, second, fourth, and sixth grades, procured from the Information Center on North Korea, was reviewed, after which the corpus was created. Then, using WordSmith Tools 7.0, the linguistic properties and high-frequency content words appearing in the first-grade English textbook were analyzed in detail. Basic statistics indicated that while vocabulary size did not increase as students progressed through the grades, the words used tended to diversify incrementally. Meanwhile, the distribution of high-frequency content words by grade showed a large difference between the content words used in the English texts of each grade, and it was the subject matter of the texts that determined this difference.
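
The basic statistics such an analysis rests on are token count, type count, and their ratio: equal token counts with more types is exactly the "same vocabulary size, but more diverse" pattern the abstract reports. The two sample "textbook" strings below are invented for illustration, not drawn from the North Korean corpus.

```python
import re

def basic_stats(text):
    """Token count, type count, and type-token ratio for one text."""
    tokens = re.findall(r"[a-zA-Z]+", text.lower())
    types = set(tokens)
    return {"tokens": len(tokens), "types": len(types),
            "ttr": len(types) / len(tokens)}

grade1 = "the cat sat on the mat the cat ran"
grade2 = "pupils study science and pupils read widely every day"
print(basic_stats(grade1)["types"], basic_stats(grade2)["types"])   # 6 8
```

Both toy texts have nine tokens, but the second spreads them over more types, so its type-token ratio is higher even though its vocabulary is no larger in raw count.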

Development of a Baseline Platform for Spoken Dialog Recognition System (대화음성인식 시스템 구현을 위한 기본 플랫폼 개발)

  • Chung Minhwa;Seo Jungyun;Lee Yong-Jo;Han Myungsoo
    • Proceedings of the KSPS conference / 2003.05a / pp.32-35 / 2003
  • This paper describes our recent work on developing a baseline platform for Korean spoken dialog recognition. In this work, we have collected a speech corpus of about 65 hours with auditory transcriptions. Linguistic information at various levels, such as morphology, syntax, semantics, and discourse, is attached to the speech database using automatic or semi-automatic tools for tagging linguistic information.
