• Title/Abstract/Keywords: Corpus-based

Search Results: 568

Opinion: Strategy of Semi-Automatically Annotating a Full-Text Corpus of Genomics & Informatics

  • Park, Hyun-Seok
    • Genomics & Informatics
    • /
    • v.16 no.4
    • /
    • pp.40.1-40.3
    • /
    • 2018
  • There is a communal need for an annotated corpus consisting of the full texts of biomedical journal articles. In response to community needs, a prototype version of the full-text corpus of Genomics & Informatics, called GNI version 1.0, has recently been published, with 499 annotated full-text articles available as a corpus resource. However, GNI needs to be updated, as the texts were shallow-parsed and annotated with several existing parsers. I list issues associated with upgrading annotations and give an opinion on the methodology for developing the next version of the GNI corpus, based on a semi-automatic strategy for more linguistically rich corpus annotation.

'Because of Doing' and 'Because of Happening': A Corpus-based Analysis of Korean Causal Conjunctives, -nula(ko) and -nun palamey

  • Oh, Sang-Suk
    • Language and Information
    • /
    • v.8 no.2
    • /
    • pp.131-147
    • /
    • 2004
  • This paper examines the two Korean causal conjunctive suffixes, -nula(ko) and -nun palamey, based on corpus linguistic analysis. Many of the linguistic accounts available, both in pedagogical references and in the linguistics literature, provide incomplete analyses of these suffixes based on fabricated linguistic data. Using naturally occurring, real linguistic data, this paper examines the syntactic and semantic structures of the two causal suffixes through a consideration of three areas of corpus linguistic analysis: token frequencies, collocations, and semantic prosody. An analysis based on concordance data reveals that the two causal connectives, -nula(ko) and -nun palamey, have more differences than similarities in terms of syntactic and semantic constraints. The idiosyncratic structures of the two suffixes are discussed in terms of the same-subject condition, verb selection, the same-agent condition, the synchronicity condition, and negative semantic prosody.
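
The concordance-based analysis mentioned above (token frequencies and collocates drawn from naturally occurring data) can be illustrated with a minimal keyword-in-context sketch; the toy romanized sentence, the whitespace tokenization, and the window size are assumptions for illustration and do not reflect the author's actual data or procedure.

```python
from collections import Counter

def kwic(tokens, target, window=3):
    """Return keyword-in-context lines and collocate counts for a target token."""
    lines, collocates = [], Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            lines.append(f"{' '.join(left):>30} [{tok}] {' '.join(right)}")
            collocates.update(left + right)
    return lines, collocates

# Toy romanized tokens; a real study would use morpheme-segmented Korean text.
corpus = "na nun chayk ul ilk nula ko pap ul mos mek ess ta".split()
lines, coll = kwic(corpus, "nula")
print("\n".join(lines))
print(coll.most_common(3))
```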

Usage analysis of vocabulary in Korean high school English textbooks using multiple corpora (코퍼스를 통한 고등학교 영어교과서의 어휘 분석)

  • Kim, Young-Mi;Suh, Jin-Hee
    • English Language & Literature Teaching
    • /
    • v.12 no.4
    • /
    • pp.139-157
    • /
    • 2006
  • As the Communicative Approach has become the norm in foreign language teaching, the objectives of teaching English in Korean schools have changed radically. The focus in high school English textbooks has shifted from mere mastery of structures to communicative proficiency. This paper studies five polysemous words that appear in twelve high school English textbooks used in Korea. The twelve textbooks are incorporated into a single corpus and analyzed to classify the usage of the selected words. The usage of each word is then compared with that of three other corpus-based sources: the BNC (British National Corpus) Sampler, ICE Singapore (International Corpus of English for Singapore), and the Collins COBUILD learner's dictionary, which is based on the corpus "The Bank of English". The comparisons carried out as part of this study demonstrate that Korean textbooks do not always supply the full range of meanings of polysemous words.
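
One minimal way to compare how a polysemous word's senses are distributed across corpora is to tally manually assigned usage labels and compare their proportions, roughly in the spirit of the comparison described above; the sense labels and counts below are invented for illustration and are not the study's data.

```python
from collections import Counter

def sense_profile(labels):
    """Proportion of each manually assigned sense label for one word in one corpus."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

# Hypothetical sense labels for the word "run" in a textbook corpus vs. a reference corpus.
textbook = ["move fast"] * 18 + ["operate"] * 2
reference = ["move fast"] * 10 + ["operate"] * 6 + ["flow"] * 4

print(sense_profile(textbook))   # textbook corpus covers few senses
print(sense_profile(reference))  # reference corpus shows a fuller range
```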

An Attempt to Measure the Familiarity of Specialized Japanese in the Nursing Care Field

  • Haihong Huang;Hiroyuki Muto;Toshiyuki Kanamaru
    • Asia Pacific Journal of Corpus Research
    • /
    • v.4 no.2
    • /
    • pp.57-74
    • /
    • 2023
  • Having a firm grasp of technical terms is essential for learners of Japanese for Specific Purposes (JSP). This research aims to analyze Japanese nursing care vocabulary based on objective corpus-based frequency and subjectively rated word familiarity. For this purpose, we constructed a text corpus centered on the National Examination for Certified Care Workers to extract nursing care keywords. The log-likelihood ratio (LLR) was used as the statistical criterion for keyword identification, yielding a list of 300 keywords as target words for a further word recognition survey. The survey involved 115 participants, of whom 51 were certified care workers (CW group) and 64 were individuals from the general public (GP group). These participants rated the familiarity of the target keywords through crowdsourcing. Given the limited sample size, Bayesian linear mixed models were used to estimate word familiarity ratings. Our study conducted a comparative analysis of word familiarity between the CW group and the GP group, revealing key terms that are crucial for professionals but potentially unfamiliar to the general public. By focusing on these terms, instructors can bridge the knowledge gap more efficiently.
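
The keyword-identification step above relies on the log-likelihood ratio; the sketch below shows one common two-corpus (Dunning-style) formulation with invented counts, which may differ from the exact statistic and data used in the paper.

```python
import math

def llr_keyness(a, b, c, d):
    """Dunning-style log-likelihood ratio for a word occurring
    a times in a target corpus of c tokens and
    b times in a reference corpus of d tokens."""
    e1 = c * (a + b) / (c + d)   # expected frequency in the target corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in the reference corpus
    term1 = a * math.log(a / e1) if a else 0.0
    term2 = b * math.log(b / e2) if b else 0.0
    return 2 * (term1 + term2)

# Hypothetical counts: a term frequent in the exam corpus, rare in a general corpus.
print(round(llr_keyness(a=120, b=15, c=500_000, d=5_000_000), 2))
```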

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often written in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results that are far from users' intentions. Even though a lot of progress has been made over the last years in enhancing the performance of search engines in order to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, thereby avoiding expensive sense-tagging processes. It evaluates the effectiveness of the method using the Naïve Bayes model, a supervised learning algorithm, on the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiments, the Korean standard unabridged dictionary and the Sejong Corpus were evaluated both in combination and separately, using cross-validation. Only nouns, the targets of word sense disambiguation, were selected. 93,522 word senses among 265,655 nouns and 56,914 sentences from related proverbs and examples were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of a Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given an extracted term vector and the sense vector model built during the pre-processing stage, sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show that better precision and recall are obtained with the merged corpus. This study suggests that the approach can practically enhance the performance of Internet search engines and help us understand the meaning of sentences more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of the senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
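
A minimal sketch of the kind of supervised Naïve Bayes disambiguation described above, assuming a bag-of-context-words model with add-one smoothing; the toy senses and context words are invented and are not drawn from the Korean standard unabridged dictionary or the Sejong Corpus.

```python
import math
from collections import defaultdict

class NaiveBayesWSD:
    """Minimal Naive Bayes word sense disambiguation over context words."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha                                   # add-one smoothing
        self.sense_counts = defaultdict(int)
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def train(self, examples):
        # examples: list of (sense_id, context_word_list) pairs
        for sense, context in examples:
            self.sense_counts[sense] += 1
            for w in context:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def predict(self, context):
        total = sum(self.sense_counts.values())
        best, best_lp = None, float("-inf")
        for sense, n in self.sense_counts.items():
            lp = math.log(n / total)                         # log prior
            denom = sum(self.word_counts[sense].values()) + self.alpha * len(self.vocab)
            for w in context:
                lp += math.log((self.word_counts[sense][w] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

# Toy dictionary-example training data for a noun with two senses; purely illustrative.
wsd = NaiveBayesWSD()
wsd.train([("ship", ["sea", "sail", "port"]),
           ("ship", ["harbor", "sail"]),
           ("pear", ["fruit", "sweet", "tree"])])
print(wsd.predict(["sail", "sea"]))   # -> "ship"
```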

Use of ultrasonography for improving reproductive efficiency in cows II. Comparative evaluation of ovarian structures using ultrasonography and plasma progesterone analysis in subestrous dairy cows (초음파 진단장치를 이용한 축우의 번식효율증진에 관한 연구 II. 무발정 젖소에서 초음파검사 및 progesterone 농도측정에 의한 난소 구조물의 비교평가)

  • Son, Chang-ho;Kang, Byong-kyu;Choi, Han-sun;Kang, Hyun-gu;Paik, In-seok;Suh, Guk-hyun
    • Korean Journal of Veterinary Research
    • /
    • v.38 no.3
    • /
    • pp.642-651
    • /
    • 1998
  • The accuracy of ultrasonography for determining the presence of a functional corpus luteum in subestrous dairy cows was investigated, using a radioimmunoassay for plasma progesterone as the reference. Luteal status (high or low progesterone concentration) was diagnosed in 534 cows using B-mode transrectal ultrasonography. The accuracy of ultrasonography was 96.3% and 88.8% in cows with and without a functional corpus luteum, respectively. Of 362 cows diagnosed with a functional corpus luteum by ultrasonographic examination, 20 cows were diagnosed with a non-functional corpus luteum by analysis of plasma progesterone concentrations (false positives). Of 172 cows diagnosed with a non-functional corpus luteum by ultrasonographic examination, 13 cows were diagnosed with a functional corpus luteum based on the plasma progesterone assay (false negatives). Most corpora lutea with a well-defined border and homogeneous echotexture were diagnosed as functional. All cows in which no corpus luteum was detected by ultrasonographic examination were diagnosed as having a non-functional corpus luteum. The corpora lutea of cows with false-positive diagnoses showed a homogeneous echotexture and were above 15 mm in diameter, but the corpus luteum was non-functional within Day 5 (Day 0 being the day of ovulation) or after Day 19. The corpora lutea of cows with false-negative diagnoses showed a heterogeneous echogenicity and were below 15 mm in diameter, but the corpus luteum was functional after Day 5 or around Day 17. It was concluded that the accuracy of ultrasonography is excellent for determining the presence of a functional corpus luteum in subestrous dairy cows. Corpora lutea diagnosed as false positives or false negatives were in developing or regressing states; thus, serial ultrasonographic examinations, two or three times, are required to diagnose these corpora lutea accurately.
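
Assuming the reported accuracies are computed relative to the progesterone-confirmed groups (an interpretation of the figures above, not stated explicitly in the abstract), the percentages can be reconstructed as follows:

```latex
% Functional CL by progesterone: 342 true positives + 13 false negatives = 355 cows
% Non-functional CL:             159 true negatives + 20 false positives = 179 cows
\[
\frac{362 - 20}{(362 - 20) + 13} = \frac{342}{355} \approx 96.3\%,
\qquad
\frac{172 - 13}{(172 - 13) + 20} = \frac{159}{179} \approx 88.8\%
\]
```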

Design and Implementation of an English Part-of-Speech Tagging System Based on Transformation Rules (변형 규칙 기반 영어 품사 태깅 시스템의 설계 및 구현)

  • 이태식;이상윤;최병욱;김한우
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.527-530
    • /
    • 1998
  • In this paper, a transformation-based English part-of-speech tagging system is designed and implemented. The tagging system first tags the raw corpus, and transformation rules then correct the errors. Unlike traditional rule-based tagging systems, this system generates its rules automatically. Using a training corpus of 60,000 words, the transformation rules are generated automatically through iterative training. A method for calculating the positive effect of a transformation and for selecting transformation rules is proposed in order to generate more effective and correct transformations. Part of the Brown corpus and other English text is used as experimental data, and the performance of the transformation-based tagging system is demonstrated by measuring its accuracy.
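
A minimal sketch of how a transformation-based (Brill-style) tagger can score candidate rules by net error reduction; the single rule template (previous-tag context) and the toy tag sequences are assumptions for illustration, not the system's actual rule templates or data.

```python
from collections import Counter

def best_transformation(current_tags, gold_tags):
    """Pick the Brill-style rule 'change tag A to B when the previous tag is C'
    with the largest net error reduction on the training corpus."""
    fixes, breaks = Counter(), Counter()
    for i in range(1, len(current_tags)):
        a, g, c = current_tags[i], gold_tags[i], current_tags[i - 1]
        if a != g:
            fixes[(a, g, c)] += 1      # rule A->G in context C would fix this token
        else:
            breaks[(a, c)] += 1        # any rule rewriting A in context C would break it
    scored = {rule: n - breaks[(rule[0], rule[2])] for rule, n in fixes.items()}
    return max(scored, key=scored.get) if scored else None

# Toy example: an initial tagger mis-tags a word as MD after DT; the gold tag is NN.
current = ["DT", "MD", "VB"]
gold    = ["DT", "NN", "VB"]
print(best_transformation(current, gold))   # -> ('MD', 'NN', 'DT')
```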

Automatic Extraction of Collocations Based on Corpus Using Mutual Information (말뭉치에 기반한 상호정보를 이용한 연어의 자동 추출)

  • Lee, Ho-Suk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.4
    • /
    • pp.461-468
    • /
    • 1994
  • This paper describes the automatic extraction of collocations from a corpus. The collocations are extracted using co-occurrence frequency and mutual information between words. Five types of collocations are commonly defined in English: transitive verb and object, intransitive verb and subject, adjective and noun, verb and adverb, and adverb and adjective. In this paper another type of collocation, consisting of verb and preposition, is also recognized and extracted, so six types of collocations are extracted from the corpus.
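
The co-occurrence-frequency-plus-mutual-information filtering described above can be sketched as follows; the pointwise mutual information formulation, the thresholds, and the toy verb-object pairs are assumptions for illustration, not the paper's exact measure or data.

```python
import math
from collections import Counter

def extract_collocations(pairs, min_freq=3, min_mi=1.0):
    """Rank word pairs (e.g., verb-object candidates) by pointwise mutual information,
    keeping pairs that also pass a raw co-occurrence frequency threshold."""
    pair_counts = Counter(pairs)
    left_counts = Counter(w1 for w1, _ in pairs)
    right_counts = Counter(w2 for _, w2 in pairs)
    n = len(pairs)
    results = []
    for (w1, w2), f in pair_counts.items():
        if f < min_freq:
            continue
        mi = math.log2((f / n) / ((left_counts[w1] / n) * (right_counts[w2] / n)))
        if mi >= min_mi:
            results.append(((w1, w2), f, round(mi, 2)))
    return sorted(results, key=lambda x: -x[2])

# Hypothetical verb-object pairs extracted from a parsed corpus.
pairs = [("make", "decision")] * 5 + [("make", "coffee")] * 1 + [("take", "decision")] * 1 \
        + [("drink", "coffee")] * 4 + [("make", "progress")] * 3
print(extract_collocations(pairs))   # -> [(('drink', 'coffee'), 4, 1.49)]
```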

Noun Sense Disambiguation Based on Corpus and Conceptual Information (말뭉치와 개념정보를 이용한 명사 중의성 해소 방법)

  • 이휘봉;허남원;문경희;이종혁
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.2
    • /
    • pp.1-10
    • /
    • 1999
  • This paper proposes a noun sense disambiguation method based on a corpus and conceptual information. Previous research has restricted the use of linguistic knowledge to the lexical level. Since the knowledge extracted from a corpus is stored in the words themselves, such methods require a large amount of space for the knowledge and suffer from a low recall rate. In contrast, we resolve noun sense ambiguity by using concept co-occurrence information extracted from an automatically sense-tagged corpus. In an experimental evaluation, the method achieved an average precision of 82.4%, an improvement of 14.6% over the baseline. Considering that the test corpus is completely independent of the learning corpus, this is a promising result.
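
A minimal sketch of disambiguation by concept co-occurrence, assuming each sense of the target noun maps to a concept and that concept co-occurrence counts from a sense-tagged corpus are available; the concept labels and counts below are invented.

```python
from collections import defaultdict

def disambiguate(target_senses, context_concepts, cooc):
    """Choose the sense whose associated concept co-occurs most strongly
    with the concepts of the surrounding context words."""
    best, best_score = None, float("-inf")
    for sense, concept in target_senses.items():
        score = sum(cooc[concept].get(c, 0) for c in context_concepts)
        if score > best_score:
            best, best_score = sense, score
    return best

# Hypothetical concept co-occurrence counts learned from a sense-tagged corpus.
cooc = defaultdict(dict, {
    "VESSEL": {"WATER": 12, "TRANSPORT": 9, "FOOD": 1},
    "FRUIT":  {"FOOD": 15, "PLANT": 7, "WATER": 2},
})
senses = {"bae_ship": "VESSEL", "bae_pear": "FRUIT"}   # two senses of a homonymous noun
print(disambiguate(senses, ["FOOD", "PLANT"], cooc))   # -> "bae_pear"
```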

Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Part-of-Speech Tagged Corpus (품사 부착 말뭉치를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선)

  • Lim, Min-Kyu;Kim, Kwang-Ho;Kim, Ji-Hwan
    • MALSORI
    • /
    • no.67
    • /
    • pp.181-193
    • /
    • 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a part-of-speech (POS) tagged corpus. We investigate the 152 POS tags defined in the Lancaster-Oslo-Bergen (LOB) corpus and word-POS tag pairs, and derive a new vocabulary through word addition. Words paired with some POS tags have to be included in vocabularies of any size, whereas the inclusion of words paired with other POS tags varies with the target vocabulary size. The 152 POS tags are therefore categorized according to whether word addition depends on the size of the vocabulary. Using expert knowledge, we classify the POS tags first and then apply different word-addition strategies based on the POS tags paired with the words. The performance of the proposed method is measured in terms of coverage and is compared with that of vocabularies of the same size (5,000 words) derived from frequency lists. The coverage of the proposed method is 95.18% on a test short message service (SMS) text corpus, while the conventional vocabularies cover only 93.19% and 91.82% of the words appearing in the same SMS text corpus.
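
Coverage here is the share of test-corpus word tokens that appear in a given vocabulary; a minimal sketch with invented token lists (not the actual 5,000-word vocabularies or the SMS test corpus used in the paper):

```python
def coverage(vocabulary, test_tokens):
    """Percentage of test-corpus tokens covered by a recognition vocabulary."""
    vocab = set(vocabulary)
    covered = sum(1 for tok in test_tokens if tok in vocab)
    return 100.0 * covered / len(test_tokens)

# Hypothetical vocabulary and SMS-style test tokens.
vocab_5k = ["i", "will", "be", "late", "see", "you", "at", "home"]
test_sms = ["i", "will", "be", "latee", "c", "you", "at", "home"]
print(f"{coverage(vocab_5k, test_sms):.2f}%")   # -> 75.00%
```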
