• Title/Summary/Keyword: corpus-based lexical analysis


A Comparative Study on Korean Connective Morpheme '-myenseo' to the Chinese expression - based on Korean-Chinese parallel corpus (한국어 연결어미 '-면서'와 중국어 대응표현의 대조연구 -한·중 병렬 말뭉치를 기반으로)

  • YI, CHAO
    • Cross-Cultural Studies, v.37, pp.309-334, 2014
  • This study contrasts the Korean connective morpheme '-myenseo' with its Chinese counterpart expressions on the basis of a Korean-Chinese parallel corpus. Korean learners often struggle with connective morphemes, especially when there is a lexical gap between Korean and their mother tongue. '-myenseo' is one of the most frequently used Korean connective morphemes, and it is usually matched with Chinese coordinating conjunctions; according to the corpus, however, its Chinese counterparts go beyond coordinating conjunctions. Because the parallel corpus reveals a variety of Chinese expressions corresponding to '-myenseo', this study can help Chinese learners of Korean acquire the morpheme more easily. The study first discusses the semantic features and syntactic characteristics of '-myenseo'. Its principal semantic features are 'simultaneity' and 'conflict', and usage examples are analyzed to show how each reading arises. The syntactic characteristics of '-myenseo' are then examined with respect to subject constraints, predicate constraints, temporal constraints, mood constraints, and negation constraints, and the results are summarized in a table. The core of the study is Chapter 4, which contrasts '-myenseo' with its Chinese counterparts through analysis of the Korean-Chinese parallel corpus. The frequency of each Chinese counterpart is summarized in a table, which shows that the most common counterpart is the non-marker pattern: where Korean connects clauses with an explicit linguistic marker (the connective morpheme), Chinese often connects them only through their intrinsic logical relationship. The chapter therefore concludes that '-myenseo' corresponds to Chinese conjunctions, other expressions, non-marker patterns, and liberal translation patterns, a wider range than the conjunctions identified in earlier work. The final chapter summarizes the study and suggests its limitations and directions for future research.
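The frequency analysis described in the abstract can be approximated with a short script. Below is a minimal sketch, assuming aligned sentence pairs containing '-면서' have already been annotated with the category of their Chinese counterpart; the file name, column name, and category labels are hypothetical, not those of the study.

```python
from collections import Counter
import csv

# Hypothetical input: one CSV row per aligned sentence pair containing '-면서',
# with a manually annotated category for its Chinese counterpart
# (e.g. "conjunction", "expression", "non-marker", "liberal-translation").
def count_counterpart_categories(path):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["zh_counterpart_category"]] += 1
    return counts

if __name__ == "__main__":
    counts = count_counterpart_categories("myeonseo_aligned_pairs.csv")
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category:20s} {n:5d} ({n / total:.1%})")
```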

  • Automatic Construction of Korean Two-level Lexicon using Lexical and Morphological Information (어휘 및 형태 정보를 이용한 한국어 Two-level 어휘사전 자동 구축)

    • Kim, Bogyum; Lee, Jae Sung
      • KIPS Transactions on Software and Data Engineering, v.2 no.12, pp.865-872, 2013
    • The two-level morphology method is a rule-based approach to morphological analysis. It handles morphological transformation with rules and analyzes words using morpheme connection information in a lexicon. The approach is language-independent, and a Korean two-level system was also developed. Its practical use was limited, however, because it relied on a very small, manually built lexicon, and it also suffered from an over-generation problem. In this paper, we propose a method for automatically constructing a Korean two-level lexicon for PC-KIMMO from a morpheme-tagged corpus, together with a method for reducing over-generation using lexical information and sub-tags. Experiments showed that the proposed method reduced over-generation by 68% compared with the previous method, and that performance increased from 39% to 65% in f-measure.
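As a rough illustration of the construction step (not the authors' actual PC-KIMMO pipeline), the sketch below reads a morpheme-tagged corpus and records, for each morpheme entry, which tags were observed to follow it; restricting analyses to these observed continuations is one way sub-tags can curb over-generation. The corpus format shown is an assumption.

```python
from collections import defaultdict

def build_two_level_lexicon(tagged_sentences):
    """Collect morpheme entries and the tags observed to follow them.

    `tagged_sentences` is assumed to be an iterable of lists of
    (surface, tag) pairs, e.g. [("먹", "VV"), ("었", "EP"), ("다", "EF")].
    """
    lexicon = defaultdict(set)   # (surface, tag) -> tags that may follow it
    for sentence in tagged_sentences:
        for (surface, tag), nxt in zip(sentence, sentence[1:] + [None]):
            follower = nxt[1] if nxt else "#"   # '#' marks the end of the word
            lexicon[(surface, tag)].add(follower)
    return lexicon

if __name__ == "__main__":
    corpus = [[("먹", "VV"), ("었", "EP"), ("다", "EF")],
              [("먹", "VV"), ("는다", "EF")]]
    for (surface, tag), followers in sorted(build_two_level_lexicon(corpus).items()):
        print(surface, tag, "->", sorted(followers))
```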

    Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

    • Modi, Deepa; Nain, Neeta; Nehra, Maninder
      • Journal of Multimedia Information System, v.5 no.3, pp.147-154, 2018
    • Natural language processing (NLP) is an emerging research area that studies how machines can be used to perceive and manipulate text written in natural languages. Different tasks can be performed on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for Hindi. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories include noun, verb, time, date, number, and so on. Hindi is the official and most widely used language of India, and it is also among the top five most spoken languages of the world. A diverse range of POS taggers is available for English and other languages, but they cannot be applied directly to Hindi, because Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. This work therefore presents a POS tagging system for Hindi using a hybrid approach that combines probability-based and rule-based methods. Known words are tagged with a unigram probability model, whereas unknown words are tagged using various lexical and contextual features. Finite state automata are constructed to express the different rules, and regular expressions are used to implement them. A tagset of 29 standard part-of-speech tags was also prepared for this task; it includes two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large human-made corpus: the probability-based model improves automatic tagging, and the rule-based model bounds the requirement for an already tagged training corpus. Based on a very small labeled training set (around 9,000 words), the approach yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
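A minimal sketch of the hybrid idea follows: a unigram model tags known words with their most frequent tag, while regular-expression rules handle pattern-based tags such as numbers, dates, and times, and a default tag covers remaining unknown words. The tag names and patterns are illustrative assumptions, not the paper's 29-tag tagset.

```python
import re
from collections import Counter, defaultdict

DATE_RE = re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$")
TIME_RE = re.compile(r"^\d{1,2}:\d{2}(:\d{2})?$")
NUM_RE  = re.compile(r"^\d+([.,]\d+)?$")

def train_unigram(tagged_corpus):
    """tagged_corpus: iterable of (word, tag) pairs; keep each word's most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return {word: tags.most_common(1)[0][0] for word, tags in counts.items()}

def tag(words, unigram, default_tag="NN"):
    result = []
    for w in words:
        if w in unigram:                       # probability-based: known word
            result.append((w, unigram[w]))
        elif DATE_RE.match(w):                 # rule-based: pattern tags
            result.append((w, "DATE"))
        elif TIME_RE.match(w):
            result.append((w, "TIME"))
        elif NUM_RE.match(w):
            result.append((w, "NUM"))
        else:                                  # unknown-word fallback
            result.append((w, default_tag))
    return result

if __name__ == "__main__":
    model = train_unigram([("kitab", "NN"), ("hai", "VB"), ("kitab", "NN")])
    print(tag(["kitab", "12/05/2018", "10:30", "42", "naya"], model))
```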

    Company Name Discrimination in Tweets using Topic Signatures Extracted from News Corpus

    • Hong, Beomseok; Kim, Yanggon; Lee, Sang Ho
      • Journal of Computing Science and Engineering, v.10 no.4, pp.128-136, 2016
    • It is impossible for any human being to analyze the more than 500 million tweets that are generated per day. Lexical ambiguities on Twitter make it difficult to retrieve the desired data and relevant topics. Most of the solutions for the word sense disambiguation problem rely on knowledge base systems. Unfortunately, it is expensive and time-consuming to manually create a knowledge base system, resulting in a knowledge acquisition bottleneck. To solve the knowledge-acquisition bottleneck, a topic signature is used to disambiguate words. In this paper, we evaluate the effectiveness of various features of newspapers on the topic signature extraction for word sense discrimination in tweets. Based on our results, topic signatures obtained from a snippet feature exhibit higher accuracy in discriminating company names than those from the article body. We conclude that topic signatures extracted from news articles improve the accuracy of word sense discrimination in the automated analysis of tweets.
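The following sketch illustrates the general topic-signature idea under simple assumptions: the top TF-IDF terms of each company's news snippets form its signature, and a tweet is assigned to the company whose signature it overlaps most. This is a plain TF-IDF/overlap approximation, not the paper's exact extraction or scoring procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_signatures(snippets_by_company, top_k=20):
    """snippets_by_company: dict mapping company name -> list of news snippets."""
    signatures = {}
    for company, snippets in snippets_by_company.items():
        vec = TfidfVectorizer(stop_words="english")
        tfidf = vec.fit_transform(snippets)
        scores = np.asarray(tfidf.sum(axis=0)).ravel()   # aggregate term weights
        terms = vec.get_feature_names_out()
        top = sorted(zip(terms, scores), key=lambda x: -x[1])[:top_k]
        signatures[company] = {t for t, _ in top}
    return signatures

def discriminate(tweet, signatures):
    """Assign the tweet to the company whose signature it overlaps most."""
    tokens = set(tweet.lower().split())
    return max(signatures, key=lambda c: len(tokens & signatures[c]))

if __name__ == "__main__":
    sigs = build_signatures({
        "Apple":  ["apple unveils new iphone and ipad", "apple quarterly earnings rise"],
        "Amazon": ["amazon expands cloud services aws", "amazon opens a new warehouse"],
    })
    print(discriminate("excited about the new iphone launch", sigs))
```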

    Morphological Analyzer of Yonsei Univ., morany: Morphological Analysis based on Large Lexical Database Extracted from Corpus (연세대 형태소 분석기 morany: 말뭉치로부터 추출한 대량의 어휘 데이터베이스에 기반한 형태소 분석)

    • Yoon, Jun-Tae; Lee, Chung-Hee; Kim, Seon-Ho; Song, Man-Suk
      • Annual Conference on Human and Language Technology, 1999.10d, pp.92-98, 1999
    • This paper describes the morphological analysis system developed in the Department of Computer Science at Yonsei University. The fundamental characteristic of the Yonsei natural language processing system is, above all, that it is built on large corpora. For example, the morphological analysis dictionary was reconstructed through corpus processing, and a lexical database extracted from 30 million eojeols (word phrases) and refined by hand constrains a large portion of the analysis results, serving as a first stage of disambiguation. Compound word analysis is likewise carried out on the basis of a dictionary obtained from the corpus. Part-of-speech tagging is based on a bigram HMM and is reinforced with post-processing by lexical rules. The morphological analyzer and part-of-speech tagger constructed in this way are used in connection with a parser.
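For the tagging component mentioned above, the sketch below shows bigram-HMM part-of-speech tagging with Viterbi decoding in its simplest form; the toy tagset, add-alpha smoothing, and example sentence are assumptions and do not reproduce the morany system.

```python
import math
from collections import defaultdict, Counter

def train_bigram_hmm(tagged_sentences, alpha=0.1):
    """Estimate smoothed transition and emission probabilities from
    an iterable of sentences given as lists of (word, tag) pairs."""
    trans = defaultdict(Counter)   # previous tag -> counts of following tags
    emit = defaultdict(Counter)    # tag -> counts of emitted words
    vocab, tags = set(), set()
    for sent in tagged_sentences:
        prev = "<s>"
        for word, tag in sent:
            trans[prev][tag] += 1
            emit[tag][word] += 1
            vocab.add(word)
            tags.add(tag)
            prev = tag
        trans[prev]["</s>"] += 1

    def log_p_trans(prev, tag):
        return math.log((trans[prev][tag] + alpha) /
                        (sum(trans[prev].values()) + alpha * (len(tags) + 1)))

    def log_p_emit(tag, word):
        return math.log((emit[tag][word] + alpha) /
                        (sum(emit[tag].values()) + alpha * (len(vocab) + 1)))

    return sorted(tags), log_p_trans, log_p_emit

def viterbi(words, tags, log_p_trans, log_p_emit):
    """Return the most likely tag sequence for `words` under the bigram HMM."""
    column = {t: (log_p_trans("<s>", t) + log_p_emit(t, words[0]), [t]) for t in tags}
    for word in words[1:]:
        new_column = {}
        for t in tags:
            best_prev = max(tags, key=lambda p: column[p][0] + log_p_trans(p, t))
            score, path = column[best_prev]
            new_column[t] = (score + log_p_trans(best_prev, t) + log_p_emit(t, word),
                             path + [t])
        column = new_column
    best_last = max(tags, key=lambda t: column[t][0] + log_p_trans(t, "</s>"))
    return column[best_last][1]

if __name__ == "__main__":
    corpus = [[("나", "NP"), ("는", "JX"), ("학교", "NNG"), ("에", "JKB"), ("가", "VV"), ("ㄴ다", "EF")]]
    tagset, pt, pe = train_bigram_hmm(corpus)
    print(viterbi(["나", "는", "학교", "에", "가", "ㄴ다"], tagset, pt, pe))
```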


    Analysis on Vocabulary Used in School Newsletters of Korean elementary Schools: Focus on the areas of Busan, Ulsan and Gyeongnam (한국 초등학교 가정통신문의 어휘 특성 연구 -부산·울산·경남 지역을 중심으로-)

    • Kang, Hyunju
      • Journal of Korean language education, v.29 no.2, pp.1-23, 2018
    • This study analyzes the words and phrases that are frequently used in newsletters from Korean elementary schools. To this end, high-frequency words were selected from school newsletters, classified into content and function words, and examined by semantic domain. For the study, 1,000 school newsletters were collected in the areas of Busan, Ulsan and Gyeongnam. In terms of parts of speech, nouns, especially common nouns, appeared most frequently in the newsletters, followed by verbs and adjectives. This result suggests that, for immigrant women with a basic knowledge of Korean, providing translations of these words helps them get the message of school letters. School-related terms, such as those for facilities, regulations and school activities, as well as Sino-Korean (Chinese-character-based) vocabulary, are also common in the newsletters. Among verbs, words conveying requests and suggestions are used most; among adjectives, those expressing positive value and evaluation, or describing weather and season, are frequent as well.

    Building and Analyzing Panic Disorder Social Media Corpus for Automatic Deep Learning Classification Model (딥러닝 자동 분류 모델을 위한 공황장애 소셜미디어 코퍼스 구축 및 분석)

    • Lee, Soobin; Kim, Seongdeok; Lee, Juhee; Ko, Youngsoo; Song, Min
      • Journal of the Korean Society for information Management, v.38 no.2, pp.153-172, 2021
    • This study builds a deep learning-based classification model to examine the characteristics of panic disorder and to identify panic disorder-prone documents in a panic disorder corpus constructed for the study. For this purpose, 5,884 social media documents were collected, directly annotated on the basis of the mental disorder diagnostic manual, and classified into panic disorder-prone and non-panic-disorder documents. TF-IDF scores were then calculated and word co-occurrence analysis was performed to analyze the lexical characteristics of the corpus. In addition, symptom frequencies and the co-occurrence of annotated symptoms were calculated to analyze the characteristics of panic disorder symptoms and the relationships between them. We also evaluated the performance of the deep learning-based classification model. Three pre-trained models, multilingual BERT, KoBERT, and KcBERT, were adopted, and KcBERT showed the best performance among them. By examining the characteristics of panic disorder, this study can help the early diagnosis and treatment of people suffering from related symptoms, and it extends the field of mental illness research to social media.
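The corpus-analysis steps mentioned in the abstract (TF-IDF scoring and word co-occurrence counting) can be approximated as below; the example documents and whitespace tokenization are placeholders, and the KcBERT fine-tuning itself is not shown.

```python
from itertools import combinations
from collections import Counter

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_top_terms(documents, top_k=10):
    """Return the terms with the highest aggregate TF-IDF weight."""
    vec = TfidfVectorizer()
    matrix = vec.fit_transform(documents)
    scores = np.asarray(matrix.sum(axis=0)).ravel()
    terms = vec.get_feature_names_out()
    return sorted(zip(terms, scores), key=lambda x: -x[1])[:top_k]

def cooccurrence(documents):
    """Count how often two distinct terms appear in the same document."""
    pairs = Counter()
    for doc in documents:
        tokens = sorted(set(doc.lower().split()))
        pairs.update(combinations(tokens, 2))
    return pairs

if __name__ == "__main__":
    docs = ["sudden fear heart racing dizziness",
            "fear of dying heart racing shortness of breath",
            "calm day no symptoms"]
    print(tfidf_top_terms(docs, top_k=5))
    print(cooccurrence(docs).most_common(5))
```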

    A Model of English Part-Of-Speech Determination for English-Korean Machine Translation (영한 기계번역에서의 영어 품사결정 모델)

    • Kim, Sung-Dong; Park, Sung-Hoon
      • Journal of Intelligence and Information Systems, v.15 no.3, pp.53-65, 2009
    • Part-of-speech determination is necessary for resolving part-of-speech ambiguity in English-Korean machine translation. Part-of-speech ambiguity causes high parsing complexity and makes accurate translation difficult. To solve the problem, the ambiguity must be resolved after lexical analysis and before parsing. This paper proposes the CatAmRes model, which resolves part-of-speech ambiguity, and compares its performance with that of other part-of-speech tagging methods. CatAmRes determines the part of speech using probability distributions obtained from Bayesian network training and statistical information based on the Penn Treebank corpus. The proposed model consists of a Calculator and a POSDeterminer: the Calculator computes the degree of appropriateness of each candidate part of speech, and the POSDeterminer determines the part of speech of each word based on the calculated values. In the experiments, we measure the performance on sentences from the WSJ, Brown, and IBM corpora.
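The two-component structure described above can be sketched as follows. The Bayesian-network training is not reproduced here; a toy appropriateness score that combines a lexical probability with a previous-tag probability stands in for it, so the class names mirror the paper but the scoring is an assumption.

```python
from collections import defaultdict, Counter

class Calculator:
    """Computes a degree of appropriateness for each candidate part of speech.

    Toy stand-in: the score combines P(tag | word) with P(tag | previous tag),
    both estimated from a tagged corpus, instead of the paper's Bayesian network.
    """
    def __init__(self, tagged_sentences):
        self.word_tag = defaultdict(Counter)
        self.prev_tag = defaultdict(Counter)
        for sent in tagged_sentences:
            prev = "<s>"
            for word, tag in sent:
                self.word_tag[word][tag] += 1
                self.prev_tag[prev][tag] += 1
                prev = tag

    def appropriateness(self, word, prev, tag):
        lex = (self.word_tag[word][tag] + 0.1) / (sum(self.word_tag[word].values()) + 1.0)
        ctx = (self.prev_tag[prev][tag] + 0.1) / (sum(self.prev_tag[prev].values()) + 1.0)
        return lex * ctx

class POSDeterminer:
    """Chooses, for each word, the part of speech with the highest score."""
    def __init__(self, calculator, tagset):
        self.calc, self.tagset = calculator, tagset

    def determine(self, words):
        result, prev = [], "<s>"
        for w in words:
            best = max(self.tagset, key=lambda t: self.calc.appropriateness(w, prev, t))
            result.append((w, best))
            prev = best
        return result

if __name__ == "__main__":
    corpus = [[("the", "DT"), ("run", "NN")], [("they", "PRP"), ("run", "VBP")]]
    calc = Calculator(corpus)
    print(POSDeterminer(calc, ["DT", "NN", "PRP", "VBP"]).determine(["they", "run"]))
```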


    An Example-Based English Learning Environment for Writing

    • Miyoshi, Yasuo; Ochi, Youji; Okamoto, Ryo; Yano, Yoneo
      • Proceedings of the Korea Intelligent Information System Society Conference, 2001.01a, pp.292-297, 2001
    • In learning to write in a second or foreign language, a learner has to acquire not only lexical and syntactic knowledge but also the skill of choosing suitable words for the content he or she wants to express. To support this, a learning system should infer the learner's intention and present example phrases related to that content. However, a learner cannot always express the content of the desired phrase as input to the system, so the system needs a function for diagnosing the learner's intention. It also needs an analysis function that scores the similarity between the learner's intention and the phrases stored in the system, at both the syntactic and the idiomatic level, in order to present appropriate example phrases. In this paper, we propose the architecture of an interactive support method for English writing learning based on an analogical search technique over sample phrases from corpora. The system can show candidate variations of the next phrase to write, together with analogous sentences from the corpora that express what the learner wants to say.


    Intra-Sentence Segmentation using Maximum Entropy Model for Efficient Parsing of English Sentences (효율적인 영어 구문 분석을 위한 최대 엔트로피 모델에 의한 문장 분할)

    • Kim Sung-Dong
      • Journal of KIISE: Software and Applications, v.32 no.5, pp.385-395, 2005
    • Long-sentence analysis has been a critical problem in machine translation because of its high complexity, and intra-sentence segmentation methods have been proposed to reduce parsing complexity. This paper presents an intra-sentence segmentation method based on a maximum entropy probability model that increases the coverage and accuracy of segmentation. We construct rules for choosing candidate segmentation positions with a learning method that uses the lexical context of words tagged as segmentation positions, and we build a model that assigns a probability to each candidate position. The lexical contexts are extracted from a corpus tagged with segmentation positions and are incorporated into the probability model. We construct training data from Wall Street Journal sentences and evaluate intra-sentence segmentation on sentences from four different domains. The experiments show about 88% accuracy and about 98% coverage of segmentation. The proposed method also improves parsing efficiency by a factor of 4.8 in speed and 3.6 in space.
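A compact sketch of the scoring step follows, using logistic regression (equivalent in form to a maximum entropy classifier) over simple lexical-context features of candidate segmentation positions; the feature set, data format, and example sentences are assumptions, not the paper's model.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def context_features(words, i):
    """Lexical context of a candidate segmentation position just before word i."""
    return {"prev": words[i - 1], "curr": words[i],
            "next": words[i + 1] if i + 1 < len(words) else "</s>"}

def train(training_sentences):
    """training_sentences: list of (words, positions) pairs, where `positions`
    is the set of indices at which the sentence may be segmented."""
    X, y = [], []
    for words, positions in training_sentences:
        for i in range(1, len(words)):
            X.append(context_features(words, i))
            y.append(1 if i in positions else 0)
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000)   # binary logistic regression is a maximum entropy classifier
    clf.fit(vec.fit_transform(X), y)
    return vec, clf

if __name__ == "__main__":
    train_data = [(["I", "ate", ",", "and", "she", "slept"], {3}),
                  (["He", "ran", "because", "it", "rained"], {2})]
    vec, clf = train(train_data)
    test = ["I", "sang", ",", "and", "he", "danced"]
    for i in range(1, len(test)):
        p = clf.predict_proba(vec.transform([context_features(test, i)]))[0, 1]
        print(f"split before {test[i]!r}: p = {p:.2f}")
```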

