• Title/Summary/Keyword: Part-of-speech tagging for Korean


Morpheme Recovery Based on Naïve Bayes Model (NB 모델을 이용한 형태소 복원)

  • Kim, Jae-Hoon;Jeon, Kil-Ho
    • The KIPS Transactions:PartB / v.19B no.3 / pp.195-200 / 2012
  • Because Korean is agglutinative, part-of-speech (POS) tagging is difficult without morphological analysis, and in morphological analysis the various surface forms produced by spelling change must be recovered into their base forms. This is one of the notorious problems in Korean morphological analysis, and it has been solved with morpheme recovery rules, which generate morphological ambiguity that is then resolved by POS tagging. In this paper, we propose a morpheme recovery scheme based on machine learning methods, namely Naïve Bayes models. The input features of the models are the syllables surrounding the syllable where the spelling change occurred, and the categories of the models are the recovered syllables. A POS tagging system with the proposed model achieved an F1-score of 97.5% on the ETRI tree-tagged corpus, which shows that the proposed model is very useful for handling morpheme recovery in Korean.
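The recovery-as-classification idea in this abstract can be sketched with a tiny Naïve Bayes classifier: the features are the syllables around the changed position, and the class is the recovered form. The training pairs and feature names below are illustrative toy assumptions, not the paper's data.

```python
from collections import Counter, defaultdict
import math

class NaiveBayesRecovery:
    """Predict a recovered syllable sequence from its surrounding context."""

    def __init__(self):
        self.class_counts = Counter()
        self.feat_counts = defaultdict(Counter)  # class -> feature -> count
        self.vocab = set()

    def train(self, examples):
        # examples: list of (context_features, recovered_form)
        for feats, label in examples:
            self.class_counts[label] += 1
            for f in feats:
                self.feat_counts[label][f] += 1
                self.vocab.add(f)

    def predict(self, feats):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, c in self.class_counts.items():
            lp = math.log(c / total)  # class prior
            denom = sum(self.feat_counts[label].values()) + len(self.vocab)
            for f in feats:
                # Laplace-smoothed feature likelihood
                lp += math.log((self.feat_counts[label][f] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Toy usage: recover the contracted ending from its neighbouring syllables.
model = NaiveBayesRecovery()
model.train([
    (("prev=가", "next=서"), "아서"),
    (("prev=먹", "next=서"), "어서"),
    (("prev=가", "next=도"), "아도"),
])
print(model.predict(("prev=가", "next=서")))  # -> 아서
```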

Sequence-to-sequence based Morphological Analysis and Part-Of-Speech Tagging for Korean Language with Convolutional Features (Sequence-to-sequence 기반 한국어 형태소 분석 및 품사 태깅)

  • Li, Jianri;Lee, EuiHyeon;Lee, Jong-Hyeok
    • Journal of KIISE / v.44 no.1 / pp.57-62 / 2017
  • Traditional Korean morphological analysis and POS tagging methods usually consist of two steps: (1) generate hypotheses of all possible combinations of morphemes for the given input, and (2) perform a POS tagging search for the optimal result. The first step requires additional resource dictionaries, and its errors can propagate to the second step. In this paper, we tried to solve this problem in an end-to-end fashion using a sequence-to-sequence model with convolutional features. In experiments on the Sejong corpus, our approach achieved a 97.15% F1-score at the morpheme level with 95.33% and 60.62% precision at the word and sentence levels, respectively, and a 96.91% F1-score at the morpheme level with 95.40% and 60.62% precision at the word and sentence levels, respectively.
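The traditional two-step baseline described here (hypothesis generation, then a tagging search) can be sketched as follows; the morpheme dictionary and tag-bigram scores are toy assumptions, not real resources.

```python
# Step 1: enumerate every dictionary-backed morpheme segmentation of the input.
# Step 2: score each hypothesis with tag-transition statistics and keep the best.
DICT = {"먹": ["VV"], "었": ["EP"], "다": ["EF"], "먹었": ["NNG"]}
BIGRAM = {("<s>", "VV"): 1.0, ("VV", "EP"): 1.0, ("EP", "EF"): 1.0,
          ("<s>", "NNG"): 0.5, ("NNG", "EF"): 0.1}

def hypotheses(word):
    """All segmentations whose pieces appear in the dictionary."""
    if not word:
        yield []
        return
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in DICT:
            for tag in DICT[piece]:
                for rest in hypotheses(word[i:]):
                    yield [(piece, tag)] + rest

def score(hyp):
    """Product of tag-transition scores (0 for unseen transitions)."""
    s, prev = 1.0, "<s>"
    for _, tag in hyp:
        s *= BIGRAM.get((prev, tag), 0.0)
        prev = tag
    return s

best = max(hypotheses("먹었다"), key=score)
print(best)  # -> [('먹', 'VV'), ('었', 'EP'), ('다', 'EF')]
```

The end-to-end model in the paper replaces both steps with a single learned sequence transduction, so no dictionary or explicit hypothesis lattice is needed.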

Lessons from Developing an Annotated Corpus of Patient Histories

  • Rost, Thomas Brox;Huseth, Ola;Nytro, Oystein;Grimsmo, Anders
    • Journal of Computing Science and Engineering / v.2 no.2 / pp.162-179 / 2008
  • We have developed a tool for annotation of electronic health record (EHR) data. Currently we are in the process of manually annotating a corpus of Norwegian general practitioners' EHRs with mainly linguistic information. The purpose of this project is to attain a linguistically annotated corpus of patient histories from general practice. This corpus will be put to future use in medical language processing and information extraction applications. The paper outlines some of our practical experiences from developing such a corpus and, in particular, the effects of semi-automated annotation. We have also done some preliminary experiments with part-of-speech tagging based on our corpus. The results indicated that relevant training data from the clinical domain gives better results for the tagging task in this domain than training the tagger on a corpus from a more general domain. We are planning to expand the corpus annotations with medical information at a later stage.

Integrated Indexing Method using Compound Noun Segmentation and Noun Phrase Synthesis (복합명사 분할과 명사구 합성을 이용한 통합 색인 기법)

  • Won, Hyung-Suk;Park, Mi-Hwa;Lee, Geun-Bae
    • Journal of KIISE:Software and Applications / v.27 no.1 / pp.84-95 / 2000
  • In this paper, we propose an integrated indexing method with compound noun segmentation and noun phrase synthesis. Statistical information is used in the compound noun segmentation, and natural language processing techniques are carefully utilized in the noun phrase synthesis. Firstly, we choose index terms from simple words through morphological analysis and part-of-speech tagging results. Secondly, noun phrases are automatically synthesized from the syntactic analysis results; if syntactic analysis fails, only the morphological analysis and tagging results are applied. Thirdly, we select compound nouns from the tagging results and then segment and re-synthesize them using statistical information. In this way, segmented and synthesized terms are used together as index terms to supplement the single terms. We demonstrate the effectiveness of the proposed integrated indexing method for Korean compound noun processing using KTSET2.0 and KRIST SET, which are standard test collections for Korean information retrieval.
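The statistical segmentation step can be sketched as: among all dictionary-backed splits of a compound noun, keep the one whose parts are most frequent in a corpus. The frequency table below is an illustrative assumption, not real corpus counts.

```python
# Toy corpus frequencies for candidate noun parts.
FREQ = {"정보": 900, "검색": 800, "시스템": 700, "정보검색": 50}

def splits(word):
    """All segmentations of `word` into known nouns."""
    if not word:
        yield []
        return
    for i in range(1, len(word) + 1):
        head = word[:i]
        if head in FREQ:
            for rest in splits(word[i:]):
                yield [head] + rest

def seg_score(parts):
    # Geometric-mean-style score so splits with more parts
    # are not unfairly favoured by raw products of counts.
    p = 1.0
    for w in parts:
        p *= FREQ[w]
    return p ** (1.0 / len(parts))

best = max(splits("정보검색시스템"), key=seg_score)
print(best)  # -> ['정보', '검색', '시스템']
```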

Korean Lexical Disambiguation Based on Statistical Information (통계정보에 기반을 둔 한국어 어휘중의성해소)

  • Park, Ha-Kyu;Kim, Young-Taek
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.2 / pp.265-275 / 1994
  • Lexical disambiguation is one of the most basic tasks in natural language processing, underlying applications such as speech recognition/synthesis, information retrieval, and corpus tagging. This paper describes a Korean lexical disambiguation mechanism in which disambiguation is performed on the basis of statistical information collected from corpora. In this mechanism, token tags corresponding to the results of morphological analysis are used instead of part-of-speech tags for the purpose of detailed disambiguation. The proposed lexical selection function shows considerably high accuracy, since lexical characteristics of Korean, such as the concordance of endings or postpositions, are well reflected in it. Two disambiguation methods, a unique selection method and a multiple selection method, are provided so that they can be chosen appropriately according to the application area.

Predicting the Unemployment Rate Using Social Media Analysis

  • Ryu, Pum-Mo
    • Journal of Information Processing Systems / v.14 no.4 / pp.904-915 / 2018
  • We demonstrate how social media content can be used to predict the unemployment rate, a real-world indicator. We present a novel method for predicting the unemployment rate using social media analysis based on natural language processing and statistical modeling. The system collects social media contents including news articles, blogs, and tweets written in Korean, and then extracts data for modeling using part-of-speech tagging and sentiment analysis techniques. The autoregressive integrated moving average with exogenous variables (ARIMAX) and autoregressive with exogenous variables (ARX) models for unemployment rate prediction are fit using the analyzed data. The proposed method quantifies the social moods expressed in social media contents, whereas the existing methods simply present social tendencies. Our model derived a 27.9% improvement in error reduction compared to a Google Index-based model in the mean absolute percentage error metric.
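An ARX-style fit of the kind described can be sketched with ordinary least squares: y_t = c + a*y_{t-1} + b*x_t, where y stands for the unemployment rate and x for an exogenous social-media sentiment signal. The data and coefficients below are synthetic, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)            # exogenous sentiment index (synthetic)
y = np.zeros(n)
for t in range(1, n):             # simulate y with known coefficients
    y[t] = 0.5 + 0.6 * y[t - 1] + 0.3 * x[t] + 0.01 * rng.normal()

# Design matrix: intercept, lagged y, exogenous x; fit by least squares.
X = np.column_stack([np.ones(n - 1), y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
c, a, b = coef
print(a, b)  # estimates close to the true 0.6 and 0.3
```

The full ARIMAX model in the paper additionally differences the series and models moving-average error terms; this sketch shows only the autoregressive-with-exogenous-input core.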

Towards Effective Entity Extraction of Scientific Documents using Discriminative Linguistic Features

  • Hwang, Sangwon;Hong, Jang-Eui;Nam, Young-Kwang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1639-1658 / 2019
  • Named entity recognition (NER) is an important technique for improving the performance of data mining and big data analytics. In previous studies, NER systems have been employed to identify named-entities using statistical methods based on prior information or linguistic features; however, such methods are limited in that they are unable to recognize unregistered or unlearned objects. In this paper, a method is proposed to extract objects, such as technologies, theories, or person names, by analyzing the collocation relationship between certain words that simultaneously appear around specific words in the abstracts of academic journals. The method is executed as follows. First, the data is preprocessed using data cleaning and sentence detection to separate the text into single sentences. Then, part-of-speech (POS) tagging is applied to the individual sentences. After this, the appearance and collocation information of the other POS tags is analyzed, excluding the entity candidates, such as nouns. Finally, an entity recognition model is created based on analyzing and classifying the information in the sentences.
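The collocation-analysis step can be sketched as counting which context words co-occur within a window around an entity candidate, then scoring candidates by entity-like cue words. The sentence and cue list below are illustrative assumptions, not the paper's features.

```python
from collections import Counter

# Hypothetical cue words that tend to appear around technology/theory names.
CUES = {"proposed", "algorithm", "method"}

def collocations(tokens, candidate, window=2):
    """Count context words within `window` positions of each candidate occurrence."""
    ctx = Counter()
    for i, tok in enumerate(tokens):
        if tok == candidate:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            ctx.update(t for t in tokens[lo:hi] if t != candidate)
    return ctx

def entity_score(tokens, candidate):
    """Sum the counts of cue words collocated with the candidate."""
    ctx = collocations(tokens, candidate)
    return sum(n for w, n in ctx.items() if w in CUES)

sent = "we proposed the BERT algorithm as a method".split()
print(entity_score(sent, "BERT"))  # -> 2
```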

Comparison between Markov Model and Hidden Markov Model for Korean Part-of-Speech and Homograph Tagging (한국어 품사 및 동형이의어 태깅을 위한 마르코프 모델과 은닉 마르코프 모델의 비교)

  • Shin, Joon-Choul;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2013.10a / pp.152-155 / 2013
  • Korean eojeols (word units) contain many homographs, so their ambiguity is hard to resolve without looking at the surrounding eojeols (the context). To resolve this ambiguity, many machine learning algorithms that statistically select a sense from surrounding-eojeol information have been studied, and among them, approaches based on the hidden Markov model have achieved particularly good results. An algorithm built only on a plain Markov model is simpler than a hidden Markov model, so it runs faster but with lower accuracy. This paper proposes an algorithm that is based on the Markov model but partially incorporates the hidden Markov model. Experimental results show that its speed is similar to that of the Markov model, while its accuracy approaches that of the hidden Markov model.
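The hidden-Markov side of the comparison can be sketched with a small Viterbi decoder that picks the tag sequence maximising transition × emission probabilities; the tag set and all probabilities below are toy values, not the paper's model.

```python
import math

TAGS = ["N", "V"]
# Toy transition probabilities P(tag | previous tag).
TRANS = {("<s>", "N"): 0.6, ("<s>", "V"): 0.4,
         ("N", "N"): 0.3, ("N", "V"): 0.7,
         ("V", "N"): 0.6, ("V", "V"): 0.4}
# Toy emission probabilities P(word | tag).
EMIT = {("N", "배"): 0.5, ("V", "배"): 0.1,
        ("N", "먹다"): 0.05, ("V", "먹다"): 0.6}

def viterbi(words):
    # best[tag] = (log prob of best path ending in tag, that path)
    best = {t: (math.log(TRANS[("<s>", t)])
                + math.log(EMIT.get((t, words[0]), 1e-9)), [t])
            for t in TAGS}
    for w in words[1:]:
        new = {}
        for t in TAGS:
            e = math.log(EMIT.get((t, w), 1e-9))
            prev = max(TAGS, key=lambda p: best[p][0] + math.log(TRANS[(p, t)]))
            new[t] = (best[prev][0] + math.log(TRANS[(prev, t)]) + e,
                      best[prev][1] + [t])
        best = new
    return max(best.values())[1]

print(viterbi(["배", "먹다"]))  # -> ['N', 'V']
```

A Markov-only tagger would drop the per-word emission term and follow transition counts alone, which is why it is faster but less accurate.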

A Pipeline Model for Korean Morphological Analysis and Part-of-Speech Tagging Using Sequence-to-Sequence and BERT-LSTM (Sequence-to-Sequence 와 BERT-LSTM을 활용한 한국어 형태소 분석 및 품사 태깅 파이프라인 모델)

  • Youn, Jun Young;Lee, Jae Sung
    • Annual Conference on Human and Language Technology / 2020.10a / pp.414-417 / 2020
  • Recent work on Korean morphological analysis and POS tagging has mostly first performed morpheme segmentation and POS tagging on the surface forms, and then restored the base forms of the morphemes and their tags in a post-processing step using additional language resources. In this study, we divide morphological analysis and POS tagging into two stages: we first restore the morpheme base forms using a Sequence-to-Sequence model, and then perform morpheme segmentation and POS tagging using BERT, which has shown excellent performance in many areas of natural language processing. We connect the two stages as a pipeline; the proposed morphological analysis and POS tagging pipeline model achieved a syllable accuracy of 98.39%, a morpheme accuracy of 98.27%, and an eojeol accuracy of 96.31%.

Light Weight Korean Morphological Analysis Using Left-longest-match-preference model and Hidden Markov Model (좌최장일치법과 HMM을 결합한 경량화된 한국어 형태소 분석)

  • Kang, Sangwoo;Yang, Jaechul;Seo, Jungyun
    • Korean Journal of Cognitive Science / v.24 no.2 / pp.95-109 / 2013
  • With the rapid evolution of the personal device environment, the demand for natural language applications is increasing. This paper proposes a morpheme segmentation and part-of-speech tagging model, which provides the first step module of natural language processing for many languages; the model is designed for mobile devices with limited hardware resources. To reduce the number of morpheme candidates in morphological analysis, the proposed model uses a method that adds highly possible morpheme candidates to the original outputs of a conventional left-longest-match-preference method. To reduce the computational cost and memory usage, the proposed model uses a method that simplifies the process of calculating the observation probability of a word consisting of one or more morphemes in a conventional hidden Markov model.
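The left-longest-match-preference step can be sketched as a greedy scan that always takes the longest dictionary morpheme from the current position (the paper then adds further high-probability candidates for the HMM to rescore; only the greedy base segmentation is shown here). The dictionary is a toy assumption.

```python
# Toy morpheme dictionary.
MORPHS = {"나", "나는", "는", "학교", "학", "교", "에", "간다", "간", "다"}

def left_longest_match(text):
    """Greedy scan: at each position, take the longest dictionary match."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in MORPHS:
                out.append(text[i:j])
                i = j
                break
        else:                              # unknown syllable: emit as-is
            out.append(text[i])
            i += 1
    return out

print(left_longest_match("나는학교에간다"))  # -> ['나는', '학교', '에', '간다']
```

Greedy matching alone can commit to wrong splits; the paper's point is that expanding the candidate set slightly and letting a simplified HMM choose keeps accuracy high at low computational cost.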
