• Title/Summary/Keyword: POS tagging


Improved Character-Based Neural Network for POS Tagging on Morphologically Rich Languages

  • Samat Ali; Alim Murat
    • Journal of Information Processing Systems / v.19 no.3 / pp.355-369 / 2023
  • Since the widespread adoption of deep learning and the associated distributed representations, there have been substantial advances in part-of-speech (POS) tagging for many languages. When word representations are trained, morphology and word shape are typically ignored, since these representations primarily capture syntactic and semantic aspects of words. However, for tasks like POS tagging, notably in morphologically rich and resource-limited language environments, intra-word information is essential. In this study, we introduce a deep neural network (DNN) for POS tagging that learns character-level word representations and combines them with general word representations. Using the proposed approach and omitting hand-crafted features, we achieve 90.47%, 80.16%, and 79.32% accuracy on our own datasets for three morphologically rich languages: Uyghur, Uzbek, and Kyrgyz. The experimental results reveal that the presented character-based strategy greatly improves POS tagging performance for morphologically rich languages (MRLs), where character information is significant. Furthermore, compared with the previously reported state-of-the-art POS tagging results for Turkish on the METU Turkish Treebank dataset, the proposed approach improves slightly on the prior work. The experimental results thus indicate that character-based representations outperform word-level representations on MRLs. Our technique is also robust to out-of-vocabulary issues and performs better on manually edited text.
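
A minimal sketch of the core idea, combining a pooled character-level representation with a word-level embedding before tag classification, is given below; the pooling method, dimensions, random embeddings, and tag count are illustrative assumptions rather than the authors' network.

```python
# Sketch: concatenate a character-level word vector with a word-level vector
# and score POS tags over the combined representation. All parameters here
# are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
CHAR_DIM, WORD_DIM, N_TAGS = 16, 32, 12
char_vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz'-")}
char_emb = rng.normal(size=(len(char_vocab), CHAR_DIM))

def char_level_repr(word: str) -> np.ndarray:
    """Pool character embeddings into a fixed-size vector (max-pooling here;
    the paper learns this composition inside the network)."""
    ids = [char_vocab[c] for c in word.lower() if c in char_vocab]
    return char_emb[ids].max(axis=0) if ids else np.zeros(CHAR_DIM)

def word_level_repr(word: str) -> np.ndarray:
    """Stand-in for a pretrained word-embedding lookup."""
    return np.random.default_rng(abs(hash(word)) % (2**32)).normal(size=WORD_DIM)

W = rng.normal(size=(CHAR_DIM + WORD_DIM, N_TAGS))  # tag classifier weights

def tag_probs(word: str) -> np.ndarray:
    x = np.concatenate([char_level_repr(word), word_level_repr(word)])
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

print(tag_probs("kitablar").round(3))  # an unseen, morphologically complex word
```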

A Hidden Markov Model Imbedding Multiword Units for Part-of-Speech Tagging

  • Kim, Jae-Hoon; Jungyun Seo
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.7-13 / 1997
  • Morphological analysis of Korean is known to be a very complicated problem. In particular, the degree of part-of-speech (POS) ambiguity is much higher than in English. Many researchers have tried to use a hidden Markov model (HMM) to solve the POS tagging problem and have reported around 95% accuracy. However, the lack of lexical information makes it difficult for an HMM-based POS tagger to improve its performance further. To alleviate this burden, this paper proposes a method for incorporating multiword units, which are a type of lexical information, into a hidden Markov model for POS tagging. This paper also proposes a method for extracting multiword units from a POS-tagged corpus. Here, a multiword unit is defined as a unit consisting of more than one word. We found that these multiword units are a major source of POS tagging errors. Our experiments show that the error reduction rate of the proposed method is about 13%.
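
The way multiword units can be folded into an HMM tagger is sketched below under simplifying assumptions: a toy MWU lexicon is merged into single observations before Viterbi decoding, and the probabilities are placeholders, not estimates from a tagged corpus.

```python
# Sketch: merge known multiword units into single tokens, then run Viterbi.
from math import log

MWUS = {("in", "spite", "of"): "in_spite_of"}   # MWU -> merged observation

def merge_mwus(tokens):
    out, i = [], 0
    while i < len(tokens):
        for mwu, merged in MWUS.items():
            if tuple(tokens[i:i + len(mwu)]) == mwu:
                out.append(merged); i += len(mwu); break
        else:
            out.append(tokens[i]); i += 1
    return out

TAGS = ["N", "V", "P"]
def p_trans(prev, cur):           # toy transition model
    return {"N": 0.5, "V": 0.3, "P": 0.2}[cur]
def p_emit(tag, word):            # toy smoothed emission model
    table = {("in_spite_of", "P"): 0.9, ("he", "N"): 0.8, ("left", "V"): 0.7}
    return table.get((word, tag), 0.01)

def viterbi(words):
    best = [{t: (log(p_trans("<s>", t)) + log(p_emit(t, words[0])), ["<s>", t])
             for t in TAGS}]
    for w in words[1:]:
        layer = {}
        for t in TAGS:
            score, path = max(
                (s + log(p_trans(pt, t)) + log(p_emit(t, w)), p + [t])
                for pt, (s, p) in best[-1].items())
            layer[t] = (score, path)
        best.append(layer)
    return max(best[-1].values())[1][1:]

words = merge_mwus("he left in spite of rain".split())
print(list(zip(words, viterbi(words))))
```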

Syllable-based POS Tagging without Korean Morphological Analysis (형태소 분석기 사용을 배제한 음절 단위의 한국어 품사 태깅)

  • Shim, Kwang-Seob
    • Korean Journal of Cognitive Science / v.22 no.3 / pp.327-345 / 2011
  • In this paper, a new approach to Korean POS (part-of-speech) tagging is proposed. In previous work, a Korean POS tagger was regarded as a post-processor of a morphological analyzer, and such a tagger was used to determine the most likely morpheme/POS sequence among the results of morphological analysis. In the proposed approach, however, the POS tagger directly generates the most likely morpheme and POS pair sequence from the given sentence. A POS-tagged corpus of 398,632 eojeols and test data of 33,467 eojeols are used for training and evaluation, respectively. The proposed approach achieves a POS tagging accuracy of 96.31%.
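
How morpheme/POS pairs can be read directly off per-syllable labels, without a separate morphological analyzer, is sketched below; the BI label scheme, tag names, and example are illustrative assumptions, not the paper's exact encoding.

```python
# Sketch: decode per-syllable B-/I- labels into (morpheme, POS) pairs.
def syllables_to_morphemes(syllables, labels):
    """Merge B-/I- labelled syllables into (morpheme, POS) pairs."""
    morphemes = []
    for syl, lab in zip(syllables, labels):
        boundary, tag = lab.split("-", 1)
        if boundary == "B" or not morphemes:
            morphemes.append([syl, tag])
        else:                       # "I": continue the current morpheme
            morphemes[-1][0] += syl
    return [tuple(m) for m in morphemes]

# "하늘이 맑다" -> 하늘/NNG + 이/JKS + 맑/VA + 다/EF
syls   = ["하", "늘", "이", "맑", "다"]
labels = ["B-NNG", "I-NNG", "B-JKS", "B-VA", "B-EF"]
print(syllables_to_morphemes(syls, labels))
```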

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa; Nain, Neeta; Nehra, Maninder
    • Journal of Multimedia Information System / v.5 no.3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area that studies how machines can perceive and process text written in natural languages. Different tasks can be performed on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of the particular natural language. The focus of this work is part-of-speech (POS) tagging of Hindi. POS tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India, and it is among the top five most spoken languages of the world. For English and other languages, a diverse range of POS taggers is available, but these taggers cannot be applied directly to Hindi, since Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. Thus, this work presents a POS tagger for Hindi. A hybrid approach is presented which combines probability-based and rule-based methods: a unigram probability model is used to tag known words, whereas various lexical and contextual features are used to tag unknown words. Finite-state automata are constructed to describe the rules, which are then implemented with regular expressions. A tagset of 29 standard part-of-speech tags is also prepared for this task, including two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the accuracy of automatic Hindi POS tagging while bounding the requirement for a large manually built corpus: the probability-based model improves automatic tagging, and the rule-based model limits the need for an already-tagged training corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%.
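
A minimal sketch of the hybrid idea follows: a unigram model tags known words, regular-expression rules cover pattern-based tags such as date, time, and number, and a default tag handles the rest. The tiny lexicon and tag names are assumptions for illustration, not the authors' tagset.

```python
# Sketch: probability-based tagging for known words, rule-based for unknown.
import re
from collections import Counter, defaultdict

# Unigram model: for each known word keep its most frequent training tag.
tag_counts = defaultdict(Counter)
for word, tag in [("राम", "NN"), ("राम", "NNP"), ("खाता", "VB"), ("है", "VAUX")]:
    tag_counts[word][tag] += 1
unigram = {w: c.most_common(1)[0][0] for w, c in tag_counts.items()}

# Rule layer: regular expressions for pattern-based tags (date, time, number).
RULES = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+(\.\d+)?$"), "NUM"),
]

def tag(word):
    if word in unigram:                 # known word: probability-based
        return unigram[word]
    for pattern, t in RULES:            # unknown word: rule-based
        if pattern.match(word):
            return t
    return "NN"                         # default fallback tag

print([(w, tag(w)) for w in ["राम", "खाता", "है", "15/08/2018", "10:30", "42"]])
```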

A Cost Sensitive Part-of-Speech Tagging: Differentiating Serious Errors from Minor Errors

  • Son, Jeong-Woo; Noh, Tae-Gil; Park, Seong-Bae
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.1 / pp.6-14 / 2012
  • Existing taggers have treated all types of part-of-speech (POS) tagging errors equally. However, the errors are not equally important, since some of them seriously affect the performance of subsequent natural language processing while others do not. This paper aims to minimize these serious errors while retaining the overall performance of POS tagging. Two gradient loss functions are proposed to reflect the different types of errors; they are designed to assign a larger cost to serious errors and a smaller cost to minor errors. A series of experiments shows that a classifier trained with the proposed loss functions not only reduces serious errors but also achieves slightly higher accuracy than ordinary classifiers.
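
One way to realize such cost sensitivity is sketched below: a cost matrix assigns a larger penalty to serious confusions (e.g., noun vs. verb) than to minor ones, and the per-token loss is scaled accordingly. This is an assumed formulation for illustration, not the paper's exact gradient loss functions.

```python
# Sketch: cross-entropy scaled by the expected cost of the confusion made.
import numpy as np

TAGS = ["NN", "NNP", "VB", "VBD"]
# COST[i][j]: cost of predicting TAGS[j] when the gold tag is TAGS[i]
COST = np.array([
    [0.0, 0.2, 1.0, 1.0],   # NN -> NNP is minor, NN -> verb is serious
    [0.2, 0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 0.2],
    [1.0, 1.0, 0.2, 0.0],
])

def cost_sensitive_loss(probs, gold_idx):
    """Cross-entropy weighted by the expected confusion cost."""
    ce = -np.log(probs[gold_idx] + 1e-12)
    expected_cost = float(COST[gold_idx] @ probs)   # 0 when all mass on gold
    return ce * (1.0 + expected_cost)

probs = np.array([0.1, 0.1, 0.7, 0.1])          # classifier output for a token
print(cost_sensitive_loss(probs, gold_idx=0))   # gold NN, mass on VB: large loss
print(cost_sensitive_loss(probs, gold_idx=2))   # gold VB: small loss
```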

Resolving Prepositional Phrase Attachment and POS Tagging Ambiguities using a Maximum Entropy Boosting Model (최대 엔트로피 부스팅 모델을 이용한 영어 전치사구 접속과 품사 결정 모호성 해소)

  • 박성배 (Park, Seong-Bae)
    • Journal of KIISE: Software and Applications / v.30 no.5_6 / pp.570-578 / 2003
  • Maximum entropy models are promising candidates for natural language modeling. However, there are two major hurdles in applying them to real-life language problems such as prepositional phrase attachment: feature selection and high computational complexity. In this paper, we propose a maximum entropy boosting model to overcome these limitations as well as the problem of imbalanced data in natural language resources, and apply it to prepositional phrase (PP) attachment and part-of-speech (POS) tagging. In experiments on the Wall Street Journal corpus, the model achieves 84.3% accuracy for PP attachment and 96.78% accuracy for POS tagging, which are close to the state-of-the-art performance on these tasks, with only a small modeling effort.
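
A rough sketch of boosting a maximum-entropy learner is given below under strong simplifying assumptions: a weighted logistic (maximum-entropy) model is refit on AdaBoost-style reweighted examples, which also shifts weight toward cases the current ensemble gets wrong, including underrepresented ones. The toy binary data and number of rounds are illustrative; this is not the paper's model.

```python
# Sketch: AdaBoost-style reweighting around a weighted maximum-entropy learner.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def fit_maxent(X, y, w, steps=200, lr=0.5):
    """Weighted logistic (maximum-entropy) model via gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta -= lr * (X.T @ (w * (p - y))) / w.sum()
    return theta

weights = np.full(len(y), 1.0 / len(y))
ensemble = []
for _ in range(5):                       # boosting rounds
    theta = fit_maxent(X, y, weights)
    pred = (X @ theta > 0).astype(int)
    err = weights[pred != y].sum()
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    ensemble.append((alpha, theta))
    weights *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))  # emphasise mistakes
    weights /= weights.sum()

score = sum(a * np.sign(X @ t) for a, t in ensemble)
print("training accuracy:", ((score > 0).astype(int) == y).mean())
```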

Syllable-based Korean POS Tagging Based on Combining a Pre-analyzed Dictionary with Machine Learning (기분석사전과 기계학습 방법을 결합한 음절 단위 한국어 품사 태깅)

  • Lee, Chung-Hee; Lim, Joon-Ho; Lim, Soojong; Kim, Hyun-Ki
    • Journal of KIISE / v.43 no.3 / pp.362-369 / 2016
  • This study is directed toward the design of a hybrid algorithm for syllable-based Korean POS tagging. Previous syllable-based work on Korean POS tagging has relied on sequence labeling and has mostly used machine learning alone. We present a new algorithm that integrates a machine learning method with a pre-analyzed dictionary. We used the Sejong tagged corpus for training and evaluation. While the machine learning engine achieved an eojeol precision of 0.964, the proposed hybrid engine achieved an eojeol precision of 0.990. On a quiz-domain test, the machine learning engine and the proposed hybrid engine obtained 0.961 and 0.972, respectively. These results indicate that our method is effective for Korean POS tagging.
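
The hybrid lookup strategy can be sketched in a few lines: eojeols found in the pre-analyzed dictionary take their stored analyses, and only the rest fall back to the machine-learning tagger (stubbed here). The dictionary entries are illustrative assumptions, not the actual resource.

```python
# Sketch: pre-analyzed dictionary first, machine-learning tagger as fallback.
PRE_ANALYZED = {
    "하늘이": [("하늘", "NNG"), ("이", "JKS")],
    "맑다":   [("맑", "VA"), ("다", "EF")],
}

def ml_tag(eojeol):
    """Placeholder for the syllable-based sequence-labelling model."""
    return [(eojeol, "UNK")]

def tag_sentence(eojeols):
    return [PRE_ANALYZED.get(e) or ml_tag(e) for e in eojeols]

print(tag_sentence(["하늘이", "맑다", "정말로"]))
```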

A Korean POS Tagging System with Handling Corpus Errors (말뭉치 오류를 고려한 HMM 한국어 품사 태깅 시스템)

  • Seol, Yong-Soo; Kim, Dong-Joo; Kim, Kyu-Sang; Kim, Han-Woo
    • KSCI Review / v.15 no.1 / pp.117-124 / 2007
  • In statistics-based POS tagging, tagging accuracy depends on the amount of training data; moreover, even when the corpus is large enough, a manually constructed corpus always carries the possibility of errors, and the nature of language makes it difficult to collect statistically reliable data. Because the training corpus is built by hand by many people, it may contain tagging mistakes caused by some annotators' insufficient linguistic knowledge or subjective judgments, so it can contain errors beyond simple low-frequency noise, and such errors cannot be resolved by re-estimation or smoothing techniques. For hidden Markov model (HMM)-based Korean POS tagging, this paper proposes a method to resolve tagging errors that remain after re-estimation due to corpus noise: at the Viterbi decoding stage, the parts that are problematic due to data sparseness and corpus errors are detected and corrected with rules to improve the tagging result. Experimental results show that, compared with the tagging accuracy of an HMM with Viterbi decoding built from the erroneous corpus, accuracy improves after the error-correction process.
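
A minimal sketch of the correction step under assumptions: token positions whose HMM tag probability falls below a threshold, i.e., where data sparseness or corpus errors are suspected, are re-examined by hand-written rules. The rule table, threshold, and example output are illustrative.

```python
# Sketch: rule-based correction of low-confidence positions in HMM output.
CORRECTION_RULES = {
    # (word, HMM tag) -> corrected tag; entries are illustrative.
    ("가", "NNG"): "JKS",
}

def correct(tagged, threshold=0.6):
    out = []
    for word, tag, prob in tagged:
        if prob < threshold and (word, tag) in CORRECTION_RULES:
            tag = CORRECTION_RULES[(word, tag)]
        out.append((word, tag))
    return out

# (word, tag chosen by Viterbi, marginal probability of that tag)
hmm_output = [("하늘", "NNG", 0.95), ("가", "NNG", 0.41), ("맑다", "VA", 0.88)]
print(correct(hmm_output))
```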

Classification of Advertising Spam Reviews (제품 리뷰문에서의 광고성 문구 분류 연구)

  • Park, Insuk; Kang, Hanhoon; Yoo, Seong Joon
    • Annual Conference on Human and Language Technology / 2010.10a / pp.186-190 / 2010
  • This paper proposes a method for classifying advertising reviews among shopping mall user reviews. Here, an advertising review is written mainly by a vendor and contains advertising content within the review. While a few studies abroad have addressed the classification of opinion spam documents, no study has yet classified advertising reviews in Korean product reviews. In this paper, advertising reviews are classified using a Naive Bayes classifier. The feature words used for the probability computation were extracted with POS-Tagging+Bigram, POS-Tagging+Unigram, and Bigram schemes. Experimental results show that the POS-Tagging+Bigram method performed best, with an F-measure of 80.35% for advertising reviews.
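
The classification setup can be sketched as follows: features are bigrams over POS-tagged tokens, and an add-one smoothed Naive Bayes classifier scores each review. The toy training data, tags, and class names are assumptions for illustration.

```python
# Sketch: Naive Bayes over POS-Tagging+Bigram features for spam-review detection.
from collections import Counter
from math import log

def pos_bigrams(tagged):
    """Bigram features over word/POS pairs, e.g. ('배송/NNG', '빠르/VA')."""
    units = [f"{w}/{t}" for w, t in tagged]
    return list(zip(units, units[1:]))

train = [
    ([("최저가", "NNG"), ("판매", "NNG"), ("문의", "NNG")], "ad"),
    ([("배송", "NNG"), ("빠르", "VA"), ("다", "EF")], "normal"),
]

feat_counts = {"ad": Counter(), "normal": Counter()}
class_counts = Counter()
for tagged, label in train:
    class_counts[label] += 1
    feat_counts[label].update(pos_bigrams(tagged))

def classify(tagged):
    feats = pos_bigrams(tagged)
    vocab = {f for c in feat_counts.values() for f in c}
    scores = {}
    for label in class_counts:
        total = sum(feat_counts[label].values())
        score = log(class_counts[label] / sum(class_counts.values()))
        for f in feats:   # add-one smoothed likelihoods
            score += log((feat_counts[label][f] + 1) / (total + len(vocab) + 1))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify([("최저가", "NNG"), ("판매", "NNG")]))
```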

Morpheme Recovery Based on Naïve Bayes Model (NB 모델을 이용한 형태소 복원)

  • Kim, Jae-Hoon; Jeon, Kil-Ho
    • The KIPS Transactions: Part B / v.19B no.3 / pp.195-200 / 2012
  • In Korean, spelling changes of various forms must be recovered into base forms during morphological analysis, and because Korean is agglutinative, part-of-speech (POS) tagging is difficult without morphological analysis. This is one of the notorious problems in Korean morphological analysis and has been handled with morpheme recovery rules, which generate morphological ambiguity that is then resolved by POS tagging. In this paper, we propose a morpheme recovery scheme based on machine learning methods, namely Naïve Bayes models. The input features of the models are the surrounding context of the syllable in which the spelling change occurs, and the classes of the models are the recovered syllables. The POS tagging system with the proposed model achieves an F1-score of 97.5% on the ETRI tree-tagged corpus. Thus the proposed model is shown to be very useful for handling morpheme recovery in Korean.
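
A minimal sketch of the recovery model under assumptions: the features are the syllables surrounding the position where the spelling change occurred, the class is the recovered base-form syllable string, and an add-one smoothed Naïve Bayes count model selects the most likely recovery. The toy training pairs are illustrative, not the corpus-derived model.

```python
# Sketch: Naive Bayes morpheme recovery from syllable-context features.
from collections import Counter, defaultdict
from math import log

# (left syllable, changed syllable, right syllable) -> recovered base syllables
train = [
    (("", "줬", "다"), "주었"),     # 줬다 -> 주 + 었 + 다
    (("", "갔", "다"), "가았"),     # 갔다 -> 가 + 았 + 다 (illustrative)
    (("", "줬", "어"), "주었"),
]

class_counts = Counter()
feat_counts = defaultdict(Counter)   # class -> Counter of (position, syllable)
for (left, mid, right), recovered in train:
    class_counts[recovered] += 1
    feat_counts[recovered].update([("L", left), ("M", mid), ("R", right)])

def recover(left, mid, right):
    feats = [("L", left), ("M", mid), ("R", right)]
    vocab = {f for c in feat_counts.values() for f in c}
    best, best_score = None, float("-inf")
    for cls in class_counts:
        total = sum(feat_counts[cls].values())
        score = log(class_counts[cls] / sum(class_counts.values()))
        for f in feats:   # add-one smoothed likelihoods
            score += log((feat_counts[cls][f] + 1) / (total + len(vocab) + 1))
        if score > best_score:
            best, best_score = cls, score
    return best

print(recover("", "줬", "다"))   # expected: "주었"
```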