• Title/Summary/Keyword: Korean POS Tagging

Lattice-based discriminative approach for Korean morphological analysis and POS tagging (래티스상의 구조적 분류에 기반한 한국어 형태소 분석 및 품사 태깅)

  • Na, Seung-Hoon;Kim, Chang-Hyun;Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2013.10a / pp.3-8 / 2013
  • In this paper, we propose a method for Korean morphological analysis and POS tagging based on structured classification over a lattice. Given an input sentence, the proposed method consults a lexical dictionary to build a lattice in which morpheme candidates are nodes and edges connect adjacent morphemes, and then returns the morphemes on the highest-scoring path through the lattice as the analysis result. In experiments on the ETRI POS-tagged corpus, the method achieved higher Eojeol accuracy and sentence accuracy than an existing method based on first-order linear-chain CRFs.

  • PDF
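
As an illustration of the decoding step described in the entry above, here is a minimal Python sketch of highest-scoring-path search over a morpheme lattice. The candidate format and the `node_score`/`edge_score` functions are hypothetical placeholders standing in for the paper's learned discriminative model, not its actual implementation.

```python
from collections import defaultdict

def decode_lattice(candidates, node_score, edge_score, sent_len):
    """Return the highest-scoring morpheme path through a lattice.

    candidates: (start, end, morpheme, tag) tuples from a lexicon lookup.
    node_score / edge_score: stand-ins for learned weights; edge_score must also
    accept the <BOS> pseudo-candidate used below.
    """
    by_end = defaultdict(list)
    for cand in candidates:
        by_end[cand[1]].append(cand)

    bos = (0, 0, "<BOS>", "BOS")
    best = {bos: (0.0, [])}              # candidate -> (score, best path ending with it)
    for end in range(1, sent_len + 1):
        for cand in by_end[end]:
            start = cand[0]
            # Best predecessor: any already-scored candidate ending where this one starts.
            preds = [(s + edge_score(prev, cand), path)
                     for prev, (s, path) in best.items() if prev[1] == start]
            if not preds:
                continue                 # start position is unreachable
            s, path = max(preds, key=lambda x: x[0])
            best[cand] = (s + node_score(cand), path + [cand])

    finals = [(s, path) for cand, (s, path) in best.items() if cand[1] == sent_len]
    return max(finals, key=lambda x: x[0])[1] if finals else []
```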

An Analysis of Korean Dependency Relation by Homograph Disambiguation (동형이의어 분별에 의한 한국어 의존관계 분석)

  • Kim, Hong-Soon;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.219-230 / 2014
  • An analysis of dependency relations is the task of determining the governor and the dependent between words in a sentence. The dependency relation of a predicate is established by the patterns and selectional restrictions of the predicate's subcategorization. This paper proposes a method for analyzing Korean dependency relations using homograph predicates disambiguated in the morphological analysis phase. The disambiguated homograph predicates each have different patterns. In particular, by reusing the stage-transition training dictionary used during POS and homograph tagging, we propose a method of fixing the dependency relation of {noun+postposition, predicate} pairs, and we analyze the accuracy and the effect of homographs on dependency relation analysis. We used the Sejong Phrase Structured Corpus for the experiment, transformed the phrase-structure corpus into dependency structures, and tagged homographs. In the experiment, the accuracy of dependency relation analysis with homograph disambiguation is 80.38%, an increase of 0.42% over the accuracy without disambiguation. The Z-value in a statistical hypothesis test at the 1% significance level is $|Z| = 4.63 \geq z_{0.01} = 2.33$, so we conclude that homographs affect dependency relation analysis, and the stage-transition training dictionary used for POS and homograph tagging contributes 7.14% to the accuracy of dependency relation analysis.
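
For reference, the significance test quoted in the abstract above is the standard pooled two-proportion z-test; a generic Python sketch follows. The sample size `n` is a hypothetical placeholder, since the abstract does not state how many Eojeols were evaluated.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for comparing two accuracies (pooled two-proportion z-test)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Hypothetical test-set size; the actual size is not given in the abstract.
n = 100_000
z = two_proportion_z(0.8038, n, 0.7996, n)
print(abs(z) >= 2.33)   # True would indicate significance at the 1% level (one-sided)
```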

Part-Of-Speech Tagging and the Recognition of the Korean Unknown-words Based on Machine Learning (기계학습에 기반한 한국어 미등록 형태소 인식 및 품사 태깅)

  • Choi, Maeng-Sik;Kim, Hark-Soo
    • The KIPS Transactions:PartB / v.18B no.1 / pp.45-50 / 2011
  • Unknown morpheme errors in Korean morphological analysis are divided into two types: errors in which a morphological analyzer entirely fails to return any morpheme sequence, and errors in which it returns incorrect combinations of known morphemes. Most previous unknown-morpheme estimation techniques have focused only on the former. This paper proposes an unknown-morpheme estimation method that can handle both types of error. The proposed method uses an SVM (Support Vector Machine) to detect Eojeols (Korean spacing units) that may contain unknown-morpheme errors. Then, using CRFs (Conditional Random Fields), it segments morphemes from the detected Eojeols and annotates the segmented morphemes with new POS tags. In the experiments, the proposed method outperformed a conventional method based on longest matching of functional words. The experimental results show that the second type of error must also be handled to improve the performance of Korean morphological analysis.
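
A rough sketch of the two-stage idea described above, assuming off-the-shelf scikit-learn and sklearn-crfsuite models and entirely hypothetical feature functions; the paper's actual features and training setup are not given in the abstract.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
import sklearn_crfsuite

def eojeol_features(eojeol):
    # Hypothetical Eojeol-level features for detecting suspicious Eojeols.
    return {"len": len(eojeol), "first": eojeol[0], "last": eojeol[-1]}

def syllable_features(eojeol, i):
    # Hypothetical syllable-level features for CRF segmentation and tagging.
    return {"cur": eojeol[i],
            "prev": eojeol[i - 1] if i > 0 else "<BOS>",
            "next": eojeol[i + 1] if i + 1 < len(eojeol) else "<EOS>"}

def train(eojeols, error_flags, syllable_tags):
    """eojeols: Eojeol strings; error_flags: 1 if the Eojeol contains an
    unknown-morpheme error, else 0; syllable_tags: per-Eojeol B/I-style tag lists."""
    vec = DictVectorizer()
    svm = LinearSVC().fit(vec.fit_transform([eojeol_features(e) for e in eojeols]),
                          error_flags)
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit([[syllable_features(e, i) for i in range(len(e))] for e in eojeols],
            syllable_tags)
    return vec, svm, crf

def analyze(eojeol, vec, svm, crf, base_analyzer):
    # Re-segment and re-tag only the Eojeols the SVM flags as suspicious.
    if svm.predict(vec.transform([eojeol_features(eojeol)]))[0] == 1:
        return crf.predict([[syllable_features(eojeol, i) for i in range(len(eojeol))]])[0]
    return base_analyzer(eojeol)
```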

An Improved Homonym Disambiguation Model based on Bayes Theory (Bayes 정리에 기반한 개선된 동형이의어 분별 모델)

  • 김창환;이왕우
    • Journal of the Korea Computer Industry Society / v.2 no.12 / pp.1581-1590 / 2001
  • This paper presents a more advanced WSD (word sense disambiguation) model than that of J. Hur (2000): an improved statistical homonym disambiguation model based on Bayes' theorem. The model uses semantic information (co-occurrence data) obtained from the definitions of the POS-tagged UMRD-S (Ulsan university Machine Readable Dictionary, Semantic Tagged). We extracted semantic features from the context, namely the nouns, predicates, and adverbs appearing in the Korean dictionary definitions. We evaluated the accuracy of the WSD system on nine major homonym nouns and, additionally, on seven homonym predicates. The internal experiment showed an average accuracy of 98.32% for the nine homonym nouns and 99.53% for the seven homonym predicates. In addition, we tested the system on the Korean Information Base and ETRI's POS-tagged corpus. This external experiment showed an average accuracy of 84.42% for the nine nouns on unseen sentences from the Korean Information Base and the ETRI corpus, and 70.81% for the seven predicates on the Sejong Project POS-tagged corpus (3.5 million phrases).

  • PDF
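
The Bayes-theorem sense selection described in the entry above can be illustrated with a small naive-Bayes sketch over co-occurrence counts. The data format, add-one smoothing, and class structure below are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict
from math import log

class BayesWSD:
    """Pick the most probable sense of a homograph from its context words."""

    def __init__(self):
        self.sense_count = defaultdict(int)                       # C(sense)
        self.cooc_count = defaultdict(lambda: defaultdict(int))   # C(word, sense)

    def train(self, examples):
        # examples: iterable of (sense, [context words]) pairs, e.g. extracted
        # from dictionary definitions or a sense-tagged corpus.
        for sense, context in examples:
            self.sense_count[sense] += 1
            for w in context:
                self.cooc_count[sense][w] += 1

    def disambiguate(self, context):
        total = sum(self.sense_count.values())
        vocab = {w for counts in self.cooc_count.values() for w in counts}
        best, best_score = None, float("-inf")
        for sense, c in self.sense_count.items():
            denom = sum(self.cooc_count[sense].values()) + len(vocab) + 1
            score = log(c / total)                                # log P(sense)
            for w in context:                                     # + sum log P(word | sense)
                score += log((self.cooc_count[sense][w] + 1) / denom)  # add-one smoothing
            if score > best_score:
                best, best_score = sense, score
        return best
```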

Light Weight Korean Morphological Analysis Using Left-longest-match-preference model and Hidden Markov Model (좌최장일치법과 HMM을 결합한 경량화된 한국어 형태소 분석)

  • Kang, Sangwoo;Yang, Jaechul;Seo, Jungyun
    • Korean Journal of Cognitive Science / v.24 no.2 / pp.95-109 / 2013
  • With the rapid evolution of the personal device environment, demand for natural language applications is increasing. This paper proposes a morpheme segmentation and part-of-speech tagging model, the first-step module of natural language processing for many applications, designed for mobile devices with limited hardware resources. To reduce the number of morpheme candidates in morphological analysis, the proposed model adds highly probable morpheme candidates to the original outputs of a conventional left-longest-match-preference method. To reduce computational cost and memory usage, it simplifies the calculation of the observation probability of a word consisting of one or more morphemes in a conventional hidden Markov model.

  • PDF
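
For the baseline step named in the entry above, here is a minimal sketch of left-longest-match segmentation against a morpheme lexicon. The toy lexicon and the maximum match length are assumptions, and the paper's additional candidate generation and HMM rescoring are not shown.

```python
def left_longest_match(eojeol, lexicon, max_len=8):
    """Greedy left-longest-match segmentation of an Eojeol.

    Returns one segmentation; the paper augments this output with further
    highly probable candidates and rescores them with an HMM (omitted here).
    """
    morphemes, i = [], 0
    while i < len(eojeol):
        for j in range(min(len(eojeol), i + max_len), i, -1):
            if eojeol[i:j] in lexicon:        # prefer the longest dictionary entry
                morphemes.append(eojeol[i:j])
                i = j
                break
        else:                                 # no match: emit a single character
            morphemes.append(eojeol[i])
            i += 1
    return morphemes

# Example with a toy lexicon (illustrative only).
print(left_longest_match("학교에", {"학교", "학", "에"}))   # ['학교', '에']
```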

New Text Steganography Technique Based on Part-of-Speech Tagging and Format-Preserving Encryption

  • Mohammed Abdul Majeed;Rossilawati Sulaiman;Zarina Shukur
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.170-191 / 2024
  • The transmission of confidential data using cover media is called steganography. The three requirements of any effective steganography system are high embedding capacity, security, and imperceptibility. The text file's structure, which makes syntax and grammar more visually obvious than in other media, contributes to its poor imperceptibility. Text steganography is regarded as the most challenging carrier for hiding secret data because of its insufficient redundant data compared with other digital objects. Unicode characters, especially non-printing or invisible ones, are employed for hiding data by mapping a specific number of secret data bits to each character and inserting the character into spaces in the cover text. These characters, however, offer only limited space for embedding secret data. Previous studies that used Unicode characters in text steganography focused on increasing the data hiding capacity despite the insufficient redundant data in a text file. A sequential embedding pattern is often selected and applied to all available positions in the cover text, which negatively affects the text steganography system's imperceptibility and security. Thus, this study attempts to address these limitations using the part-of-speech (POS) tagging technique combined with randomization in data hiding. Combining these two techniques allows the Unicode characters to be inserted in randomized patterns at specific positions in the cover text, increasing data hiding capacity with minimal effect on imperceptibility and security. Format-preserving encryption (FPE) is also used to encrypt the secret message without changing its size before embedding. Compared with existing techniques, the results demonstrate that the proposed technique fulfils the capacity, imperceptibility, and security requirements of the cover file.
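
A simplified sketch of the embedding idea, under stated assumptions: POS tags come from any external tagger, the secret bit string is assumed to have been encrypted upstream (e.g. with FPE, not shown), zero-width characters stand in for the paper's Unicode carriers, and the carrier tag set and function names are illustrative.

```python
import random

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}   # invisible carriers for secret bits

def embed(tagged_words, secret_bits, key, carrier_tags=("NN", "NNS", "NNP")):
    """Hide secret_bits at randomly chosen noun positions in the cover text.

    tagged_words: list of (word, POS tag) pairs from any POS tagger.
    secret_bits:  bit string, assumed already encrypted (e.g. with FPE) upstream.
    key:          shared secret seeding the position permutation.
    """
    positions = [i for i, (_, tag) in enumerate(tagged_words) if tag in carrier_tags]
    random.Random(key).shuffle(positions)      # keyed, reproducible randomization
    if len(positions) < len(secret_bits):
        raise ValueError("cover text too small for the secret")

    words = [w for w, _ in tagged_words]
    for bit, pos in zip(secret_bits, positions):
        words[pos] += ZERO_WIDTH[bit]          # append an invisible carrier character
    return " ".join(words)
```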

POS-Tagging Model Combining Rules and Word Probability (규칙과 어절 확률을 이용한 혼합 품사 태깅 모델)

  • Hwang, Myeong-Jin;Kang, Mi-Young;Kwon, Hyuk-Chul
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.11-15 / 2006
  • This paper proposes a hybrid POS-tagging model that combines the strengths and compensates for the weaknesses of two models: a rule-based tagging model whose rules are expressed with positive and negative weights, and a statistical tagging model that estimates Eojeol probabilities from morpheme unigram information and category patterns within the Eojeol. The hybrid model first applies rule-based tagging and then resolves the cases the rules leave unresolved with the statistical method. Because the Eojeol probability is computed only from a parameter set indexed by intra-Eojeol category patterns and from morpheme unigrams, the model needs only a small statistical dictionary, unlike other statistics-based approaches, and the use of category-pattern information also reduces data sparseness, the main weakness of statistical approaches. In particular, since the statistical model is based on Eojeol probabilities, it reflects the characteristics of Korean well. The proposed hybrid model corrects 24% of the Eojeols that were previously returned as errors because two or more candidate sequences remained after the rules were applied.

  • PDF
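
The "rules first, statistics as fallback" control flow described in the entry above can be sketched as follows. The rule representation, the candidate format, and the `eojeol_probability` stub are assumptions for illustration only.

```python
def tag_eojeol(eojeol, candidates, rules, eojeol_probability):
    """Hybrid tagging: weighted rules first, Eojeol probability as a fallback.

    candidates:         possible (morpheme, tag) sequences for the Eojeol.
    rules:              (predicate, weight) pairs; positive weights favor a
                        candidate, negative weights penalize it (illustrative form).
    eojeol_probability: statistical estimate of P(candidate | eojeol) computed from
                        morpheme unigrams and intra-Eojeol category patterns (stubbed).
    """
    # 1. Rule stage: keep only the candidates with the highest rule score.
    scored = [(sum(w for pred, w in rules if pred(eojeol, cand)), cand)
              for cand in candidates]
    top = max(score for score, _ in scored)
    survivors = [cand for score, cand in scored if score == top]
    if len(survivors) == 1:
        return survivors[0]
    # 2. Statistical stage: resolve the remaining ambiguity by Eojeol probability.
    return max(survivors, key=lambda cand: eojeol_probability(eojeol, cand))
```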

A Dynamic Link Model for Korean POS-Tagging (한국어 품사 태깅을 위한 다이내믹 링크 모델)

  • Hwang, Myeong-Jin;Kang, Mi-Young;Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology / 2007.10a / pp.282-289 / 2007
  • Data sparseness is a major issue in statistical POS tagging. For agglutinative languages such as Korean and Turkish, in which an Eojeol (word) consists of multiple morphemes, the problem is even more severe. Some previous studies tried to overcome this by treating a sentence in an agglutinative language as a sequence of morphemes rather than a sequence of Eojeols, but because Eojeol-level properties are lost, it becomes difficult to obtain statistics such as changes in an Eojeol's grammatical category due to derivation and statistics between Eojeols. This paper proposes a probabilistic model that keeps Eojeol-level information while mitigating data sparseness by devising an efficient way of computing inter-Eojeol transition probabilities. Considering the morpho-syntactic characteristics of Korean, most of the information needed to compute an inter-Eojeol transition probability can be obtained from only two transition links: the last morpheme of the preceding Eojeol paired with either the first or the last morpheme of the following Eojeol, and depending on the context only one of the two links is needed. Based on this observation, we propose a 'dynamic link model' that uses rules to select one of the two transition links for the transition probability computation. Using only morpheme-POS bi-grams, the model achieves 96.60% accuracy on the test corpus, which is comparable to other models evaluated on the same corpus that use richer context such as morpheme-POS tri-grams.

  • PDF
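
One way to read the two-link idea above in code is the minimal sketch below, with hypothetical interfaces: the rule that chooses the link and the probability estimation are the paper's contribution and are only stubbed here.

```python
def transition_probability(prev_eojeol, next_eojeol, bigram_prob, prefer_first_link):
    """Inter-Eojeol transition score in the spirit of the dynamic link model.

    prev_eojeol / next_eojeol: (morpheme, POS) sequences of two adjacent Eojeols.
    bigram_prob:               P(pos2 | pos1) estimated from a POS-tagged corpus.
    prefer_first_link:         stand-in for the paper's rule that decides, from
                               morpho-syntactic context, which of the two links to use.
    """
    last_pos = prev_eojeol[-1][1]                          # POS of the last morpheme
    if prefer_first_link(prev_eojeol, next_eojeol):
        return bigram_prob(last_pos, next_eojeol[0][1])    # link to the first morpheme
    return bigram_prob(last_pos, next_eojeol[-1][1])       # link to the last morpheme
```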

Korean POS and Homonym Tagging System using HMM (HMM을 이용한 한국어 품사 및 동형이의어 태깅 시스템)

  • Kim, Dong-Myoung;Bae, Young-Jun;Ock, Cheol-Young;Choi, Ho-Soep;Kim, Chang-Hwan
    • Annual Conference on Human and Language Technology / 2008.10a / pp.12-16 / 2008
  • In previous natural language processing research, POS tagging and homograph tagging have been treated as separate problems, and different models have been used for each. Noting that both problems depend on contextual information, this paper implements a system that solves both with a hidden Markov model. The proposed system extracts unigrams and bigrams from about 11 million Eojeols of the Sejong corpus annotated with POS tags and homograph senses, building an Eojeol emission-probability dictionary from the unigrams and a transition-probability dictionary from the bigrams. Evaluated on 261,360 Eojeols of held-out (non-training) corpus, the system achieved accuracies of 99.74% for POS tagging, 97.41% for homograph tagging, and 97.78% for combined POS and homograph tagging.

  • PDF
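
A minimal sketch of the dictionary-building step described above: maximum-likelihood emission and transition tables from a corpus whose labels jointly encode the POS tag and the homograph sense. The data format and the absence of smoothing are assumptions of this sketch, not the paper's setup.

```python
from collections import Counter

def build_hmm_tables(tagged_sentences):
    """Estimate HMM emission and transition tables from a tagged corpus.

    tagged_sentences: lists of (eojeol, label) pairs, where each label jointly
    encodes the POS tag and the homograph sense (a single HMM state).
    """
    label_count, context_count = Counter(), Counter()
    emission_count, bigram_count = Counter(), Counter()
    for sent in tagged_sentences:
        prev = "<BOS>"
        for eojeol, label in sent:
            label_count[label] += 1
            emission_count[(label, eojeol)] += 1     # unigram: for P(eojeol | label)
            context_count[prev] += 1
            bigram_count[(prev, label)] += 1         # bigram:  for P(label | prev)
            prev = label
    emit = {k: c / label_count[k[0]] for k, c in emission_count.items()}
    trans = {k: c / context_count[k[0]] for k, c in bigram_count.items()}
    return emit, trans
```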

Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • 이도길;이상주;임희석;임해창
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.358-371 / 2003
  • Automatic word spacing is the process of deciding the correct boundaries between words in a sentence that contains spacing errors. It is very important for increasing readability and for conveying the accurate meaning of the text to the reader. Previous statistical approaches to automatic word spacing do not consider the previous spacing state and thus cannot avoid estimating inaccurate probabilities. In this paper, we propose two statistical word spacing models that solve this problem. The proposed models are based on the observation that automatic word spacing can be regarded as a classification problem such as POS tagging. By generalizing hidden Markov models, the models can consider broader context and estimate more accurate probabilities. We evaluated the proposed models under a wide range of experimental conditions in order to compare them with the current state of the art, and we also provide a detailed error analysis. The experimental results show that the proposed models achieve a syllable-unit accuracy of 98.33% and an Eojeol-unit precision of 93.06% under the evaluation method that considers compound nouns.
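
To illustrate the framing above of spacing as syllable tagging that keeps the previous spacing decision in the context, here is a small greedy sketch. The real models generalize HMMs and use richer context and proper decoding; the feature shape and the greedy decision rule below are purely illustrative.

```python
from collections import Counter

def train_spacing(sentences):
    """Collect (previous tag, syllable, next syllable, tag) counts from spaced text.

    A syllable's tag is 1 if a space follows it, else 0; the previous tag is kept
    in the context, which is what earlier statistical models ignored.
    """
    counts = Counter()
    for sent in sentences:
        syllables, tags = [], []
        for ch in sent:
            if ch == " ":
                if tags:
                    tags[-1] = 1              # mark a space after the last syllable
            else:
                syllables.append(ch)
                tags.append(0)
        prev_tag = 1                          # the sentence start behaves like a space
        for i, (syl, tag) in enumerate(zip(syllables, tags)):
            nxt = syllables[i + 1] if i + 1 < len(syllables) else "<EOS>"
            counts[(prev_tag, syl, nxt, tag)] += 1
            prev_tag = tag
    return counts

def insert_spaces(text, counts):
    """Greedily re-space an unspaced string using the learned counts."""
    out, prev_tag = [], 1
    for i, syl in enumerate(text):
        nxt = text[i + 1] if i + 1 < len(text) else "<EOS>"
        tag = 1 if counts[(prev_tag, syl, nxt, 1)] > counts[(prev_tag, syl, nxt, 0)] else 0
        out.append(syl + (" " if tag and nxt != "<EOS>" else ""))
        prev_tag = tag
    return "".join(out)
```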