• Title/Summary/Keyword: 품사부착 (part-of-speech tagging)


A Homonym Disambiguation System Based on Statistical Model Using Sense Category and Distance Weights (의미범주 및 거리 가중치를 고려한 통계기반 동형이의어 분별 시스템)

  • Kim, Jun-Su; Kim, Chang-Hwan; Lee, Wang-Woo; Lee, Soo-Dong; Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2001.10d / pp.487-493 / 2001
  • In this paper, we analyze the open-test results of a statistics-based homonym disambiguation system that applies Bayes' theorem, and propose a sense-category weight and a distance weight over neighboring eojeols to improve precision. The system, which uses sense information built from a sense-tagged dictionary definition corpus (1.2 million eojeols), achieved an average precision of 98.32% on the 200 most frequent homonyms when disambiguating homonyms appearing in dictionary definition sentences. For the open test, we selected 49 of these 200 homonyms (31 nominals, 18 predicates) and extracted 50,703 sentences containing them from the Sejong POS-tagged corpus (3.5 million eojeols). Applying the sense-category and distance weights to the five eojeols on each side of the target homonym improved precision by 2.93% over the baseline statistical model. (A minimal sketch of the weighted scoring appears after this entry.)

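The weighted Bayesian scoring described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the co-occurrence tables, the smoothing constant, and the particular distance-weight values are assumptions.

```python
import math
from collections import defaultdict

# Hypothetical statistics learned from a sense-tagged corpus. The smoothing
# constant and the names below are illustrative, not the paper's parameters.
cooc_prob = defaultdict(lambda: 1e-6)    # (sense, context word) -> P(word | sense)
sense_prior = defaultdict(lambda: 1e-6)  # sense -> P(sense)

def disambiguate(senses, left_context, right_context,
                 dist_weights=(1.0, 0.8, 0.6, 0.4, 0.2)):
    """Pick the sense maximizing a Bayes score in which eojeols nearer to
    the homonym (up to 5 on each side) contribute more."""
    best_sense, best_score = None, float("-inf")
    for sense in senses:
        score = math.log(sense_prior[sense])
        # Nearest left eojeol first, then outward.
        for d, word in enumerate(reversed(left_context[-5:])):
            score += dist_weights[d] * math.log(cooc_prob[(sense, word)])
        for d, word in enumerate(right_context[:5]):
            score += dist_weights[d] * math.log(cooc_prob[(sense, word)])
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense
```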

A Measurement of Lexical Relationship for Concept Network Based on Semantic Features (의미속성 기반의 개념망을 위한 어휘 연관도 측정)

  • Ock, Eun-Joo; Lee, Wang-Woo; Lee, Soo-Dong; Ock, Cheol-Young
    • Annual Conference on Human and Language Technology / 2001.10d / pp.146-154 / 2001
  • In this paper, we measure lexical relatedness based on the distribution of semantic features extractable from dictionary definitions, for use in constructing a concept network. First, semantic features are extracted from a POS-tagged and sense-tagged corpus of about 112,000 dictionary definitions. The extractable semantic features include nominals, adverbials, and predicates; as a first step, this paper targets predicates in a modifying relation with nominals, namely those bearing the adnominal endings 'ㄴ/은/는'. After cleaning the roughly 45,000 extracted co-occurrence pairs, the relatedness between nominals and predicates is measured with mutual information (MI) from information theory. To mitigate data sparseness, the nominals and predicates in modifying relations were grouped into similar-word sets around basic vocabulary. The lexical relatedness measured from this distribution of semantic features can be used to build a hierarchy among concepts by computing the degree to which they share semantic features. (A small sketch of the MI computation follows this entry.)

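A minimal sketch of the mutual information computation over (nominal, predicate) co-occurrence pairs. The unsmoothed formula and the toy pairs are illustrative assumptions, and the paper's similar-word grouping step is omitted.

```python
import math
from collections import Counter

def pmi_table(pairs):
    """Pointwise mutual information for (noun, predicate) co-occurrence pairs:
    MI(x, y) = log2( P(x, y) / (P(x) * P(y)) )."""
    pair_counts = Counter(pairs)
    noun_counts = Counter(n for n, _ in pairs)
    pred_counts = Counter(p for _, p in pairs)
    total = len(pairs)
    mi = {}
    for (n, p), c in pair_counts.items():
        p_xy = c / total
        p_x = noun_counts[n] / total
        p_y = pred_counts[p] / total
        mi[(n, p)] = math.log2(p_xy / (p_x * p_y))
    return mi

# Example pairs as extracted from definitions, e.g. (nominal, modifying predicate):
print(pmi_table([("사과", "둥근"), ("사과", "둥근"), ("달", "둥근")]))
```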

Probabilistic Segmentation and Tagging of Unknown Words (확률 기반 미등록 단어 분리 및 태깅)

  • Kim, Bogyum; Lee, Jae Sung
    • Journal of KIISE / v.43 no.4 / pp.430-436 / 2016
  • Processing of unknown words such as proper nouns and newly coined words is important for a morphological analyzer that must handle documents from various domains. In this study, we propose a segmentation and tagging method for unknown Korean words within a 3-step probabilistic morphological analysis. To guess unknown words, the method exploits the rich suffixes that attach to open-class words such as general nouns and proper nouns. We learn suffix patterns from a morpheme-tagged corpus and calculate their probabilities for segmenting and tagging unknown open-class words in the probabilistic morphological analysis model. Experimental results show that unknown word processing improves greatly on documents containing many unregistered words. (A rough sketch of the suffix-based guessing appears below.)
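
A rough sketch of suffix-pattern learning and unknown-word tag guessing in the spirit of the method above. The maximum suffix length, the longest-suffix back-off order, and the fallback tag are assumptions; the full 3-step probabilistic model is not reproduced here.

```python
from collections import Counter, defaultdict

def learn_suffix_patterns(tagged_words, max_len=3):
    """tagged_words: iterable of (word, tag) pairs for open-class words.
    Returns a map from suffix string to a Counter of observed tags."""
    counts = defaultdict(Counter)
    for word, tag in tagged_words:
        for k in range(1, min(max_len, len(word)) + 1):
            counts[word[-k:]][tag] += 1
    return counts

def guess_tag(word, suffix_counts, max_len=3):
    """Back off from the longest known suffix; return (tag, P(tag | suffix))."""
    for k in range(min(max_len, len(word)), 0, -1):
        tags = suffix_counts.get(word[-k:])
        if tags:
            tag, c = tags.most_common(1)[0]
            return tag, c / sum(tags.values())
    return "NNG", 0.0  # fall back to general noun with zero confidence
```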

Identifying Optimum Features for Abbreviation Disambiguation in Biomedical Domain (생의학 도메인에서 약어 중의성 해결을 위한 최적 자질의 규명)

  • Lim, Ho-Gun; Seo, Hee-Cheol; Kim, Seon-Ho; Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2004.10d / pp.173-180 / 2004
  • In the biomedical domain, abbreviation disambiguation is the task of identifying the original long form of an abbreviation appearing in a biomedical document. This paper experimentally explores which features are best suited to the task. The context used for disambiguation is divided into topical (global) context and local context, and various features are derived from each through stemming, stopword removal, and POS tagging. To overcome the shortage of experimental data for biomedical abbreviation disambiguation, training and test data were constructed automatically, and two abbreviation lists from previous work were used for evaluation. A Naive Bayesian model was used to assess the usefulness of each feature set. In the experiments, topical context outperformed local context; within topical context, removing only stopwords gave the best results, 94.2% and 96.2% on the two test sets, and combining topical and local context yielded further gains of 1.8% and 0.3%, respectively. (A minimal Naive Bayes sketch follows this entry.)

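A minimal Naive Bayes setup over topical (bag-of-words) context with stopword removal, which the abstract reports as the best configuration. scikit-learn and the toy "PCA" examples are illustrative stand-ins, not the authors' data or tooling.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: document contexts containing the abbreviation
# "PCA", each labeled with the intended long form.
texts = ["variance components were analyzed across gene expression profiles",
         "patient controlled analgesia was administered after the operation"]
long_forms = ["principal component analysis", "patient controlled analgesia"]

# Bag-of-words over the topical context, stopwords removed, Naive Bayes on top.
clf = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
clf.fit(texts, long_forms)
print(clf.predict(["analgesia dose was reduced after surgery"]))
```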

Translation Dictionary Tuning System By using of Auto-Evaluation Method (자동 평가 방법을 이용한 번역 지식 튜닝 시스템)

  • Park, Eun-Jin; Jin, Yun; Kwon, Oh-Woog; Wu, Ying-Shun; Kim, Young-Kil
    • Annual Conference on Human and Language Technology / 2011.10a / pp.147-150 / 2011
  • This paper proposes a translation dictionary tuning method in which sentences suspected of containing errors are automatically extracted from a parallel corpus, and multiple dictionary builders tune the dictionary while directly using the machine translation system. Builders measure the BLEU of the MT output against the reference translations in the parallel corpus, and the BLEU difference before and after each dictionary revision is presented quantitatively, guiding them toward a high-quality translation dictionary. Our experiments show that, for an MT system that already has a large dictionary, this approach causes fewer dictionary side effects and contributes more to translation quality than the conventional method of bulk-building entries from unregistered words and translation patterns extracted from large corpora. For a Chinese-to-Korean MT system, we compared two outcomes by automatic evaluation: two builders tuning 2,193 Chinese sentences for two weeks, versus 7,200 entries produced by three builders who selected unregistered words from 20,000 candidate entries extracted from a 150,000-sentence corpus and attached POS tags and target words. Adding the unregistered words improved BLEU by +3, whereas tuning about 2,000 sentences improved BLEU by +12. (A small sketch of the BLEU-delta check follows this entry.)

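The before/after BLEU check that drives the tuning loop can be sketched as below. NLTK's corpus_bleu and the toy Korean sentences are illustrative stand-ins for whatever scorer and data the authors used.

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu(references, hypotheses):
    # One reference per sentence; bigram BLEU keeps this toy example nonzero.
    return corpus_bleu([[r] for r in references], hypotheses, weights=(0.5, 0.5))

refs = [["그", "는", "학교", "에", "갔다"]]
before = [["그", "는", "학교", "갔다"]]       # MT output before a dictionary fix
after = [["그", "는", "학교", "에", "갔다"]]  # MT output after the fix

# A positive delta tells the builder the dictionary revision helped.
print(f"BLEU delta: {bleu(refs, after) - bleu(refs, before):+.4f}")
```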

Coreference Resolution for Korean using Mention Pair with SVM (SVM 기반의 멘션 페어 모델을 이용한 한국어 상호참조해결)

  • Choi, Kyoung-Ho; Park, Cheon-Eum; Lee, Changki
    • KIISE Transactions on Computing Practices / v.21 no.4 / pp.333-337 / 2015
  • In this paper, we present a coreference resolution system for Korean that uses a mention-pair model with an SVM. The system can extract mentions from documents automatically tagged with named-entity information, dependency trees, and POS tags. To train the system and test its performance, we built a corpus of 214 documents annotated with coreference tags, drawn from online news and Wikipedia: 14 documents from online news and 200 question-and-answer documents from Wikipedia. On this corpus, the system achieved 55.68% MUC-F1, 57.19% B-cubed F1, and 61.75% CEAF-E F1. (A minimal mention-pair sketch appears below.)
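
A minimal mention-pair classifier: featurize candidate mention pairs and train an SVM to decide coreferent or not. The feature set, the mention schema, and the toy examples are assumptions; the paper's features built over NE tags, dependency trees, and POS tags are richer.

```python
from sklearn.svm import SVC

def pair_features(m1, m2):
    """m: dict with 'text', 'head', 'sent_idx' keys (hypothetical schema)."""
    return [
        int(m1["text"] == m2["text"]),         # exact string match
        int(m1["head"] == m2["head"]),         # head-word match
        abs(m1["sent_idx"] - m2["sent_idx"]),  # sentence distance
    ]

# Toy labeled mention pairs: 1 = coreferent, 0 = not.
pairs = [({"text": "이순신", "head": "이순신", "sent_idx": 0},
          {"text": "이순신", "head": "이순신", "sent_idx": 2}),
         ({"text": "이순신", "head": "이순신", "sent_idx": 0},
          {"text": "거북선", "head": "거북선", "sent_idx": 2})]
labels = [1, 0]

clf = SVC(kernel="linear")
clf.fit([pair_features(a, b) for a, b in pairs], labels)
```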

PPEditor: Semi-Automatic Annotation Tool for Korean Dependency Structure (PPEditor: 한국어 의존구조 부착을 위한 반자동 말뭉치 구축 도구)

  • Kim, Jae-Hoon; Park, Eun-Jin
    • The KIPS Transactions: Part B / v.13B no.1 s.104 / pp.63-70 / 2006
  • In general, a corpus contains a great deal of linguistic information and is widely used in natural language processing and computational linguistics. Creating such a corpus, however, is expensive, labor-intensive, and time-consuming. To alleviate this problem, annotation tools for building corpora rich in linguistic information are indispensable. In this paper, we design and implement an annotation tool for building a Korean dependency-tagged corpus. The ideal would be to create the corpus fully automatically, without annotator intervention, but in practice this is impossible. Like most other annotation tools, the proposed tool is semi-automatic: it is designed for editing the errors produced by base analyzers such as a part-of-speech tagger and a (partial) parser, for avoiding repetitive work during error correction, and for ease of use. Using the tool, 10,000 Korean sentences of more than 20 words each were annotated with dependency structures by eight annotators working four hours a day over two months. We are confident that the tool yields accurate and consistent annotations while reducing labor and time.

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung; Song, Min-chae; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various deep learning models have been actively applied to English texts. In deep-learning-based sentiment analysis of English, the sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the model, where a word vector is a vector representation of a token obtained by splitting a sentence on space characters. One widely used method for deriving word vectors is Word2Vec, which produced the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras.

Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with highly developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Given the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit of Korean sentiment analysis. Therefore, in this study, we feed a deep learning model 'morpheme vectors' rather than the 'word vectors' commonly used for English text. A morpheme vector is a vector representation of a morpheme, derived by applying an existing word vector model to sentences divided into their constituent morphemes.

Several questions arise at this point. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors, with respect to the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on word form, to Korean with its high ratio of homonyms? Does text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can deep learning achieve a satisfactory level of classification accuracy for Korean sentiment analysis?

To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data both from the target domain and from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, the data source: Naver News, with high grammatical correctness, versus Naver Shopping cosmetics reviews, with low grammatical correctness. Second, the degree of preprocessing: sentence splitting only, versus additional spelling and spacing corrections after sentence splitting. Third, the form of input fed to the word vector model: the morphemes alone, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of included morphemes, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300 (a minimal sketch follows this entry).

The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including an incomprehensible category, lead to better classification accuracy. Attaching POS tags, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme appear to have no definite influence on classification accuracy.
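
The CBOW settings quoted in the abstract (context window 5, vector dimension 300, optional POS-tag attachment) can be reproduced with gensim as below. The tokenized toy sentences and the tag format are illustrative assumptions; the authors' corpus and tagger are not available here.

```python
from gensim.models import Word2Vec

# Sentences already split into morphemes, optionally with POS tags attached
# (the paper compares both forms), e.g. "예쁘/VA" vs. plain "예쁘".
sentences = [["배송/NNG", "이/JKS", "빠르/VA", "고/EC", "예쁘/VA", "어요/EF"],
             ["향/NNG", "이/JKS", "좋/VA", "아요/EF"]]

model = Word2Vec(sentences,
                 vector_size=300,  # vector dimension 300
                 window=5,         # context window 5
                 sg=0,             # sg=0 selects CBOW
                 min_count=1)      # minimum-frequency threshold (varied in the paper)

morpheme_vector = model.wv["예쁘/VA"]  # input row for the CNN's embedding layer
```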

Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • 이도길; 이상주; 임희석; 임해창
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.358-371 / 2003
  • Automatic word spacing is the process of deciding the correct boundaries between words in a sentence containing spacing errors. It is very important for increasing readability and for conveying the accurate meaning of a text to the reader. Previous statistical approaches to automatic word spacing do not consider the previous spacing state and therefore cannot avoid estimating inaccurate probabilities. In this paper, we propose two statistical word spacing models that solve this problem. The models are based on the observation that automatic word spacing can be regarded as a classification problem, like POS tagging: by generalizing hidden Markov models, they can consider broader context and estimate more accurate probabilities. We experimented with the proposed models under a wide range of conditions to compare them with the state of the art, and we provide a detailed error analysis. The proposed models achieve a syllable-unit accuracy of 98.33% and an eojeol-unit precision of 93.06% under an evaluation method that accounts for compound nouns. (A toy sketch of spacing as sequential classification follows.)
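
A toy sketch of word spacing framed as per-syllable classification in which each decision conditions on the previous spacing decision, the property the models above add over earlier approaches. The greedy decoder and the probability function are stand-ins; the papers' generalized-HMM parameterization is richer.

```python
def space_sentence(syllables, p_space):
    """p_space(prev_decision, left_syl, right_syl) -> P(space between them)."""
    out, prev = [syllables[0]], 0
    for left, right in zip(syllables, syllables[1:]):
        tag = 1 if p_space(prev, left, right) > 0.5 else 0  # 1 = insert space
        if tag:
            out.append(" ")
        out.append(right)
        prev = tag  # the next decision sees the previous spacing state
    return "".join(out)

# Toy model: likely space after a postposition syllable, never twice in a row.
print(space_sentence(list("나는학교에갔다"),
                     lambda prev, l, r: 0.9 if l in "는에" and not prev else 0.1))
# -> "나는 학교에 갔다"
```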

A Robust Pattern-based Feature Extraction Method for Sentiment Categorization of Korean Customer Reviews (강건한 한국어 상품평의 감정 분류를 위한 패턴 기반 자질 추출 방법)

  • Shin, Jun-Soo; Kim, Hark-Soo
    • Journal of KIISE: Software and Applications / v.37 no.12 / pp.946-950 / 2010
  • Many sentiment categorization systems based on machine learning use morphological analyzers to extract linguistic features from sentences. However, morphological analyzers generally perform poorly on customer reviews, because online reviews contain many spacing and spelling errors, and this in turn degrades the sentiment categorization systems built on top of them. To resolve this problem, we propose a feature extraction method based on simple longest matching of eojeol (the Korean spacing unit) patterns and phoneme patterns. Both kinds of patterns are automatically constructed from a large POS-tagged corpus. Eojeol patterns consist of eojeols containing content words such as nouns and verbs. Phoneme patterns consist of the leading consonant-vowel pairs of predicate words such as verbs and adjectives, since spelling errors seldom occur in leading consonants and vowels. To evaluate the method, we implemented a sentiment categorization system with an SVM (Support Vector Machine) as the machine learner. In experiments on Korean customer reviews, the system using the proposed method outperformed one using a morphological analyzer as the feature extractor. (A small sketch of the phoneme-pattern idea follows.)
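
The phoneme patterns rest on the fact that the leading consonant and vowel of a precomposed Hangul syllable can be recovered by Unicode arithmetic while the error-prone coda is discarded. The sketch below shows only this extraction step, not the full longest-matching feature pipeline.

```python
# Initial consonants (choseong) and vowels (jungseong) in Unicode order.
CHO = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ","ㅇ","ㅈ","ㅉ",
       "ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]
JUNG = ["ㅏ","ㅐ","ㅑ","ㅒ","ㅓ","ㅔ","ㅕ","ㅖ","ㅗ","ㅘ","ㅙ","ㅚ","ㅛ","ㅜ",
        "ㅝ","ㅞ","ㅟ","ㅠ","ㅡ","ㅢ","ㅣ"]

def leading_cv(word):
    """Map each syllable to its (lead consonant, vowel) pair, ignoring the coda."""
    pairs = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:  # precomposed Hangul syllable block
            pairs.append(CHO[code // 588] + JUNG[(code % 588) // 28])
    return "".join(pairs)

# "좋아요" and its common misspelling "조아요" yield the same leading pattern,
# so the feature survives the spelling error.
print(leading_cv("좋아요"), leading_cv("조아요"))  # both print "ㅈㅗㅇㅏㅇㅛ"
```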