• Title/Summary/Keyword: Disambiguation Method

A Framework for WordNet-based Word Sense Disambiguation (워드넷 기반의 단어 중의성 해소 프레임워크)

  • Ren, Chulan;Cho, Sehyeong
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.4 / pp.325-331 / 2013
  • This paper presents a framework and method for word sense disambiguation, along with experimental results. In this work, WordNet is used for two different purposes: as a dictionary and as an ontology containing a hierarchical structure that represents hypernym-hyponym relations. The advantage of this approach is twofold. First, it provides a very simple method that is easily implemented. Second, it does not suffer from the lack of large-scale corpus data that a statistical method would require. In the future this can be extended to incorporate other relations, such as synonyms, meronyms, and antonyms.
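
The hypernym-hyponym traversal above can be sketched with a toy taxonomy standing in for WordNet (the concept names and structure below are illustrative assumptions, not the paper's actual data):

```python
# A minimal sketch of hypernym-based similarity over a tiny hand-made
# taxonomy used in place of WordNet (all entries are invented).
HYPERNYM = {
    "dog": "canine", "canine": "carnivore", "carnivore": "mammal",
    "cat": "feline", "feline": "carnivore",
    "oak": "tree", "tree": "plant",
    "mammal": "animal", "plant": "organism", "animal": "organism",
}

def path_to_root(concept):
    """Return the chain of hypernyms from a concept up to the root."""
    path = [concept]
    while path[-1] in HYPERNYM:
        path.append(HYPERNYM[path[-1]])
    return path

def shared_hypernym(a, b):
    """Lowest common hypernym of two concepts in the toy taxonomy."""
    ancestors = set(path_to_root(a))
    for node in path_to_root(b):
        if node in ancestors:
            return node
    return None

print(shared_hypernym("dog", "cat"))   # carnivore
print(shared_hypernym("dog", "oak"))   # organism
```

In WordNet proper, the same traversal runs over synset hypernym pointers, and similarity measures such as path length or shared-ancestor depth are derived from these chains.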

A Study on Statistical Feature Selection with Supervised Learning for Word Sense Disambiguation (단어 중의성 해소를 위한 지도학습 방법의 통계적 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.22 no.2 / pp.5-25 / 2011
  • This study aims to identify the most effective statistical feature selection method and context window size for word sense disambiguation using supervised methods. Features were selected by four different methods: information gain, document frequency, chi-square, and relevancy. The weight comparison showed that selecting the most appropriate features could improve word sense disambiguation performance, and information gain was the most effective method. The SVM classifier was little affected by feature selection and performed better with a larger feature set and context size. The Naive Bayes classifier performed best with a feature set of about 10 percent of the full size, and the kNN classifier with under 10 percent. When feature selection methods are applied to word sense disambiguation, combining a small feature set with a larger context window, or a large feature set with a smaller context window, yields the best performance improvements.
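
One of the four feature-scoring methods, information gain, can be sketched as H(senses) − H(senses | feature); the tiny data set below is invented for illustration, not the study's corpus:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of sense labels."""
    if not labels:
        return 0.0
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(samples, feature):
    """IG of a context feature; samples are (context_words, sense) pairs."""
    labels = [sense for _, sense in samples]
    with_f = [sense for words, sense in samples if feature in words]
    without = [sense for words, sense in samples if feature not in words]
    p = len(with_f) / len(samples)
    return entropy(labels) - (p * entropy(with_f) + (1 - p) * entropy(without))

# Invented toy samples for the ambiguous word "bank"
data = [({"river", "water"}, "bank/river"),
        ({"money", "loan"}, "bank/finance"),
        ({"water", "fish"}, "bank/river"),
        ({"loan", "rate"}, "bank/finance")]
print(information_gain(data, "water"))  # 1.0: 'water' fully separates the senses
```

Features are then ranked by this score and the top fraction kept, which is how a "10 percent feature set size" is obtained.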

A Study on Optimization of Support Vector Machine Classifier for Word Sense Disambiguation (단어 중의성 해소를 위한 SVM 분류기 최적화에 관한 연구)

  • Lee, Yong-Gu
    • Journal of Information Management / v.42 no.2 / pp.193-210 / 2011
  • This study varied context window sizes and term weighting methods to obtain the best word sense disambiguation performance with a support vector machine. The context windows were a 3-word, sentence, 50-byte, and document window around the target word. The weighting methods were Binary, Term Frequency (TF), TF × Inverse Document Frequency (TF×IDF), and Log TF × IDF. The 50-byte context window performed best, and the Binary weighting method showed the best performance.
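
The four weighting schemes can be sketched as follows (function and variable names are illustrative, and the plain-log IDF is an assumption; the study's exact normalization may differ):

```python
import math

def term_weights(window_tokens, doc_freq, n_docs, scheme):
    """Weight the terms in one context window by one of the four schemes:
    'binary', 'tf', 'tfidf' (TF x IDF), or 'logtf_idf' (Log TF x IDF)."""
    counts = {}
    for t in window_tokens:
        counts[t] = counts.get(t, 0) + 1
    weights = {}
    for t, tf in counts.items():
        idf = math.log(n_docs / doc_freq[t])  # plain log IDF, an assumption
        weights[t] = {"binary": 1.0,
                      "tf": float(tf),
                      "tfidf": tf * idf,
                      "logtf_idf": (1.0 + math.log(tf)) * idf}[scheme]
    return weights

window = ["money", "bank", "money"]   # toy 3-word context window
df = {"money": 10, "bank": 50}        # invented document frequencies
print(term_weights(window, df, 100, "binary"))  # {'money': 1.0, 'bank': 1.0}
```

Each weighted window becomes one feature vector for the SVM, so the scheme directly shapes the classifier's input space.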

Word Sense Disambiguation using Korean Word Space Model (한국어 단어 공간 모델을 이용한 단어 의미 중의성 해소)

  • Park, Yong-Min;Lee, Jae-Sung
    • The Journal of the Korea Contents Association / v.12 no.6 / pp.41-47 / 2012
  • Various Korean word sense disambiguation methods have been proposed that use small-scale sense-tagged corpora and dictionary definitions to calculate entropy, conditional probability, mutual information, and so on. This paper proposes a method using the Korean Word Space model, which builds word vectors from a large-scale sense-tagged corpus and disambiguates word senses by calculating similarity between the word vectors. An experiment with the Sejong morph sense-tagged corpus showed 94% precision on 200 sentences (583 word types), which is much superior to the other known methods.

An Experimental Study on an Effective Word Sense Disambiguation Model Based on Automatic Sense Tagging Using Dictionary Information (사전 정보를 이용한 단어 중의성 해소 모형에 관한 실험적 연구)

  • Lee, Yong-Gu;Chung, Young-Mee
    • Journal of the Korean Society for Information Management / v.24 no.1 s.63 / pp.321-342 / 2007
  • This study presents an effective word sense disambiguation model that does not require a manual sense tagging process, by automatically tagging the right sense using machine-readable dictionary and collocation co-occurrence-based methods. The dictionary information-based method with multiple feature selection achieved a tagging accuracy of 70.06%, and the collocation co-occurrence-based method 56.33%. The sense classifier using the dictionary information-based tagging method showed a classification accuracy of 68.11%, and the one using the collocation co-occurrence-based tagging method 62.09%. The combined tagging method, applying a data fusion technique, achieved a tagging accuracy of 76.09%, resulting in a classification accuracy of 76.16%.
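
The dictionary-based automatic tagging can be sketched as a simple gloss-overlap rule, a Lesk-style simplification of the method (the glosses below are invented stand-ins for machine-readable dictionary entries):

```python
def tag_sense(context_words, sense_glosses):
    """Pick the sense whose dictionary gloss shares the most words
    with the context (a gloss-overlap simplification)."""
    context = set(context_words)
    return max(sense_glosses, key=lambda s: len(context & sense_glosses[s]))

# Invented glosses standing in for dictionary definitions
glosses = {"bank/river": {"sloping", "land", "beside", "river", "water"},
           "bank/finance": {"institution", "money", "deposit", "loan"}}
print(tag_sense(["deposit", "money", "account"], glosses))  # bank/finance
```

The automatically tagged occurrences then serve as training data for the sense classifier, removing the manual tagging step.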

Word Sense Disambiguation using Meaning Groups (의미그룹을 이용한 단어 중의성 해소)

  • Kim, Eun-Jin;Lee, Soo-Won
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.747-751 / 2010
  • This paper proposes a method that increases the accuracy of word sense tagging by automatically creating sense-tagged data using machine-readable dictionaries. The concept of a meaning group is applied, where the meaning group for each sense of a target word consists of the target word's neighboring words. To enhance tagging accuracy, the notion of concentration is used to weight each word in a meaning group. Tagging results on SENSEVAL-2 data show that the accuracy of the proposed method is better than that of existing ones.
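
One plausible reading of the concentration weight — the share of a neighbor word's occurrences tied to a single sense — can be sketched as follows (both the counts and the exact formula are assumptions for illustration, not the paper's definition):

```python
def concentration(word, sense, counts):
    """Fraction of a word's co-occurrences belonging to one sense."""
    total = sum(counts[s].get(word, 0) for s in counts)
    return counts[sense].get(word, 0) / total if total else 0.0

def sense_score(context, sense, counts):
    """Score a sense by summing concentration weights over the context."""
    return sum(concentration(w, sense, counts) for w in context)

# Invented meaning-group counts: neighbor word -> frequency per sense
counts = {"s1": {"water": 4, "fish": 2},
          "s2": {"money": 5, "water": 1}}
```

Words that appear almost exclusively in one sense's meaning group thus dominate the score, while ambiguous neighbors contribute little.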

Generalization of Ontology Instances Based on WordNet and Google (워드넷과 구글에 기반한 온톨로지 개체의 일반화)

  • Kang, Sin-Jae;Kang, In-Su
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.3 / pp.363-370 / 2009
  • In order to populate an ontology, this paper presents a generalization method for ontology instances extracted from texts and web pages. It uses unsupervised learning techniques for word sense disambiguation that exploit open APIs and lexical resources such as Google and WordNet. According to the experimental results, the method achieved a 15.8% improvement over previous research.

A Study on Word Sense Disambiguation Using Bidirectional Recurrent Neural Network for Korean Language

  • Min, Jihong;Jeon, Joon-Woo;Song, Kwang-Ho;Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information / v.22 no.4 / pp.41-49 / 2017
  • Word sense disambiguation (WSD), which determines the exact meaning of a homonym that can be used with different meanings even in a single form, is very important for understanding the semantics of a text document. Many recent studies on WSD have used neural network language models (NNLM), in which a neural network represents a document as vectors and analyzes its semantics. Among previous WSD studies using NNLM, the recurrent neural network (RNN) model performs better than other models because it can reflect the order of word occurrences in addition to word appearance information in a document. However, since the RNN model uses only the forward order of word occurrences, it cannot reflect the characteristic of natural language that later words can affect the meaning of preceding words. In this paper, we propose a WSD scheme using a bidirectional RNN that reflects both the forward and the backward order of word occurrences in a document. In the experiments, the accuracy of the proposed model is higher than that of the previous RNN-based method. Hence, bidirectional word-order information is confirmed to be useful for WSD in Korean.
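
The bidirectional idea — each position's representation combines a left-to-right and a right-to-left recurrent state — can be sketched with a toy one-unit RNN over scalar inputs (the weights and sizes are illustrative, not the paper's model):

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.5):
    """One-unit vanilla RNN: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def birnn_states(seq):
    """Pair each position's forward state with its backward state, so a
    word's representation sees both its left and its right context."""
    fwd = rnn_pass(seq)
    bwd = rnn_pass(seq[::-1])[::-1]
    return list(zip(fwd, bwd))
```

In the actual model each state is a vector, the inputs are word embeddings, and a softmax layer over the concatenated forward and backward states predicts the sense.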

Noun Sense Disambiguation Based-on Corpus and Conceptual Information (말뭉치와 개념정보를 이용한 명사 중의성 해소 방법)

  • 이휘봉;허남원;문경희;이종혁
    • Korean Journal of Cognitive Science / v.10 no.2 / pp.1-10 / 1999
  • This paper proposes a noun sense disambiguation method based on a corpus and conceptual information. Previous research has restricted the use of linguistic knowledge to the lexical level. Since knowledge extracted from a corpus is stored in the words themselves, such methods require a large amount of space for the knowledge and have a low recall rate. In contrast, we resolve noun sense ambiguity by using concept co-occurrence information extracted from an automatically sense-tagged corpus. In one experimental evaluation the method achieved, on average, a precision of 82.4%, an improvement over the baseline of 14.6%. Considering that the test corpus is completely unrelated to the learning corpus, this is a promising result.
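
Counting co-occurrences at the concept level rather than the word level can be sketched as follows (the word-to-concept map is an invented stand-in for the sense-tagged corpus annotations):

```python
from collections import Counter

# Invented word -> concept map (in the paper, concepts come from the
# automatically sense-tagged corpus)
WORD_TO_CONCEPT = {"dog": "ANIMAL", "cat": "ANIMAL",
                   "oak": "PLANT", "pine": "PLANT", "vet": "PERSON"}

def concept_cooccurrence(sentences):
    """Count concept pairs co-occurring in a sentence; storing knowledge
    per concept (not per word) needs less space and generalizes better."""
    pairs = Counter()
    for tokens in sentences:
        concepts = sorted({WORD_TO_CONCEPT[t] for t in tokens
                           if t in WORD_TO_CONCEPT})
        for i, a in enumerate(concepts):
            for b in concepts[i + 1:]:
                pairs[(a, b)] += 1
    return pairs

print(concept_cooccurrence([["dog", "vet"], ["cat", "vet"]]))
# ANIMAL-PERSON co-occurs twice even though 'dog' and 'cat' each occur once
```

Because evidence accumulates on concepts, a test word never seen in training can still be disambiguated through its concept's co-occurrence profile.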

Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung;Choe Jae-Woong
    • Language and Information / v.10 no.1 / pp.21-46 / 2006
  • This study proposes a word sense disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequencies and their probabilities. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which can be meaningful in context. In this paper, we show an algorithm that extracts and makes use of meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), according to which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers and then move to more general ones. We build up a small seed data set and apply it to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively included in the seed data, thus expanding the seed data. However, this might introduce a lot of noise into the seed data, so we introduce the 'maximum a posteriori hypothesis', based on Bayes' assumption, to validate the noise status of the new seed data. We use a Naive Bayes classifier and show that applying the Find-Specific algorithm enhances the correctness of WSD.
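
The Naive Bayes component can be sketched as follows (the seed data is invented, and the Find-Specific expansion and MAP-based noise filtering are omitted for brevity):

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: (context_words, sense) pairs. Collect sense priors and
    per-sense word counts for a multinomial Naive Bayes classifier."""
    priors, word_counts = Counter(), defaultdict(Counter)
    for words, sense in samples:
        priors[sense] += 1
        word_counts[sense].update(words)
    return priors, word_counts

def classify(words, priors, word_counts, alpha=1.0):
    """argmax over senses of log P(sense) + sum log P(word|sense),
    with add-alpha smoothing over the training vocabulary."""
    vocab = {w for c in word_counts.values() for w in c}
    n = sum(priors.values())
    best, best_lp = None, float("-inf")
    for sense, prior in priors.items():
        total = sum(word_counts[sense].values())
        lp = math.log(prior / n)
        for w in words:
            lp += math.log((word_counts[sense][w] + alpha)
                           / (total + alpha * len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# Invented seed data for the ambiguous word "bank"
seed = [(["river", "water"], "bank/river"),
        (["money", "loan"], "bank/finance")]
priors, word_counts = train_nb(seed)
print(classify(["money", "deposit"], priors, word_counts))  # bank/finance
```

In the bootstrapping loop, newly classified test contexts are folded back into `seed` and retrained, with the MAP check deciding whether each new item is noise.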
