• Title/Summary/Keyword: Sense disambiguation

Word Sense Disambiguation Using Embedded Word Space

  • Kang, Myung Yun;Kim, Bogyum;Lee, Jae Sung
    • Journal of Computing Science and Engineering / v.11 no.1 / pp.32-38 / 2017
  • Determining the correct word sense among ambiguous senses is essential for semantic analysis. One model for word sense disambiguation is the word space model, which is structurally simple and effective. However, when the context word vectors in the word space model are merged into sense vectors in a sense inventory, they typically become very large yet still suffer from lexical scarcity. In this paper, we propose a word sense disambiguation method using word embedding, whose additive compositionality makes the sense inventory vectors compact and efficient. Experiments with a Korean sense-tagged corpus show that our method is very effective.
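
The additive compositionality described above can be sketched roughly as follows: context word embeddings observed with each sense are summed into a compact sense vector, and a new context is disambiguated by cosine similarity. The embeddings, words, and senses below are toy values for illustration only, not the paper's actual data.

```python
import math

# Toy word embeddings; in practice these would come from a model such as
# word2vec trained on a large corpus.
EMB = {
    "money":   [0.9, 0.1, 0.0],
    "deposit": [0.8, 0.2, 0.1],
    "river":   [0.0, 0.9, 0.3],
    "water":   [0.1, 0.8, 0.4],
}

def add_vectors(words):
    """Merge context word vectors into one compact vector by addition."""
    dim = len(next(iter(EMB.values())))
    total = [0.0] * dim
    for w in words:
        if w in EMB:
            total = [t + e for t, e in zip(total, EMB[w])]
    return total

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context, sense_inventory):
    """Pick the sense whose composed vector is closest to the context vector."""
    ctx_vec = add_vectors(context)
    return max(sense_inventory, key=lambda s: cosine(ctx_vec, sense_inventory[s]))

# Sense inventory for "bank": each sense vector is the sum of the embeddings
# of context words observed with that sense in a sense-tagged corpus.
senses = {
    "bank/finance": add_vectors(["money", "deposit"]),
    "bank/river":   add_vectors(["river", "water"]),
}
print(disambiguate(["money"], senses))  # → bank/finance
```

Because the sense vector has the same dimensionality as a single embedding regardless of how many context words are summed, the inventory stays compact.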

Translation Disambiguation Based on 'Word-to-Sense and Sense-to-Word' Relationship (`단어-의미 의미-단어` 관계에 기반한 번역어 선택)

  • Lee Hyun-Ah
    • The KIPS Transactions: Part B / v.13B no.1 s.104 / pp.71-76 / 2006
  • To obtain a correctly translated sentence in a machine translation system, we must select target words that not only reflect the appropriate meaning of the source sentence but also form a fluent sentence in the target language. This paper points out that a source-language word has various senses and each sense can be mapped to multiple target words, and proposes a new translation disambiguation method based on this 'word-to-sense and sense-to-word' relationship. In this method, target words are chosen through disambiguation of the source word's sense followed by selection of a target word. Most translation disambiguation methods are based on a 'word-to-word' relationship, meaning that they translate a source word directly into a target word; they therefore require complicated knowledge sources that directly link source words to target words, such as bilingual aligned corpora, which are hard to obtain. By decomposing translation disambiguation into two sub-problems, one for each language, knowledge for translation disambiguation can be automatically extracted from monolingual knowledge sources that are easy to obtain. In addition, the disambiguation results satisfy both fidelity and intelligibility, because the selected target words have the correct meaning and generate naturally composed target sentences.
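
The two-step 'word-to-sense and sense-to-word' idea can be sketched as below, with purely hypothetical toy counts: source-language clue counts pick the sense, then monolingual target-language frequencies pick the target word. This is an illustration of the decomposition, not the paper's actual knowledge sources.

```python
# Step 1 knowledge: source-language co-occurrence counts per sense (word-to-sense).
SENSE_CLUES = {
    "interest/finance": {"bank": 5, "rate": 7},
    "interest/concern": {"hobby": 6, "topic": 4},
}
# Step 2 knowledge: target-word frequencies per sense (sense-to-word),
# obtainable from an easy-to-get monolingual target corpus.
TARGET_WORDS = {
    "interest/finance": {"이자": 9, "금리": 3},
    "interest/concern": {"관심": 8, "흥미": 5},
}

def choose_translation(context):
    # word-to-sense: score each sense by clue overlap with the source context
    sense = max(SENSE_CLUES,
                key=lambda s: sum(SENSE_CLUES[s].get(w, 0) for w in context))
    # sense-to-word: pick the most natural target word for that sense
    target = max(TARGET_WORDS[sense], key=TARGET_WORDS[sense].get)
    return sense, target

print(choose_translation(["bank", "rate"]))  # → ('interest/finance', '이자')
```

Neither knowledge table links source words directly to target words, which is exactly what lets each table be built from monolingual resources.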

Word sense disambiguation using dynamic sized context and distance weighting (가변 크기 문맥과 거리가중치를 이용한 동형이의어 중의성 해소)

  • Lee, Hyun Ah
    • Journal of Advanced Marine Engineering and Technology / v.38 no.4 / pp.444-450 / 2014
  • Most research on word sense disambiguation has used a static-sized context regardless of sentence patterns. This paper proposes using a dynamic-sized context that takes sentence patterns and the distance between words into account. We evaluated our system on 12 words in 32,735 sentences from the Sejong POS and sense-tagged corpus; the dynamic-sized context showed 92.2% average accuracy for predicates, better than the accuracy of a static-sized context.
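
Distance weighting as described above can be sketched like this: context words nearer the ambiguous word contribute more to each sense's score. The clue sets and sentences are toy examples, and a fixed window stands in for the paper's dynamic, pattern-dependent sizing.

```python
# Toy clue sets: words that signal each sense of "bat".
SENSE_CLUES = {
    "bat/animal": {"cave", "night"},
    "bat/sports": {"hit", "ball"},
}

def score_senses(tokens, target_idx, window=4):
    """Score each sense by distance-weighted clue matches around the target."""
    scores = {s: 0.0 for s in SENSE_CLUES}
    for i, w in enumerate(tokens):
        dist = abs(i - target_idx)
        if dist == 0 or dist > window:
            continue
        weight = 1.0 / dist  # distance weighting: nearer words count more
        for sense, clues in SENSE_CLUES.items():
            if w in clues:
                scores[sense] += weight
    return max(scores, key=scores.get)

tokens = ["hit", "the", "bat", "near", "the", "cave", "wall"]
print(score_senses(tokens, 2))  # → bat/sports
```

In the example both senses have a matching clue, but "hit" (distance 2, weight 0.5) outweighs "cave" (distance 3, weight 0.33), so the nearer evidence decides.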

Comparison Thai Word Sense Disambiguation Method

  • Modhiran, Teerapong;Kruatrachue, Boontee;Supnithi, Thepchai
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1307-1312 / 2004
  • Word sense disambiguation is one of the most important problems in natural language processing, underlying tasks such as information retrieval and machine translation. Many approaches can be employed to resolve word ambiguity with a reasonable degree of accuracy: knowledge-based, corpus-based, and hybrid strategies. This paper focuses on the corpus-based strategy. Its purpose is to compare three well-known machine learning techniques, SNoW, SVM, and Naive Bayes, for word sense disambiguation in Thai. Ten ambiguous words were selected and tested with word and POS features. The results show that the SVM algorithm gives the best results for Thai WSD, with an accuracy rate of approximately 83-96%.
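
Of the three techniques compared, Naive Bayes is the simplest to sketch. The minimal supervised classifier below trains on toy (sense, context-words) pairs standing in for a sense-tagged corpus; the words and senses are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Toy sense-tagged training data: (sense label, context words).
TRAIN = [
    ("plant/factory", ["worker", "steel", "production"]),
    ("plant/factory", ["production", "line", "worker"]),
    ("plant/flora",   ["leaf", "water", "grow"]),
    ("plant/flora",   ["grow", "soil", "leaf"]),
]

def train_nb(data):
    priors = Counter(sense for sense, _ in data)
    word_counts = defaultdict(Counter)
    vocab = set()
    for sense, words in data:
        word_counts[sense].update(words)
        vocab.update(words)
    return priors, word_counts, vocab

def classify(context, priors, word_counts, vocab):
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for sense in priors:
        # log P(sense) + sum of log P(word | sense), with add-one smoothing
        lp = math.log(priors[sense] / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            lp += math.log((word_counts[sense][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

model = train_nb(TRAIN)
print(classify(["worker", "production"], *model))  # → plant/factory
```

The same training data could be fed to an SVM with word and POS features, which is the configuration the paper found most accurate for Thai.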

Verb Sense Disambiguation using Subordinating Case Information (종속격 정보를 적용한 동사 의미 중의성 해소)

  • Park, Yo-Sep;Shin, Joon-Choul;Ock, Cheol-Young;Park, Hyuk-Ro
    • The KIPS Transactions: Part B / v.18B no.4 / pp.241-248 / 2011
  • Homographs can have multiple senses, so to understand the meaning of a sentence it is necessary to identify which sense is used for each word. Previous research on this problem relied heavily on word co-occurrence information. However, we noticed that for verbs, information about their subordinating cases can be used to further improve the performance of word sense disambiguation, since different senses require different sets of subordinating cases. In this paper, we propose verb sense disambiguation using subordinating case information, acquired from the postposition features in the Standard Korean Dictionary. Our experiment on 12 high-frequency verb homographs shows that adding case information improves word sense disambiguation by 1.34%, from 97.3% to 98.7%. Although this improvement may seem marginal, we think it is meaningful because the error rate is reduced to less than half, from 2.7% to 1.3%.
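
The case-information idea can be sketched as follows: each verb sense carries a set of subordinating cases, signalled in Korean by postpositions, and the sense whose case frame best matches the postpositions observed in the sentence is preferred. The frames below are invented for illustration and are not taken from the Standard Korean Dictionary.

```python
# Hypothetical case frames: which postposition-marked cases each sense of
# the verb 쓰다 tends to take (을/를 object, 에 locative, 으로 instrumental).
CASE_FRAMES = {
    "쓰다/write": {"을/를", "으로"},  # write something with an instrument
    "쓰다/wear":  {"을/를", "에"},    # wear something on a body part
}

def sense_by_case(observed_postpositions, frames):
    """Prefer the sense whose case frame matches the most observed cases."""
    return max(frames, key=lambda s: len(frames[s] & observed_postpositions))

print(sense_by_case({"을/를", "에"}, CASE_FRAMES))  # → 쓰다/wear
```

In a full system this score would be combined with co-occurrence evidence rather than used alone, matching the paper's reported small-but-halving-the-errors gain.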

A Method of Supervised Word Sense Disambiguation Using Decision Lists Based on Syntactic Clues (구문관계에 기반한 단서의 결정 리스트를 이용한 지도학습 어의 애매성 해결 방법)

  • Kim, Kweon-Yang
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.2 / pp.125-130 / 2003
  • This paper presents a simple method of supervised word sense disambiguation using decision lists based on syntactic clues. The approach focuses on the syntactic relations between the given ambiguous word and the surrounding words in context. By identifying and using only the single best piece of disambiguation evidence in a given context, instead of combining a set of clues, the algorithm decides the correct sense. Experiments with 10 Korean verbs show that adding syntactic clues to a basic set of surrounding context words yields 33% higher performance than the baseline accuracy. In addition, our method using decision lists performs 3% better than a method that integrates all the disambiguation evidence.
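
The decide-by-single-best-evidence mechanism is in the style of Yarowsky's decision lists: rank (clue, sense) pairs by a smoothed log-likelihood ratio and let the first matching clue decide. The syntactic clues and counts below are toy illustrations, not the paper's Korean data.

```python
import math

# counts[clue][sense]: how often each syntactic clue co-occurs with each sense.
COUNTS = {
    "subj:pitcher": {"throw/physical": 20, "throw/event": 1},
    "obj:party":    {"throw/physical": 1,  "throw/event": 15},
    "obj:ball":     {"throw/physical": 9,  "throw/event": 2},
}

def build_decision_list(counts):
    rules = []
    for clue, by_sense in counts.items():
        for sense, n in by_sense.items():
            other = sum(by_sense.values()) - n
            # smoothed log-likelihood ratio of this sense vs. the rest
            llr = math.log((n + 0.1) / (other + 0.1))
            rules.append((llr, clue, sense))
    return sorted(rules, reverse=True)  # strongest evidence first

def decide(clues, decision_list, default="throw/physical"):
    for _, clue, sense in decision_list:
        if clue in clues:  # the single best matching clue decides
            return sense
    return default

dlist = build_decision_list(COUNTS)
print(decide({"obj:party"}, dlist))  # → throw/event
```

No evidence combination happens: once the highest-ranked clue present in the context is found, weaker clues are ignored, which is exactly the property the abstract contrasts with integrating all evidence.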

A Study on Statistical Feature Selection with Supervised Learning for Word Sense Disambiguation (단어 중의성 해소를 위한 지도학습 방법의 통계적 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.22 no.2 / pp.5-25 / 2011
  • This study aims to identify the most effective statistical feature selection method and context window size for word sense disambiguation with supervised methods. Features were selected by four different methods: information gain, document frequency, chi-square, and relevancy. The weight comparison showed that identifying the most appropriate features can improve word sense disambiguation performance, with information gain performing best. The SVM classifier was not affected by feature selection and performed better with a larger feature set and context size. The Naive Bayes classifier performed best at a feature set size of 10 percent, and the kNN classifier at under 10 percent. When feature selection methods are applied to word sense disambiguation, either a small feature set with a larger context window, or a large feature set with a small context window, yields the best performance improvements.
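
Information gain, the best-performing of the four selection methods above, measures how much knowing whether a feature occurs reduces entropy over the sense labels. The sketch below uses toy (sense, feature-set) instances in place of real context-word features.

```python
import math

# Toy instances: (sense label, set of context-word features).
DATA = [
    ("sense1", {"bank", "money"}),
    ("sense1", {"money", "loan"}),
    ("sense2", {"river", "bank"}),
    ("sense2", {"river", "water"}),
]

def entropy(labels):
    if not labels:
        return 0.0
    n = len(labels)
    probs = [labels.count(s) / n for s in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(feature, data):
    """Entropy of the labels minus entropy after splitting on the feature."""
    labels = [s for s, _ in data]
    with_f = [s for s, feats in data if feature in feats]
    without = [s for s, feats in data if feature not in feats]
    n = len(data)
    cond = (len(with_f) / n) * entropy(with_f) + (len(without) / n) * entropy(without)
    return entropy(labels) - cond

# Rank features; only the top fraction (e.g. 10%) would be kept for the classifier.
features = {f for _, feats in DATA for f in feats}
ranked = sorted(features, key=lambda f: information_gain(f, DATA), reverse=True)
```

Here "money" and "river" perfectly separate the senses (gain 1.0) while the ambiguous "bank" carries no information (gain 0.0), so selection would keep the former and drop the latter.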

Word Sense Disambiguation of Predicate using Sejong Electronic Dictionary and KorLex (세종 전자사전과 한국어 어휘의미망을 이용한 용언의 어의 중의성 해소)

  • Kang, Sangwook;Kim, Minho;Kwon, Hyuk-chul;Jeon, SungKyu;Oh, Juhyun
    • KIISE Transactions on Computing Practices / v.21 no.7 / pp.500-505 / 2015
  • The Sejong Electronic (machine-readable) Dictionary, developed by the 21st Century Sejong Plan, contains systematic intrinsic information about Korean words. It addresses the problem of electronically representing a commonly used general text dictionary. Word sense disambiguation problems can also be solved using the specific information available in the Sejong Electronic Dictionary. However, the dictionary is limited in how it specifies sentence structures and selection-restricted nouns. In this paper, we discuss the limitations of word sense disambiguation using the subcategorization information provided by the Sejong Electronic Dictionary, and generalize the selection-restricted nouns of arguments using the Korean lexico-semantic network (KorLex).
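
Generalizing selectional restrictions with a lexico-semantic network can be sketched as a hypernym-chain lookup: if a noun is not listed directly in an argument slot, climb its hypernyms and check whether any ancestor satisfies the restriction. The tiny hierarchy below is illustrative, not actual KorLex data.

```python
# Toy hypernym links: 사과(apple) → 과일(fruit) → 음식(food); 버스(bus) → 탈것(vehicle).
HYPERNYM = {"사과": "과일", "과일": "음식", "버스": "탈것"}

def satisfies(noun, restriction, hypernym):
    """Walk up the hypernym chain to test a selectional restriction."""
    while noun is not None:
        if noun == restriction:
            return True
        noun = hypernym.get(noun)
    return False

# Suppose one sense of 먹다(eat) restricts its object to 음식 (food):
print(satisfies("사과", "음식", HYPERNYM))  # → True
print(satisfies("버스", "음식", HYPERNYM))  # → False
```

This is what lets a dictionary entry that only lists a few example nouns cover every noun under the same semantic class.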

Word Sense Disambiguation using Korean Word Space Model (한국어 단어 공간 모델을 이용한 단어 의미 중의성 해소)

  • Park, Yong-Min;Lee, Jae-Sung
    • The Journal of the Korea Contents Association / v.12 no.6 / pp.41-47 / 2012
  • Various Korean word sense disambiguation methods have been proposed that use small-scale sense-tagged corpora and dictionary definitions to calculate entropy, conditional probability, mutual information, and so on. This paper proposes a method using the Korean Word Space model, which builds word vectors from a large-scale sense-tagged corpus and disambiguates word senses by calculating the similarity between the word vectors. An experiment with the Sejong morph sense-tagged corpus showed 94% precision on 200 sentences (583 word types), which is much better than the other known methods.

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.1-13 / 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been done. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Although much progress has been made over the years in improving search engines so that they provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, avoiding expensive sense-tagging processes. It tests the effectiveness of the method with the Naïve Bayes model, one of the supervised learning algorithms, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the two resources were tested both combined and separately, using cross validation. Only nouns, the targets of word sense disambiguation, were selected.
93,522 word senses among 265,655 nouns, along with 56,914 sentences from related proverbs and examples, were additionally combined in the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it was tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named entity dictionary of the Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given the extracted term vectors and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector space model based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiment finds better precision and recall with the merged corpus. This suggests the method can practically enhance the performance of Internet search engines and help us understand the meaning of sentences more accurately in natural language processing applications such as search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence.
Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.