• Title/Summary/Keyword: Word sense disambiguation


An Evaluation of Translation Quality by Homograph Disambiguation in Korean-X Neural Machine Translation Systems (한-X 신경기계번역시스템에서 동형이의어 분별에 따른 번역질 평가)

  • Nguyen, Quang-Phuoc;Shin, Joon-Choul;Ock, Cheol-Young
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.504-509
    • /
    • 2018
  • Neural machine translation (NMT) has recently achieved state-of-the-art performance, but it has been reported to fail at word sense disambiguation (WSD) for several popular language pairs. In this paper, we explore the extent to which NMT systems are able to disambiguate Korean homographs. Homographs, words with the same written form but different meanings, cause word-choice problems for NMT systems. Consistent with the findings for other popular language pairs, we discover that NMT systems fail to translate Korean homographs correctly. We apply a Korean word sense disambiguation tool, UTagger, to improve NMT translation quality. We conducted translation experiments on Korean-English and Korean-Vietnamese language pairs. The results show that UTagger significantly improves the translation quality of NMT in terms of the BLEU, TER, and DLRATIO evaluation metrics.
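The sense-annotation step described above can be sketched as a preprocessing pass over the source sentence before it reaches the NMT system. The sense codes and tiny lexicon below are invented for illustration; a real pipeline would take them from UTagger's output rather than a hand-written table.

```python
# Sketch: folding WSD output into NMT source tokens, in the spirit of the
# pipeline above. Sense codes here are illustrative, not UTagger's format.
SENSE_LEXICON = {
    ("배", "fruit"): "배__05",   # pear
    ("배", "vessel"): "배__02",  # ship
    ("배", "body"): "배__01",    # belly
}

def tag_homographs(tokens, senses):
    """Replace each homograph with a sense-annotated token, so the NMT
    system sees a distinct vocabulary item per meaning."""
    return [SENSE_LEXICON.get((tok, sense), tok)
            for tok, sense in zip(tokens, senses)]

tagged = tag_homographs(["나는", "배", "를", "먹었다"],
                        [None, "fruit", None, None])
# the ambiguous "배" becomes the unambiguous token "배__05"
```

Because each sense now has its own token, the NMT model can learn separate translations for each meaning instead of averaging over them.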


A Study on Word Sense Disambiguation Using Bidirectional Recurrent Neural Network for Korean Language

  • Min, Jihong;Jeon, Joon-Woo;Song, Kwang-Ho;Kim, Yoo-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.4
    • /
    • pp.41-49
    • /
    • 2017
  • Word sense disambiguation (WSD), which determines the exact meaning of a homonym that can carry different meanings in a single written form, is essential for understanding the semantics of a text document. Much recent WSD research has used neural network language models (NNLMs), in which a neural network represents a document as vectors and analyzes its semantics. Among previous NNLM-based WSD approaches, recurrent neural network (RNN) models outperform others because an RNN can reflect the order in which words occur, in addition to their mere appearance, in a document. However, since an RNN processes words only in their forward order, it cannot capture a characteristic of natural language: later words can affect the meaning of preceding ones. In this paper, we propose a WSD scheme using a bidirectional RNN that reflects both the forward and the backward order of word occurrences in a document. In our experiments, the accuracy of the proposed model is higher than that of the previous RNN-based method, confirming that bidirectional word-order information is useful for WSD in Korean.
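The bidirectional idea above can be sketched as two independent recurrent passes whose final states are concatenated. The toy forward pass below (plain tanh RNN, random weights, invented sizes) only illustrates the shape of the computation, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H = 50, 8, 16        # vocab, embedding, hidden sizes (toy values)
E = rng.normal(size=(V, D))                     # word embeddings
Wf, Uf = rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1
Wb, Ub = rng.normal(size=(H, D)) * 0.1, rng.normal(size=(H, H)) * 0.1

def rnn_pass(ids, W, U):
    """One tanh step per token; returns the final hidden state."""
    h = np.zeros(H)
    for i in ids:
        h = np.tanh(W @ E[i] + U @ h)
    return h

def bidirectional_context(ids):
    """Concatenate a forward pass and a backward pass, so the resulting
    representation sees both preceding and following words."""
    return np.concatenate([rnn_pass(ids, Wf, Uf),
                           rnn_pass(ids[::-1], Wb, Ub)])

vec = bidirectional_context([3, 17, 42, 5])     # shape (2 * H,)
```

A sense classifier would then be trained on `vec`; the backward half is precisely what lets later words influence the disambiguation of earlier ones.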

A Method of Supervised Word Sense Disambiguation Using Decision Lists Based on Syntactic Clues (구문관계에 기반한 단서의 결정 리스트를 이용한 지도학습 어의 애매성 해결 방법)

  • Kim, Kweon-Yang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.125-130
    • /
    • 2003
  • This paper presents a simple method of supervised word sense disambiguation using decision lists based on syntactic clues. The approach focuses on the syntactic relations between a given ambiguous word and its surrounding context words to resolve the sense ambiguity. By identifying and applying only the single best piece of disambiguating evidence in a given context, instead of combining a set of clues, the algorithm decides the correct sense. Experiments with 10 Korean verbs show that adding syntactic clues to a basic set of surrounding context words improves performance by 33% over the baseline accuracy. In addition, our decision-list method scores 3% higher than a method that integrates all the disambiguating evidence.
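The "single best evidence" strategy can be sketched as a Yarowsky-style decision list: clues are ranked by the absolute log-likelihood ratio between two senses, and only the highest-ranked clue present in the context fires. The clue names and smoothing constant below are our assumptions, not the paper's exact setup.

```python
from math import log

def train_decision_list(instances):
    """instances: (clue, sense) pairs with senses 'A'/'B'. Rank clues by
    |log-likelihood ratio| between the senses (add-0.1 smoothing)."""
    counts = {}
    for clue, sense in instances:
        counts.setdefault(clue, {"A": 0.1, "B": 0.1})
        counts[clue][sense] += 1
    return sorted(
        (abs(log(c["A"] / c["B"])), clue, "A" if c["A"] > c["B"] else "B")
        for clue, c in counts.items())[::-1]

def disambiguate(dlist, clues, default="A"):
    """Apply only the single highest-ranked clue found in the context."""
    for _, clue, sense in dlist:
        if clue in clues:
            return sense
    return default

rules = train_decision_list([("subj:강", "A"), ("subj:강", "A"),
                             ("obj:돈", "B"), ("obj:돈", "B"), ("obj:돈", "B")])
# when both clues match, the stronger one decides alone:
sense = disambiguate(rules, {"subj:강", "obj:돈"})  # → "B"
```

Note that no evidence combination happens: a weaker matching clue is simply ignored, which is what distinguishes this method from the integrated-evidence baseline it beats by 3%.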

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media sites on the Internet generate unstructured or semi-structured data every second, mostly in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from such datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Although much progress has been made over recent years in improving search engines, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation from the examples in existing dictionaries, avoiding expensive sense-tagging processes. We evaluate the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiments, the two resources were tested both combined and separately, using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected. 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were added to the corpus. The Sejong Corpus was easily merged with the dictionary because it is tagged with the sense indices defined by the Korean Standard Unabridged Dictionary. Sense vectors were formed after the merged corpus was created. The terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense-vector model built in the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show better precision and recall with the merged corpus, demonstrating the effectiveness of combining dictionary examples with the Sejong Corpus. This suggests the method can practically enhance the performance of Internet search engines and support more accurate interpretation of sentences in natural language processing tasks such as search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent. Although this assumption is unrealistic and ignores correlations between attributes, the classifier is widely used because of its simplicity and its practical effectiveness in applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations, or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
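The core idea — treating dictionary example sentences as a ready-made sense-tagged corpus and training a Naïve Bayes classifier on them — can be sketched as follows. The toy sense inventory and context words are invented; the real system works over the full merged dictionary/corpus resource.

```python
from collections import Counter
from math import log

def train_nb(examples):
    """examples: {sense: [context-word lists]} harvested from dictionary
    example sentences, standing in for a manually sense-tagged corpus."""
    priors, likes, vocab = {}, {}, set()
    total = sum(len(v) for v in examples.values())
    for sense, sents in examples.items():
        priors[sense] = len(sents) / total
        likes[sense] = Counter(w for s in sents for w in s)
        vocab |= set(likes[sense])
    return priors, likes, vocab

def classify(model, context):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense),
    with Laplace smoothing for unseen (sense, word) pairs."""
    priors, likes, vocab = model
    best, best_lp = None, float("-inf")
    for sense in priors:
        n = sum(likes[sense].values())
        lp = priors[sense] and log(priors[sense])
        lp += sum(log((likes[sense][w] + 1) / (n + len(vocab)))
                  for w in context if w in vocab)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

# two senses of "배" with dictionary-style example contexts
model = train_nb({"pear": [["먹다", "달다"], ["과일", "먹다"]],
                  "ship": [["바다", "타다"], ["항구", "타다"]]})
```

The "all senses are independent" assumption mentioned in the abstract corresponds to the per-word factorization inside `classify`.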

Word Sense Classification Using Support Vector Machines (지지벡터기계를 이용한 단어 의미 분류)

  • Park, Jun Hyeok;Lee, Songwook
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.563-568
    • /
    • 2016
  • The word sense disambiguation problem is to find, within a sentence, the correct sense of an ambiguous word that has multiple senses in a dictionary. We regard this as a multi-class classification problem and classify the ambiguous word using Support Vector Machines. Context words of the ambiguous word, extracted from the Sejong sense-tagged corpus, are represented in two kinds of vector space. One is composed of context-word vectors with binary weights; in the other, the context words are mapped through a word embedding model. In our experiments, we achieved an accuracy of 87.0% with the context-word vectors and 86.0% with the word embedding model.
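The first (binary-weight) vector space can be sketched directly: each distinct context word gets a dimension, and a training instance is a 0/1 vector marking which words occur near the ambiguous target. The resulting vectors would then be fed to a multi-class SVM; the feature construction below is the only part we sketch.

```python
def build_vocab(contexts):
    """Index every context word once; positions define the vector space."""
    vocab = sorted({w for ctx in contexts for w in ctx})
    return {w: i for i, w in enumerate(vocab)}

def binary_vector(context, index):
    """1 if the word occurs near the ambiguous word, else 0 — the
    binary-weight representation described above."""
    v = [0] * len(index)
    for w in context:
        if w in index:
            v[index[w]] = 1
    return v

idx = build_vocab([["a", "b"], ["b", "c"]])
vec = binary_vector(["b", "z"], idx)   # out-of-vocabulary "z" is dropped
```

The embedding-based alternative replaces these sparse 0/1 dimensions with dense vectors looked up from a trained word embedding model.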

A Framework for WordNet-based Word Sense Disambiguation (워드넷 기반의 단어 중의성 해소 프레임워크)

  • Ren, Chulan;Cho, Sehyeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.23 no.4
    • /
    • pp.325-331
    • /
    • 2013
  • This paper presents a framework and method for word sense disambiguation and reports the results. In this work, WordNet is used for two different purposes: as a dictionary and as an ontology containing the hierarchical structure that represents hypernym-hyponym relations. The advantage of this approach is twofold. First, it provides a very simple method that is easily implemented. Second, it does not suffer from the lack of large corpus data that a statistical method would require. In the future, the framework can be extended to incorporate other relations, such as synonyms, meronyms, and antonyms.
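One simple way to use the hypernym hierarchy as an ontology is to pick the sense of an ambiguous word that lies closest to the senses of its context words in the hypernym graph. The synset names and edges below are a hand-built toy standing in for WordNet; the paper's actual scoring may differ.

```python
from collections import deque, defaultdict

# Toy hypernym edges standing in for WordNet's hierarchy (all invented).
EDGES = [("bank.n.01", "institution"), ("institution", "entity"),
         ("bank.n.02", "slope"), ("slope", "land"), ("land", "entity"),
         ("money.n.01", "currency"), ("currency", "institution"),
         ("river.n.01", "stream"), ("stream", "land")]

GRAPH = defaultdict(set)
for a, b in EDGES:
    GRAPH[a].add(b)
    GRAPH[b].add(a)

def distance(a, b):
    """Shortest hypernym-path length (BFS over undirected edges)."""
    seen, q = {a}, deque([(a, 0)])
    while q:
        node, d = q.popleft()
        if node == b:
            return d
        for nxt in GRAPH[node] - seen:
            seen.add(nxt)
            q.append((nxt, d + 1))
    return float("inf")

def pick_sense(senses, context_sense):
    """Choose the candidate sense nearest to the context word's sense."""
    return min(senses, key=lambda s: distance(s, context_sense))
```

With `money.n.01` in context the financial sense wins; with `river.n.01` the shoreline sense does, because path length through shared ancestors is shorter.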

A Study on Optimization of Support Vector Machine Classifier for Word Sense Disambiguation (단어 중의성 해소를 위한 SVM 분류기 최적화에 관한 연구)

  • Lee, Yong-Gu
    • Journal of Information Management
    • /
    • v.42 no.2
    • /
    • pp.193-210
    • /
    • 2011
  • This study varied the context window size and the term-weighting method to obtain the best word sense disambiguation performance from a support vector machine. The context window sizes were a 3-word window, the sentence, a 50-byte window, and the whole document around the target word. The weighting methods were binary, term frequency (TF), TF × inverse document frequency (IDF), and log TF × IDF. The 50-byte context window and the binary weighting method performed best.
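The four weighting schemes compared above can be written down in a few lines. The exact log-TF variant used in the paper is not specified, so the `1 + log(tf)` form below is a common convention, not a confirmed detail.

```python
from math import log

def weights(tf, df, n_docs, scheme):
    """The four term-weighting schemes compared in the study above.
    tf: term frequency in the context; df: document frequency;
    n_docs: corpus size. The log-TF form is one common convention."""
    idf = log(n_docs / df)
    return {"binary": 1 if tf > 0 else 0,
            "tf": tf,
            "tfidf": tf * idf,
            "logtfidf": (1 + log(tf)) * idf if tf > 0 else 0}[scheme]
```

That the plain binary scheme won suggests mere presence of a context word carries most of the disambiguating signal at these window sizes.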

A Word Sense Disambiguation Method with a Semantic Network (의미네트워크를 이용한 단어의미의 모호성 해결방법)

  • Ra, Dingyul
    • Korean Journal of Cognitive Science
    • /
    • v.3 no.2
    • /
    • pp.225-248
    • /
    • 1992
  • In this paper, word sense disambiguation methods that utilize a knowledge base built on a semantic network are introduced. The basic idea is to keep track of a set of paths in the knowledge base that correspond to the incremental semantic interpretation of an input sentence; these paths are called semantic paths. When the parser reads a word, the senses of that word not involved in any of the semantic paths are removed. The removal is then propagated through the knowledge base, triggering the removal of senses of words read earlier. This operation is invoked recursively as long as senses can be removed, and is called recursive word sense removal. Concretion of a vague word's concept is another important word sense disambiguation method; we introduce a technique called path adjustment that extends the concretion operation. How to use semantic association or syntactic processing in cooperation with the above methods is also considered.
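The recursive removal can be sketched as constraint propagation to a fixed point: a candidate sense survives only while every other word still has at least one sense connected to it in the network. The two-word example and the `linked` predicate are our simplification of the semantic-path machinery.

```python
def recursive_sense_removal(senses, linked):
    """senses: {word: set of candidate senses}. linked(a, b) -> bool says
    a semantic path joins the two senses in the knowledge base. Drop a
    candidate when some other word has no remaining sense linked to it;
    each removal re-triggers the check, as in the recursive removal above."""
    changed = True
    while changed:
        changed = False
        for w in senses:
            for s in list(senses[w]):
                if any(not any(linked(s, t) for t in senses[o])
                       for o in senses if o != w and senses[o]):
                    senses[w].discard(s)
                    changed = True
    return senses

# "배" (pear/ship) next to "타다" (to ride): only ship~ride are linked.
resolved = recursive_sense_removal(
    {"배": {"pear", "ship"}, "타다": {"ride"}},
    lambda a, b: {a, b} == {"ship", "ride"})
```

The outer loop is what makes the removal recursive: eliminating one sense can strip the support of a sense chosen for an earlier word, so the pass repeats until nothing changes.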

Word Sense Disambiguation of Predicate using Semi-supervised Learning and Sejong Electronic Dictionary (세종 전자사전과 준지도식 학습 방법을 이용한 용언의 어의 중의성 해소)

  • Kang, Sangwook;Kim, Minho;Kwon, Hyuk-chul;Oh, Jyhyun
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.2
    • /
    • pp.107-112
    • /
    • 2016
  • The Sejong Electronic (machine-readable) Dictionary, developed under the 21st Century Sejong Plan, contains systematically organized information on Korean words. It helps solve problems encountered with the electronic formatting of the still-commonly-used hard-copy dictionary. The Sejong Electronic Dictionary, however, has limitations related to sentence structure and selection-restricted nouns. This paper discusses the limitations of word sense disambiguation (WSD) that uses the subcategorization information provided by the Sejong Electronic Dictionary and selection-restricted nouns generalized from the Korean lexico-semantic network. An alternative method that makes WSD decisions using semi-supervised learning, the chi-square test, and other means is presented.
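The chi-square test mentioned above scores the association between a sense and a co-occurring context noun from a 2×2 contingency table. The formula is the standard one; how exactly the paper builds its tables is not stated in the abstract, so the framing below is our sketch.

```python
def chi_square(o11, o12, o21, o22):
    """Chi-square statistic for a 2x2 contingency table, e.g.
      o11: sense-1 sentences containing the noun
      o12: sense-1 sentences without it
      o21: sense-2 sentences containing the noun
      o22: sense-2 sentences without it
    A large value means the noun is strongly associated with one sense."""
    n = o11 + o12 + o21 + o22
    num = n * (o11 * o22 - o12 * o21) ** 2
    den = (o11 + o12) * (o21 + o22) * (o11 + o21) * (o12 + o22)
    return num / den
```

Independent counts score 0, while a skewed table scores high, so thresholding this statistic gives a cheap way to admit or reject candidate selection-restriction nouns without full supervision.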

Efficient Part-of-Speech Set for Knowledge-based Word Sense Disambiguation of Korean Nouns (한국어 명사의 지식기반 의미중의성 해소를 위한 효과적인 품사집합)

  • Kwak, Chul-Heon;Seo, Young-Hoon;Lee, Chung-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.4
    • /
    • pp.418-425
    • /
    • 2016
  • This paper presents the part-of-speech sets that are most efficient for knowledge-based word sense disambiguation of Korean nouns. A test set of 174,000 sentences was extracted from the Sejong semantically tagged corpus, whose senses are based on the Standard Korean Dictionary. We disambiguate the selected nouns in the test set using the glosses and examples in the Standard Korean Dictionary. We select the 15-tag part-of-speech set that gives the best performance over the whole test set and the 17-tag set that gives the best average accuracy over the selected nouns. These part-of-speech sets yield 12% better performance than the full 45-tag part-of-speech set.
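The mechanism behind the result above is simple: restrict the context to morphemes whose POS tag is in the chosen set, then score each sense by overlap with its dictionary gloss and examples. The 3-tag `allowed` set below is illustrative only, not the paper's 15- or 17-tag set.

```python
def filter_by_pos(tagged_tokens, allowed):
    """Keep only context morphemes whose Sejong-style POS tag is in the
    chosen set, before computing gloss/example overlap."""
    return [w for w, pos in tagged_tokens if pos in allowed]

def overlap_score(context, gloss_words):
    """Lesk-style score: distinct words shared with the sense's gloss."""
    return len(set(context) & set(gloss_words))

ctx = filter_by_pos([("사과", "NNG"), ("를", "JKO"), ("먹다", "VV")],
                    {"NNG", "VV"})          # particles are filtered out
score = overlap_score(ctx, ["먹다", "달다"])
```

Dropping uninformative tags (particles, endings) removes spurious overlaps, which is why a well-chosen subset beats the full 45-tag set.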