• Title/Summary/Keyword: target word selection


Ranking Translation Word Selection Using a Bilingual Dictionary and WordNet

  • Kim, Kweon-Yang; Park, Se-Young
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.1 / pp.124-129 / 2006
  • This paper presents a method of ranking candidate translations for Korean verbs based on lexical knowledge contained in a bilingual Korean-English dictionary and WordNet, both easily obtainable knowledge resources. We focus on deciding which translation of the target word is most appropriate, using a measure of semantic relatedness over 45 extended relations between the possible translations of the target word and indicative clue words that serve as predicate arguments in the source-language text. To reduce the influence of possibly unwanted senses, we rank the possible word senses for each translation word by measuring semantic similarity between the translation word and its near-synonyms. We report an average accuracy of 51% on ten ambiguous Korean verbs. The evaluation suggests that our approach outperforms the default baseline and previous work.
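As a toy illustration of the ranking idea described above (not the authors' implementation), one can score each candidate translation by its relatedness to the clue words over a relation graph. The relation graph, the words, and the 1/(1 + path length) relatedness function below are all illustrative stand-ins for the paper's WordNet relations:

```python
from collections import deque

def relatedness(a, b, relations):
    """Toy semantic relatedness: 1 / (1 + shortest path length) in a small
    relation graph; a stand-in for WordNet-based relatedness measures."""
    if a == b:
        return 1.0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, hops = frontier.popleft()
        for nxt in relations.get(node, ()):
            if nxt == b:
                return 1.0 / (2 + hops)   # path length = hops + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return 0.0

def rank_translations(candidates, clue_words, relations):
    """Rank candidate translations by total relatedness to the clue words
    (the predicate arguments of the source verb)."""
    return sorted(candidates,
                  key=lambda c: sum(relatedness(c, w, relations)
                                    for w in clue_words),
                  reverse=True)
```

The candidate with the strongest cumulative link to the sentence's arguments ranks first; ties fall back to input order.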

Target Word Selection for English-Korean Machine Translation System using Multiple Knowledge (다양한 지식을 사용한 영한 기계번역에서의 대역어 선택)

  • Lee, Ki-Young; Kim, Han-Woo
    • Journal of the Korea Society of Computer and Information / v.11 no.5 s.43 / pp.75-86 / 2006
  • Target word selection is one of the most important and difficult tasks in English-Korean machine translation, and it directly affects the translation accuracy of machine translation systems. In this paper, we present a new approach to selecting a Korean target word for an English noun with translation ambiguities, using multiple knowledge sources: verb frame patterns, sense vectors based on collocations, statistical Korean local-context information, and co-occurring POS information. Verb frame patterns constructed from a dictionary and a corpus play an important role in resolving the sparseness problem of collocation data. Sense vectors are sets of collocation data for the cases where an English word with target-selection ambiguities is translated to a specific Korean target word. Statistical Korean local-context information is N-gram information generated from a Korean corpus. The co-occurring POS information is a statistically significant POS clue that appears with the ambiguous word. Experiments showed promising results on diverse sentences from web documents.
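The combination of knowledge sources can be sketched as a weighted vote; the candidate words, per-source scores, and weights below are hypothetical, and the paper's actual scoring is richer than this linear mix:

```python
def combine_scores(candidates, knowledge_scores, weights):
    """Linearly combine per-candidate scores from several knowledge sources
    (e.g. verb frame patterns, sense vectors, local n-grams, POS clues)
    and return the best-scoring target word."""
    totals = {}
    for cand in candidates:
        totals[cand] = sum(weights[source] * scores.get(cand, 0.0)
                           for source, scores in knowledge_scores.items())
    return max(totals, key=totals.get)
```

A source that has no evidence for a candidate simply contributes zero, so sparse knowledge (e.g. rare collocations) degrades gracefully instead of blocking a decision.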


Practical Target Word Selection Using Collocation in English to Korean Machine Translation (영한번역 시스템에서 연어 사용에 의한 실용적인 대역어 선택)

  • 김성묵
    • Journal of Korea Society of Industrial Information Systems / v.5 no.2 / pp.56-61 / 2000
  • The quality of English-to-Korean machine translation depends on how well it handles target word selection for verbs, which carry enormous ambiguity. Verb sense disambiguation can be done using collocations, but constructing verb collocations requires considerable effort and expense, so existing methods should be examined from a practical viewpoint. This paper describes a practical method of target word selection using existing collocations and a semantic distance computed from minimal semantic features of nouns.
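The described strategy, an exact collocation lookup with a semantic-feature fallback, can be sketched as follows. All dictionaries, Korean target words, and feature sets here are made-up examples, and set overlap stands in for the paper's semantic distance:

```python
def select_target(verb, obj_noun, collocations, noun_features, sense_features):
    """Prefer a listed verb-object collocation; otherwise fall back to the
    translation whose semantic-feature set overlaps the object noun's
    features most (a crude stand-in for semantic distance)."""
    if (verb, obj_noun) in collocations:
        return collocations[(verb, obj_noun)]
    feats = noun_features.get(obj_noun, set())
    return max(sense_features[verb],
               key=lambda target: len(sense_features[verb][target] & feats))
```

The fallback is what keeps the method practical: only frequent verb-object pairs need hand-built collocations, while the long tail is covered by coarse noun features.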


Translation Disambiguation Based on 'Word-to-Sense and Sense-to-Word' Relationship (`단어-의미 의미-단어` 관계에 기반한 번역어 선택)

  • Lee Hyun-Ah
    • The KIPS Transactions: Part B / v.13B no.1 s.104 / pp.71-76 / 2006
  • To obtain a correctly translated sentence in a machine translation system, we must select target words that not only reflect the appropriate meaning of the source sentence but also yield a fluent sentence in the target language. This paper points out that a source-language word has various senses and each sense can be mapped into multiple target words, and proposes a new translation disambiguation method based on this 'word-to-sense and sense-to-word' relationship. In this method, target words are chosen through disambiguation of the source word's sense followed by selection of a target word. Most translation disambiguation methods are based on a 'word-to-word' relationship, meaning they translate a source word directly into a target word, so they require complicated knowledge sources that directly link source words to target words, such as bilingual aligned corpora, which are hard to obtain. By dividing the problem into two sub-problems, one for each language, knowledge for translation disambiguation can be automatically extracted from monolingual knowledge sources that are easy to obtain. In addition, the disambiguation results satisfy both fidelity and intelligibility, because the selected target words have the correct meaning and produce naturally composed target sentences.
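The two-step decomposition can be sketched as below. The sense inventories, cue words, and frequency table are invented for illustration; in the paper each step draws on monolingual resources for the respective language:

```python
def disambiguate(source_word, context, sense_cues, sense_to_targets, target_freq):
    """Step 1 (word-to-sense): pick the source-word sense whose cue words
    overlap the sentence context most.  Step 2 (sense-to-word): among that
    sense's target words, pick the most frequent in a target-language corpus."""
    senses = sense_cues[source_word]
    best_sense = max(senses, key=lambda s: len(set(senses[s]) & set(context)))
    return max(sense_to_targets[best_sense], key=lambda t: target_freq.get(t, 0))
```

Note that step 1 uses only source-language evidence (context overlap) and step 2 only target-language evidence (corpus frequency), which is exactly why no bilingual aligned corpus is needed.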

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop; Chang Jeong-Ho
    • The KIPS Transactions: Part B / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method for disambiguating target word selection in English-Korean machine translation that uses only a raw corpus, without additional human effort. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts, such as text passages. We construct linguistic semantic knowledge with the two techniques and use it for target word selection in English-Korean machine translation, utilizing grammatical relationships stored in a dictionary. We use the k-nearest neighbor (k-NN) learning algorithm to resolve the data sparseness problem in target word selection and estimate the distance between instances based on these models. In experiments, we use TREC AP news data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, we show the relationship between accuracy and two important factors, the dimensionality of the latent space and the k value of k-NN learning, using correlation calculations.
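The k-NN step over latent-space vectors can be sketched as below, assuming the context vectors have already been produced by LSA or PLSA; the 2-dimensional vectors and labels are toy stand-ins for real latent representations:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_select(context_vec, labeled_instances, k=3):
    """Pick the translation label by a similarity-weighted vote among the k
    training instances nearest (by cosine) to the context vector."""
    nearest = sorted(labeled_instances,
                     key=lambda inst: cosine(context_vec, inst[0]),
                     reverse=True)[:k]
    votes = {}
    for vec, label in nearest:
        votes[label] = votes.get(label, 0.0) + cosine(context_vec, vec)
    return max(votes, key=votes.get)
```

Because neighbors vote with their similarity rather than a flat count, a single very close instance can outweigh two distant ones, which helps when labeled instances are sparse.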

The Locus of the Word Frequency Effect in Speech Production: Evidence from the Picture-word Interference Task (말소리 산출에서 단어빈도효과의 위치 : 그림-단어간섭과제에서 나온 증거)

  • Koo, Min-Mo; Nam, Ki-Chun
    • MALSORI / no.62 / pp.51-68 / 2007
  • Two experiments were conducted to determine the exact locus of the frequency effect in speech production. Experiment 1 addressed the question of whether the word frequency effect arises at the stage of lemma selection. A picture-word interference task was performed to test the significance of interactions among the effects of target frequency, distractor frequency, and semantic relatedness. There were significant interactions between distractor frequency and semantic relatedness and between target and distractor frequency. Experiment 2 examined whether the word frequency effect is attributable to the lexeme level, which represents the phonological information of words; the methodological logic was the same as in Experiment 1. There was no significant interaction between distractor frequency and phonological relatedness. These results demonstrate that word frequency influences the processes involved in selecting the correct lemma corresponding to an activated lexical concept in speech production.


Feature selection for text data via topic modeling (토픽 모형을 이용한 텍스트 데이터의 단어 선택)

  • Woosol, Jang; Ye Eun, Kim; Won, Son
    • The Korean Journal of Applied Statistics / v.35 no.6 / pp.739-754 / 2022
  • Usually, text data consists of many variables, and some of them are closely correlated. Such multi-collinearity often results in inefficient or inaccurate statistical analysis. For supervised learning, one can select features by examining the relationship between target variables and explanatory variables. On the other hand, for unsupervised learning, since target variables are absent, one cannot use such a feature selection procedure as in supervised learning. In this study, we propose a word selection procedure that employs topic models to find latent topics. We substitute topics for the target variables and select terms which show high relevance for each topic. Applying the procedure to real data, we found that the proposed word selection procedure can give clear topic interpretation by removing high-frequency words prevalent in various topics. In addition, we observed that, by applying the selected variables to the classifiers such as naïve Bayes classifiers and support vector machines, the proposed feature selection procedure gives results comparable to those obtained by using class label information.
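The selection idea, keeping each topic's most relevant terms while dropping words that score highly across many topics, can be sketched in a few lines. This assumes the topic-word weights have already been estimated (e.g. by a fitted LDA model); the topic names, words, and weights below are purely illustrative:

```python
def select_words(topic_word_weights, top_k=2, max_topics=1):
    """Keep the top_k highest-weight words of each topic, then drop words
    that rank highly in more than max_topics topics -- i.e. high-frequency
    words prevalent across topics, which blur topic interpretation."""
    per_topic, counts = {}, {}
    for topic, weights in topic_word_weights.items():
        top = sorted(weights, key=weights.get, reverse=True)[:top_k]
        per_topic[topic] = top
        for word in top:
            counts[word] = counts.get(word, 0) + 1
    return {topic: [w for w in words if counts[w] <= max_topics]
            for topic, words in per_topic.items()}
```

Words shared by many topics behave like stopwords for interpretation purposes, so filtering them sharpens each topic's remaining term list.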

Exclusion of Non-similar Candidates using Positional Accuracy based on Levenstein Distance from N-best Recognition Results of Isolated Word Recognition (레벤스타인 거리에 기초한 위치 정확도를 이용한 고립 단어 인식 결과의 비유사 후보 단어 제외)

  • Yun, Young-Sun; Kang, Jeom-Ja
    • Phonetics and Speech Sciences / v.1 no.3 / pp.109-115 / 2009
  • Many isolated word recognition systems may generate non-similar words as recognition candidates because they use only acoustic information. In this paper, we investigate several techniques that can exclude non-similar words from the N-best candidate words by applying the Levenshtein distance measure. First, word distance methods based on phone and syllable distances are considered; these use the Levenshtein distance over the phones, or a double Levenshtein distance algorithm over the syllables, of the candidates. Next, word similarity approaches are presented that use the character position information of the word candidates. Each character's position is labeled as inserted, deleted, or correct after alignment between the source and target strings. Word similarities are obtained from the characters' positional probabilities, i.e., the frequency ratio of observations of the same character at that position. Experimental results show that the proposed methods are effective at removing non-similar words from the systems' N-best recognition candidates without loss of performance.
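The edit-distance core of these techniques is the standard Levenshtein algorithm; a minimal sketch follows, with a simple threshold filter over the N-best list standing in for the paper's more elaborate positional-probability scoring (the candidate words and threshold are illustrative):

```python
def levenshtein(src, tgt):
    """Classic dynamic-programming edit distance with unit costs for
    insertion, deletion, and substitution."""
    m, n = len(src), len(tgt)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp[m][n]

def filter_candidates(first_best, candidates, max_dist=2):
    """Drop N-best candidates farther than max_dist edits from the top
    hypothesis -- a simple form of non-similar-candidate exclusion."""
    return [c for c in candidates if levenshtein(first_best, c) <= max_dist]
```

Run over phone or syllable sequences instead of raw characters, the same table gives the phone- and syllable-level distances the abstract mentions.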


The Locus of the Word Frequency Effect in Speech Production (말소리 산출에서 단어빈도효과의 위치)

  • Koo, Min-Mo; Nam, Ki-Chun
    • Proceedings of the KSPS conference / 2006.11a / pp.99-108 / 2006
  • Three experiments were conducted to determine the exact locus of the frequency effect in speech production. In Experiment 1, a picture naming task was used to test whether the word frequency effect is due to the processes involved in lexical access. A robust word frequency effect of 31 ms was obtained. Experiment 2 addressed whether the word frequency effect originates at the level where a lemma is selected. To that end, using a picture-word interference task, the significance of interactions among the effects of target frequency, distractor frequency, and semantic relatedness was tested. The interaction between distractor frequency and semantic relatedness was significant, and the interaction between target and distractor frequency showed a significant tendency. The results of Experiment 2 also suggest that the mechanism underlying the word frequency effect is encoded as different resting activation levels of lemmas. Experiment 3 explored whether the word frequency effect is attributable to the lexeme level, where the phonological information of words is represented; the methodological logic was the same as in Experiment 2. No interaction was significant. In conclusion, the present study obtained evidence supporting two assumptions: (a) the locus of the word frequency effect lies in the processes involved in lemma selection, and (b) the mechanism for the word frequency effect is encoded as different resting activation levels of lemmas. To explain the word frequency effect obtained in this study, the core assumptions of current production models need to be modified.
