• Title/Summary/Keyword: weight of word

Ordering a Left-branching Language: Heaviness vs. Givenness

  • Choi, Hye-Won
    • Language and Information / v.13 no.1 / pp.39-56 / 2009
  • This paper investigates ordering alternation phenomena in Korean using dative construction data from the Sejong Corpus of Modern Korean (Kim, 2000). The paper first shows that syntactic weight and information structure are distinct and independent factors that influence word order in Korean. Moreover, it reveals that heaviness and givenness compete with each other and exert diverging effects on word order, in contrast to the converging effects of these factors observed in the word order of right-branching languages such as English. The typological variation in the syntactic weight effect poses interesting theoretical and empirical questions, which are discussed in relation to processing efficiency in ordering.

Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings

  • Al-Sabahi, Kamal; Zuping, Zhang; Kang, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.1 / pp.254-276 / 2019
  • Since the amount of information on the internet is growing rapidly, it is not easy for a user to find information relevant to his or her query. To tackle this issue, researchers are paying much attention to document summarization. The key point in any successful document summarizer is a good document representation. Traditional approaches based on word overlap mostly fail to produce that kind of representation. Word embeddings have shown good performance, allowing words to match on a semantic level. Naively concatenating word embeddings makes common words dominant, which in turn diminishes the representation quality. In this paper, we employ word embeddings to improve the weighting schemes for calculating the Latent Semantic Analysis input matrix. Two embedding-based weighting schemes are proposed and then combined to calculate the values of this matrix. They are modified versions of the augment weight and the entropy frequency that combine the strengths of traditional weighting schemes and word embeddings. The proposed approach is evaluated on three English datasets: DUC 2002, DUC 2004, and Multilingual 2015 Single-document Summarization. Experimental results on the three datasets show that the proposed model achieves performance competitive with the state of the art, leading to the conclusion that it provides a better document representation and, as a result, a better document summary.
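
The abstract above describes weighting the cells of the LSA input matrix with embedding-derived values before the usual SVD step. A rough, non-authoritative sketch of that pipeline follows; the per-cell formula below (term frequency scaled by cosine similarity to the sentence centroid) is an illustrative stand-in for the paper's modified augment-weight and entropy-frequency schemes, and `embeddings` is an assumed external word-vector lookup.

```python
import numpy as np

def lsa_summarize(sentences, embeddings, k=2):
    """Extractive summary via SVD of an embedding-weighted term-sentence matrix.
    `sentences` is a list of strings; `embeddings` maps word -> vector
    (an assumed external resource such as pretrained word vectors)."""
    tokenized = [s.lower().split() for s in sentences]
    vocab = sorted({w for toks in tokenized for w in toks})
    row = {w: i for i, w in enumerate(vocab)}

    A = np.zeros((len(vocab), len(sentences)))
    for j, toks in enumerate(tokenized):
        # Sentence centroid in embedding space (None if no word is known).
        known = [np.asarray(embeddings[w]) for w in toks if w in embeddings]
        centroid = np.mean(known, axis=0) if known else None
        for w in set(toks):
            tf = toks.count(w)
            sim = 0.0
            if centroid is not None and w in embeddings:
                v = np.asarray(embeddings[w])
                sim = float(v @ centroid /
                            (np.linalg.norm(v) * np.linalg.norm(centroid) + 1e-9))
            # Illustrative cell weight: term frequency scaled by how close
            # the word is to the sentence's semantic centroid.
            A[row[w], j] = tf * (1.0 + sim)

    # Score each sentence by its weighted length in the latent space,
    # then return the k highest-scoring sentences in document order.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.sqrt((np.square(s)[:, None] * np.square(Vt)).sum(axis=0))
    top = sorted(np.argsort(scores)[::-1][:k])
    return [sentences[i] for i in top]
```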

Optimally Weighted Cepstral Distance Measure for Speech Recognition (음성 인식을 위한 최적 가중 켑스트랄 거리 측정 방법)

  • 김원구
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.133-137 / 1994
  • In this paper, a method for designing an optimal weight function for the weighted cepstral distance measure is proposed. A conventional weight function, or cepstral lifter, is obtained experimentally depending on the spectral components to be emphasized. The proposed method minimizes the error between word reference patterns and the training data. To compare the proposed optimal weight function with conventional functions, speech recognition systems based on Dynamic Time Warping and Hidden Markov Models were constructed to conduct speaker-independent isolated word recognition experiments. Results show that the proposed method gives better performance than conventional weight functions.
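
A weighted cepstral distance of this kind is a per-coefficient weighted Euclidean distance. The minimal sketch below uses the conventional raised-sine lifter as the weight function, since the paper's optimal, error-minimizing weights are learned from word reference patterns and training data that are not reproduced here.

```python
import numpy as np

def raised_sine_lifter(order, L=22):
    """Conventional cepstral lifter w[n] = 1 + (L/2) * sin(pi * n / L);
    a common heuristic weight, standing in for the paper's trained weights."""
    n = np.arange(1, order + 1)
    return 1.0 + (L / 2.0) * np.sin(np.pi * n / L)

def weighted_cepstral_distance(c_ref, c_test, w):
    """Weighted Euclidean distance between two cepstral vectors:
    d = sum_n (w[n] * (c_ref[n] - c_test[n]))**2."""
    diff = w * (np.asarray(c_ref) - np.asarray(c_test))
    return float(diff @ diff)
```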

Semantic Similarity Measures Between Words within a Document using WordNet (워드넷을 이용한 문서내에서 단어 사이의 의미적 유사도 측정)

  • Kang, SeokHoon; Park, JongMin
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.11 / pp.7718-7728 / 2015
  • Semantic similarity between words can be applied in many fields, including computational linguistics, artificial intelligence, and information retrieval. In this paper, we present a weighted method for measuring semantic similarity between words in a document. The method uses edge distance and depth in WordNet and calculates the similarity on the basis of document information, namely word term frequencies (TF) and word concept frequencies (CF). Each word's weight value is calculated from its TF and CF in the document. The measure combines the edge distance between words, the depth of their subsumer, and the word weights in the document. We compared our scheme with other methods through experiments, and the proposed method outperforms the other similarity measures. Methods based only on the simple shortest distance or depth have difficulty representing or merging document information; by also considering the shortest distance, the depth, and word information in the document, the proposed method improves performance.
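
A minimal sketch of combining WordNet edge distance, subsumer depth, and a document-frequency weight, using NLTK's WordNet interface; the exact combination formula and the CF component are not reproduced here, so the scaling below is illustrative only.

```python
import math
from collections import Counter
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def path_depth_similarity(w1, w2, alpha=0.2, beta=0.6):
    """Similarity from WordNet edge distance and subsumer depth;
    an illustrative combination, not the paper's exact formula."""
    best = 0.0
    for s1 in wn.synsets(w1):
        for s2 in wn.synsets(w2):
            dist = s1.shortest_path_distance(s2)
            if dist is None:
                continue
            subsumers = s1.lowest_common_hypernyms(s2)
            depth = max((h.max_depth() for h in subsumers), default=0)
            f_len = math.exp(-alpha * dist)   # decays with edge distance
            f_dep = math.tanh(beta * depth)   # grows with subsumer depth
            best = max(best, f_len * f_dep)
    return best

def weighted_similarity(w1, w2, doc_tokens):
    """Scale the structural similarity by document term frequencies; this TF
    weighting stands in for the paper's TF/CF-based word weights."""
    tf = Counter(doc_tokens)
    weight = math.log1p(tf[w1]) * math.log1p(tf[w2])
    return weight * path_depth_similarity(w1, w2)
```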

Korean Document Classification Using Extended Vector Space Model (확장된 벡터 공간 모델을 이용한 한국어 문서 분류 방안)

  • Lee, Samuel Sang-Kon
    • The KIPS Transactions: Part B / v.18B no.2 / pp.93-108 / 2011
  • We propose an extended vector space model that uses ambiguous and disambiguating words to improve the results of Korean document classification. In this paper we study how to enhance the precision of the vector space model and propose a new axis that represents a weight value; conventional classification methods without such a weight value have problems in vector comparison. After calculating the mutual information between a term and its classification field, we define a word that shares the axis of the weight value as an ambiguous word, and a word that resolves this ambiguity as a disambiguating word. We determine the strength of a disambiguating word among the words that co-occur with an ambiguous word in the same document. Finally, we propose a new classification method based on extending the vector dimensions with ambiguous and disambiguating words.
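
The ambiguous-word detection step relies on mutual information between a term and its classification field. Below is a minimal sketch of that statistic computed over labeled, tokenized documents; the thresholding that flags ambiguous words and the extra weight axis itself are only described schematically in the abstract, so they are omitted.

```python
import math
from collections import Counter

def term_class_mutual_information(docs):
    """Pointwise mutual information MI(t, c) = log( P(t, c) / (P(t) * P(c)) )
    for each (term, class) pair, over a list of (tokens, label) pairs."""
    n_docs = len(docs)
    term_count, class_count, joint_count = Counter(), Counter(), Counter()
    for tokens, label in docs:
        class_count[label] += 1
        for t in set(tokens):
            term_count[t] += 1
            joint_count[(t, label)] += 1

    mi = {}
    for (t, c), n_tc in joint_count.items():
        p_tc = n_tc / n_docs
        p_t = term_count[t] / n_docs
        p_c = class_count[c] / n_docs
        mi[(t, c)] = math.log(p_tc / (p_t * p_c))
    return mi
```

A term whose mutual information is spread across several classification fields would be a natural candidate for the "ambiguous word" role described above.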

Word Verification using Similar Word Information and State-Weights of HMM using Genetic Algorithm (유사단어 정보와 유전자 알고리듬을 이용한 HMM의 상태하중값을 사용한 단어의 검증)

  • Kim, Gwang-Tae; Baek, Chang-Heum; Hong, Jae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.1 / pp.97-103 / 2001
  • The Hidden Markov Model (HMM) is the most widely used method in speech recognition. In general, HMM parameters are trained to have maximum likelihood (ML) for the training data. Although the ML method gives good performance, it does not take discrimination against other words into account. To address this problem, a word verification method is proposed that re-recognizes the recognized word and its most similar word using a discriminative function of the two words. To find the similar word, the probability of the other words given the HMM is calculated, and the word showing the highest probability is selected as the similar word of the model. To achieve discrimination between words, a weight for each state is added to the HMM parameters; the weight is calculated by a genetic algorithm. The verifier improves discrimination between words and reduces errors caused by similar words. As a result of verification, the total error is reduced by about 22%.
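
The core quantity here is an HMM score in which each state's emission log-likelihood is scaled by a trained per-state weight. A minimal sketch of such a state-weighted Viterbi score follows; the genetic-algorithm search for the weights is not shown, and the weights are simply assumed given.

```python
import numpy as np

def weighted_viterbi_score(log_pi, log_A, log_B, state_weights, obs):
    """Best-path log score of a discrete HMM in which each state's emission
    log-likelihood is scaled by a per-state weight (the quantity the paper
    tunes with a genetic algorithm).

    log_pi[i]: log initial prob, log_A[i, j]: log transition prob,
    log_B[i, k]: log prob of emitting symbol k in state i,
    state_weights[i]: weight of state i, obs: sequence of symbol indices."""
    delta = log_pi + state_weights * log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + state_weights * log_B[:, o]
    return float(np.max(delta))
```

Verification would then compare this weighted score under the recognized word's model against the score under its most confusable (similar) word's model.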

Ranking Translation Word Selection Using a Bilingual Dictionary and WordNet

  • Kim, Kweon-Yang; Park, Se-Young
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.1 / pp.124-129 / 2006
  • This parer presents a method of ranking translation word selection for Korean verbs based on lexical knowledge contained in a bilingual Korean-English dictionary and WordNet that are easily obtainable knowledge resources. We focus on deciding which translation of the target word is the most appropriate using the measure of semantic relatedness through the 45 extended relations between possible translations of target word and some indicative clue words that play a role of predicate-arguments in source language text. In order to reduce the weight of application of possibly unwanted senses, we rank the possible word senses for each translation word by measuring semantic similarity between the translation word and its near synonyms. We report an average accuracy of $51\%$ with ten Korean ambiguous verbs. The evaluation suggests that our approach outperforms the default baseline performance and previous works.

A Method for Measuring Similarity Measure of Thesaurus Transformation Documents using DBSCAN (DBSCAN을 활용한 유의어 변환 문서 유사도 측정 방법)

  • Kim, Byeongsik; Shin, Juhyun
    • Journal of Korea Multimedia Society / v.21 no.9 / pp.1035-1043 / 2018
  • There are cases where the core content of another person's work is presented as the writer's own thoughts, with the wording altered and the source omitted. The plagiarism test of the free CopyKiller service checks for matches of six or more consecutive words; however, a six-word match criterion is not sufficient to detect plagiarism when words have been replaced with synonyms. Therefore, in this paper, we construct word clusters using the DBSCAN algorithm to find synonyms, convert the words in each cluster into a representative synonym, and build L-R tables through L-R parsing. We then propose a method for determining document similarity by applying weights to the thesaurus and to each paragraph of the document.
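
A minimal sketch of the synonym-normalization step: word vectors are clustered with DBSCAN and every word is mapped to a representative of its cluster before documents are compared. The embedding source, the `eps` value, and the choice of representative are assumptions; the L-R table construction and paragraph weighting are not shown.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def build_synonym_map(words, embeddings, eps=0.3):
    """Cluster word vectors with DBSCAN (cosine distance) and map every word
    to one representative per cluster, so synonym substitutions collapse to
    the same token before document comparison. `embeddings` (word -> vector)
    is an assumed external resource."""
    vecs = np.array([embeddings[w] for w in words])
    labels = DBSCAN(eps=eps, min_samples=2, metric="cosine").fit_predict(vecs)

    representative, mapping = {}, {}
    for word, label in zip(words, labels):
        if label == -1:                       # noise point: keep the word itself
            mapping[word] = word
        else:
            representative.setdefault(label, word)
            mapping[word] = representative[label]
    return mapping
```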

Paragraph Retrieval Model for Machine Reading Comprehension using IN-OUT Vector of Word2Vec (Word2Vec의 IN-OUT Vector를 이용한 기계독해용 단락 검색 모델)

  • Kim, Sihyung; Park, Seongsik; Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2019.10a / pp.326-329 / 2019
  • As machine reading comprehension models have recently shown excellent performance, the need for a retrieval model that finds the paragraphs required to put machine reading comprehension into practical use has become more pronounced. However, existing retrieval models compute only the lexical overlap or similarity between a query and a paragraph, and therefore fail to retrieve paragraphs that match the context of the query words needed for machine reading. To solve this problem, this paper proposes a paragraph retrieval model that uses the IN Weight Matrix of Word2Vec, corresponding to the vectors of input words, and the OUT Weight Matrix, corresponding to the vectors of output words. The proposed method showed better performance than existing retrieval models on Precision@k, which measures retrieval accuracy.
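
A minimal sketch of the IN-OUT scoring idea: query words are looked up in the input (IN) weight matrix and paragraph words in the output (OUT) weight matrix, and their dot products are accumulated. The gensim attribute names used below (`model.wv`, `model.syn1neg`, `model.wv.key_to_index`) are assumptions about one common Word2Vec implementation, not something stated in the paper.

```python
def in_out_score(query_tokens, paragraph_tokens, model):
    """Average dot product between query-word IN vectors and paragraph-word
    OUT vectors.

    Assumes a gensim Word2Vec model trained with negative sampling, e.g.
        from gensim.models import Word2Vec
        model = Word2Vec(tokenized_corpus, vector_size=100, sg=1, negative=5)
    where `model.wv` holds the IN weight matrix and `model.syn1neg` the OUT
    weight matrix (implementation-specific attribute names, assumed here)."""
    score, pairs = 0.0, 0
    for q in query_tokens:
        if q not in model.wv:
            continue
        q_in = model.wv[q]                                    # IN vector
        for p in paragraph_tokens:
            if p not in model.wv:
                continue
            p_out = model.syn1neg[model.wv.key_to_index[p]]   # OUT vector
            score += float(q_in @ p_out)
            pairs += 1
    return score / pairs if pairs else 0.0
```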

An Index System using Restrictive Distance (거리 제한을 이용한 색인 시스템)

  • Park, Chan-Ee; Kim, Sang-Bok
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.273-282 / 2006
  • In this paper, we propose an indexing method that introduces the concept of distance between words into word weighting. The method targets query words and document index terms consisting of compound nouns, two or more adjacent nouns, or noun phrases: the greater the distance between these nouns, the lower the rate at which they are selected as index terms. Index word candidates are first chosen by an existing weight-assignment method, and final index terms are then selected from candidates whose occurrences lie within three sentences of each other. Applying this method to 100 documents drawn from newspapers, scientific papers, and web documents, the resulting accuracy was 92.03% for newspapers, 95% for scientific papers, and 73.33% for web documents.
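
A minimal sketch of the distance restriction: candidate index word pairs are kept (and weighted) only when their occurrences fall within three sentences of each other, with the weight decaying as the distance grows. The base weighting function and the linear decay are assumptions standing in for the existing weight-assignment method the abstract mentions.

```python
def distance_weighted_candidates(sentences, base_weight, max_sent_dist=3):
    """Keep candidate index word pairs only when their occurrences lie within
    `max_sent_dist` sentences of each other, scaling the base weight down as
    the distance grows. `sentences` is a list of token lists and
    `base_weight(word)` is the assumed existing weighting function."""
    positions = {}
    for idx, tokens in enumerate(sentences):
        for tok in tokens:
            positions.setdefault(tok, []).append(idx)

    selected = {}
    words = list(positions)
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            dist = min(abs(a - b) for a in positions[w1] for b in positions[w2])
            if dist <= max_sent_dist:
                # Weight decays linearly with the sentence distance (assumed).
                decay = 1 - dist / (max_sent_dist + 1)
                selected[(w1, w2)] = (base_weight(w1) + base_weight(w2)) * decay
    return selected
```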
