• Title/Summary/Keyword: similarity calculation


SSF: Sentence Similar Function Based on word2vector Similar Elements

  • Yuan, Xinpan; Wang, Songlin; Wan, Lanjun; Zhang, Chengyuan
    • Journal of Information Processing Systems / v.15 no.6 / pp.1503-1516 / 2019
  • In this paper, to improve the accuracy of long-sentence similarity calculation, we propose a sentence similarity calculation method based on a system similarity function. The algorithm uses word2vector as the system elements to calculate sentence similarity. The higher accuracy of our algorithm derives from two characteristics: one is the negative effect of the penalty term, and the other is that the sentence similar function (SSF) based on word2vector similar elements does not satisfy the exchange rule, i.e., it is not symmetric (a sketch of this matching idea follows below). In later studies, we found that the time complexity of our algorithm depends on the process of calculating similar elements, so we build an index of potentially similar elements during the word-vector training process. Finally, the experimental results show that our algorithm achieves higher accuracy than the word mover's distance (WMD) and has the shortest query time of the three SSF calculation methods.
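
A minimal sketch of the general idea described above, assuming gensim-style word vectors (a dict-like mapping of word to vector); the function name `ssf`, the threshold, and the penalty form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def ssf(query, candidate, wv, threshold=0.5, penalty=0.1):
    """Illustrative asymmetric sentence similarity.

    For each word in `query`, find its best-matching word in `candidate`
    (its "similar element"); words with no good match contribute a
    penalty. Because matching runs from `query` to `candidate` only,
    the score is not symmetric: ssf(a, b) != ssf(b, a) in general.
    """
    total = 0.0
    for w in query:
        if w not in wv:
            continue
        sims = [cos(wv[w], wv[c]) for c in candidate if c in wv]
        best = max(sims, default=0.0)
        if best >= threshold:
            total += best
        else:
            total -= penalty   # negative effect of the penalty term
    return total / max(len(query), 1)
```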

Research on Keyword-Overlap Similarity Algorithm Optimization in Short English Text Based on Lexical Chunk Theory

  • Na Li; Cheng Li; Honglie Zhang
    • Journal of Information Processing Systems / v.19 no.5 / pp.631-640 / 2023
  • Short-text similarity calculation is one of the hot issues in natural language processing research. Conventional keyword-overlap similarity algorithms consider only lexical item information and neglect the effect of word order, and optimized variants that incorporate word order leave the weights hard to determine. Starting from the keyword-overlap similarity algorithm, this paper proposes a short English text similarity algorithm based on lexical chunk theory (LC-SETSA), which introduces lexical chunk theory from cognitive psychology into short English text similarity calculation for the first time. Lexical chunks are used to segment short English texts; the segmentation preserves the semantic connotation and the fixed word order of the chunks, and the overlap similarity of the lexical chunks is then calculated accordingly (a sketch follows below). Finally, comparative experiments show that the proposed algorithm is feasible, stable, and effective to a large extent.
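
A minimal sketch of chunk-based overlap similarity, assuming a pre-built lexicon of known multi-word chunks; the greedy longest-match segmentation and Dice-style overlap are illustrative stand-ins for the paper's procedure:

```python
def chunk_segment(tokens, chunk_lexicon, max_len=4):
    """Greedy longest-match segmentation into lexical chunks.

    Anything not found in `chunk_lexicon` falls back to a single word,
    so the fixed word order inside each matched chunk is preserved.
    """
    chunks, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n])
            if n == 1 or cand in chunk_lexicon:
                chunks.append(cand)
                i += n
                break
    return chunks

def chunk_overlap_similarity(a, b, chunk_lexicon):
    """Dice-style overlap computed on chunk sets instead of raw keywords."""
    ca = set(chunk_segment(a, chunk_lexicon))
    cb = set(chunk_segment(b, chunk_lexicon))
    if not ca or not cb:
        return 0.0
    return 2 * len(ca & cb) / (len(ca) + len(cb))
```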

A Study on the Relationship between Weighted Value and Qualitative Standard in Substantial Similarity

  • Kim, Si-Yeol
    • Journal of Software Assessment and Valuation / v.15 no.1 / pp.25-35 / 2019
  • In Korea, the calculation of quantitative similarity is commonly used to gauge the substantial similarity of computer programs. Substantial similarity should be assessed by considering both the quantity and the quality of the areas that show similarity, but in practice, qualitative aspects are reflected by multiplying a weighted value into the calculation of quantitative similarity (illustrated in the sketch below). Such a practice cannot be deemed adequate given the fundamental character of the judgment on substantial similarity, which holds that the quantitative and qualitative aspects of similar areas should be considered on an equal footing. This study therefore points out the issue with the use of weighted values and seeks appropriate ways to take qualitative aspects into account when assessing the substantial similarity of computer programs.
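
A hypothetical illustration of the weighting practice the paper critiques; the input structure and weights are assumptions for exposition, not a legal or standardized formula:

```python
def weighted_quantitative_similarity(blocks):
    """Weighted quantitative similarity over program units.

    `blocks` is a list of (similar_lines, total_lines, weight) tuples,
    one per code unit; qualitatively important units receive weight > 1,
    so a small but critical module can dominate the overall score. The
    paper questions whether folding quality into the quantity this way
    is adequate.
    """
    num = sum(sim * w for sim, total, w in blocks)
    den = sum(total * w for sim, total, w in blocks)
    return num / den if den else 0.0
```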

Improving the Performance of Document Similarity by using GPU Parallelism

  • Park, Il-Nam; Bae, Byung-Gurl; Im, Eun-Jin; Kang, Seung-Shik
    • The KIPS Transactions: Part B / v.19B no.4 / pp.243-248 / 2012
  • In information retrieval systems such as vector-model implementations and document clustering, document similarity calculation accounts for a large part of the overall performance of the system. In this paper, GPU parallelism is explored to enhance the processing speed of document similarity calculation in a CUDA framework. The proposed method computes similarity almost 15 times faster than a typical CPU-based framework, and 5.2 and 3.4 times faster than methods using CUBLAS and Thrust, respectively. (An array-level sketch of the underlying kernel follows below.)
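
A minimal array-level sketch of the dominant kernel, assuming CuPy is available for the GPU path (the paper works at the CUDA level; this sketch only shows why the workload parallelizes well):

```python
import numpy as np
try:
    import cupy as xp          # GPU path (assumes CuPy is installed)
except ImportError:
    xp = np                    # CPU fallback keeps the sketch runnable

def pairwise_cosine(docs):
    """All-pairs cosine similarity over row-vector documents.

    The dominant cost is one dense matrix product, which is exactly the
    kind of kernel that maps well onto GPU parallelism.
    """
    m = xp.asarray(docs, dtype=xp.float32)
    norms = xp.linalg.norm(m, axis=1, keepdims=True)
    m = m / xp.maximum(norms, 1e-12)        # L2-normalize rows
    return m @ m.T                          # cosine = normalized dot product
```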

An Incremental Similarity Computation Method in Agglomerative Hierarchical Clustering

  • Jung, Sung-young; Kim, Taek-soo
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.7 / pp.579-583 / 2001
  • In data clustering in high-dimensional spaces, one of the difficulties is the time-consuming computation of vector similarities. It becomes worse for the agglomerative algorithm with the group-average link and mean centroid method, because the cluster similarity must be recomputed whenever the cluster center moves after a merge step. As a solution to this problem, we present an incremental method of similarity computation that substitutes scalar calculations for the time-consuming vector similarity calculation, for several measures such as the squared distance, inner product, cosine, and minimum variance (a sketch of the bookkeeping idea follows below). Experimental results show that it makes clustering significantly faster for very high-dimensional data.
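
A minimal sketch of one such incremental scheme, loosely following the description above rather than the paper's exact formulas: cache pairwise inner products of cluster sum vectors, so a merge updates them with scalar additions and centroid distances never require re-touching the vectors.

```python
import numpy as np

class IncrementalCentroidSim:
    """Incremental centroid-distance bookkeeping for agglomerative
    clustering: g[i, j] caches <sum_i, sum_j>, so merging clusters only
    adds rows/columns of scalars instead of recomputing vector
    similarities.
    """
    def __init__(self, points):
        self.n = [1] * len(points)      # cluster sizes
        s = np.asarray(points, dtype=float)
        self.g = s @ s.T                # g[i, j] = <sum_i, sum_j>

    def sq_centroid_dist(self, i, j):
        """||c_i - c_j||^2 from cached inner products (scalars only)."""
        g, n = self.g, self.n
        return g[i, i] / n[i]**2 + g[j, j] / n[j]**2 - 2 * g[i, j] / (n[i] * n[j])

    def merge(self, a, b):
        """Merge cluster b into a: sum vectors add, so inner products add."""
        self.g[a, :] += self.g[b, :]
        self.g[:, a] += self.g[:, b]
        self.n[a] += self.n[b]
        self.n[b] = 0                   # mark b as dead; do not query it again
```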


Collaborative Filtering Algorithm Based on User-Item Attribute Preference

  • Ji, JiaQi; Chung, Yeongjee
    • Journal of Information and Communication Convergence Engineering / v.17 no.2 / pp.135-141 / 2019
  • Collaborative filtering algorithms often encounter data sparsity issues. To overcome this issue, auxiliary information about relevant items is analyzed and an item attribute matrix is derived. In this study, we combine the user-item attribute preference with the traditional similarity calculation method to develop an improved similarity calculation approach, using weights to control the importance of the two components (sketched below). A collaborative filtering algorithm based on user-item attribute preference is proposed. The experimental results show that the recommender system performs best when the weight of the traditional similarity equals that of the user-item attribute preference similarity. Although the rating matrix is sparse, better recommendation results can be obtained by adding a suitable proportion of user-item attribute preference similarity. Moreover, the mean absolute error of the proposed approach is lower than that of two traditional collaborative filtering algorithms.
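
A minimal sketch of the weighted blend, assuming a ratings matrix (users x items) and an item attribute matrix (items x attributes); the preference construction is an illustrative choice, not the paper's exact derivation:

```python
import numpy as np

def attr_preference(ratings, item_attrs):
    """User attribute preference: rating-weighted average of the
    attribute vectors of the items each user rated, L2-normalized so
    that a dot product between rows is a cosine similarity."""
    pref = ratings @ item_attrs                        # users x attributes
    norm = np.linalg.norm(pref, axis=1, keepdims=True)
    return pref / np.maximum(norm, 1e-12)

def hybrid_user_similarity(rating_sim, attr_pref, alpha=0.5):
    """Blend traditional rating similarity with attribute-preference
    similarity; the paper reports the best results at equal weights
    (alpha = 0.5)."""
    attr_sim = attr_pref @ attr_pref.T                 # cosine (rows normalized)
    return alpha * rating_sim + (1 - alpha) * attr_sim
```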

Grouping DNA sequences with similarity measure and application

  • Lee, Sanghyuk
    • Journal of the Korea Convergence Society / v.4 no.3 / pp.35-41 / 2013
  • The problem of grouping DNA sequences by similarity is studied. The similarity measure and the distance measure show complementary characteristics: a distance measure can be obtained by complementing a similarity measure, and vice versa (see the sketch below). A similarity measure is derived and proved. The usefulness of the proposed similarity measure is shown by applying it to the grouping of 25 cockroach DNA sequences. Based on the calculated DNA similarities, the 25 cockroaches are clustered into four groups, and the results are compared with the previous neighbor-joining method.
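
A minimal sketch of the complement relation and a simple threshold grouping; the single-link grouping rule is an illustrative stand-in for the paper's procedure:

```python
def similarity_from_distance(d, d_max):
    """Complementary relation between distance and similarity: a
    normalized distance yields a similarity, and vice versa."""
    return 1.0 - d / d_max

def group_by_similarity(names, sim, threshold):
    """Greedy single-link grouping: a sequence joins a group if it is
    similar enough to any existing member, else starts a new group."""
    groups = []
    for i, _ in enumerate(names):
        for g in groups:
            if any(sim[i][j] >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return [[names[j] for j in g] for g in groups]
```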

A Study on Influence of Stroke Element Properties to find Hangul Typeface Similarity

  • Park, Dong-Yeon; Jeon, Ja-Yeon; Lim, Seo-Young; Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1552-1564 / 2020
  • As fonts of various styles have come into use, problems have arisen such as output errors due to uninstalled fonts and difficulty in font recognition. To solve these problems, research on font recognition and recommendation has been actively conducted, but research on Hangul fonts remains at a basic level. Therefore, in order to automate the comparison of Hangul font similarity in the future, we analyze the influence of each stroke-element property. First, we select seven representative properties based on Hangul stroke shape elements. Second, we design a calculation model to compare similarity between fonts (a sketch follows below). Third, we analyze the effect of each stroke element through the cosine similarity between the users' evaluations and the results of the model. As a result, there was no significant difference in the individual influence of each representative property, and a more accurate similarity comparison was possible when many representative properties were used together.
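
A minimal sketch of comparing fonts as stroke-property vectors, assuming each font is reduced to a fixed-length numeric vector (e.g. seven representative properties); the optional weighting is illustrative of how one property's influence could be emphasized:

```python
import numpy as np

def font_similarity(props_a, props_b, weights=None):
    """Cosine similarity between two fonts described by stroke-element
    property vectors. `weights` lets individual properties be
    emphasized, although the paper found no single property dominated."""
    a = np.asarray(props_a, dtype=float)
    b = np.asarray(props_b, dtype=float)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))
        a, b = a * w, b * w                 # weighted cosine via sqrt trick
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```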

Research on the Classification Model of Similarity Malware using Fuzzy Hash

  • Park, Changwook; Chung, Hyunji; Seo, Kwangseok; Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.6 / pp.1325-1336 / 2012
  • In the past, about 10 different kinds of malicious code were found per day on average. The number of malicious codes found has since increased rapidly, to over 55,000 during the last 10 years. Most of them, however, are not new kinds of malicious code but variants of existing ones: new functions are added to existing malicious code, or the code is modified to evade anti-virus detection. To deal with so many malicious codes, including new ones and variants of existing ones, we need to compare new samples with past malicious codes by similarity and classify them accordingly. Former similarity calculation methods compare external factors (IPs, URLs, APIs, strings, etc.) or work at the source-code level. These methods take increasing time as the number of malicious codes and comparable factors grows, which has led to the use of fuzzy hashing to reduce the amount of calculation. Existing fuzzy hashing, however, has some limitations that cause problems for the similarity calculation. Therefore, this paper suggests a new comparison method for malicious codes that improves the performance of similarity calculation using fuzzy hashing, together with a classification method employing the new comparison (a generic fuzzy-hash grouping sketch follows below).
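
A minimal sketch of generic fuzzy-hash grouping, assuming the `ssdeep` Python bindings are installed; the threshold and the first-member comparison rule are illustrative assumptions, not the paper's improved comparison method:

```python
import ssdeep

def classify(samples, threshold=60):
    """Group samples whose fuzzy hashes compare above a threshold.

    `samples` maps sample name -> raw bytes; ssdeep.compare returns a
    0-100 match score between two fuzzy hashes.
    """
    hashes = {name: ssdeep.hash(data) for name, data in samples.items()}
    families = []                       # list of lists of sample names
    for name, h in hashes.items():
        for fam in families:
            if ssdeep.compare(h, hashes[fam[0]]) >= threshold:
                fam.append(name)        # variant of an existing family
                break
        else:
            families.append([name])     # start a new family
    return families
```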

A Study on the CBR Pattern using Similarity and the Euclidean Calculation Pattern

  • Yun, Jong-Chan; Kim, Hak-Chul; Kim, Jong-Jin; Youn, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.4 / pp.875-885 / 2010
    • 2010
  • CBR (Case-Based Reasoning) is a technique to infer the relationships between existing data and case data, and the method to calculate similarity and Euclidean distance is mostly frequently being used. However, since those methods compare all the existing and case data, it also has a demerit that it takes much time for data search and filtering. Therefore, to solve this problem, various researches have been conducted. This paper suggests the method of SE(Speed Euclidean-distance) calculation that utilizes the patterns discovered in the existing process of computing similarity and Euclidean distance. Because SE calculation applies the patterns and weight found during inputting new cases and enables fast data extraction and short operation time, it can enhance computing speed for temporal or spatial restrictions and eliminate unnecessary computing operation. Through this experiment, it has been found that the proposed method improves performance in various computer environments or processing rate more efficiently than the existing method that extracts data using similarity or Euclidean method does.