• Title/Summary/Keyword: Node Similarity

A Korean Text Summarization System Using Aggregate Similarity (도합유사도를 이용한 한국어 문서요약 시스템)

  • 김재훈;김준홍
    • Korean Journal of Cognitive Science
    • /
    • v.12 no.1_2
    • /
    • pp.35-42
    • /
    • 2001
  • In this paper, a document is represented as a weighted graph called a text relationship map. In the graph, a node represents the vector of nouns in a sentence, edges completely connect the nodes, and the weight on an edge is the value of the similarity between the two nodes it joins. The similarity is based on the word overlap between the corresponding nodes. The importance of a node, called the aggregate similarity in this paper, is defined as the sum of the weights on the links connecting it to the other nodes on the map. We present a Korean text summarization system using the aggregate similarity. To evaluate our system, we used two test collections: one (PAPER-InCon) consists of 100 papers in the field of computer science; the other (NEWS) is composed of 105 newspaper articles and was built by KORDIC. Under a compression rate of 20%, we achieved a recall of 46.6% (PAPER-InCon) and 30.5% (NEWS) and a precision of 76.9% (PAPER-InCon) and 42.3% (NEWS). (A short illustrative sketch of the aggregate-similarity scoring follows this entry.)

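The aggregate-similarity scoring described above reduces to a simple computation: score each sentence by the sum of its pairwise similarities to every other sentence, then keep the top fraction. A minimal Python sketch is given below; the cosine measure over noun counts, the function names, and the toy sentences are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Word-overlap similarity between two noun-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def summarize(sentences: list[list[str]], rate: float = 0.2) -> list[int]:
    """Return indices of the top `rate` fraction of sentences ranked by
    aggregate similarity (sum of edge weights to all other sentences)."""
    vectors = [Counter(nouns) for nouns in sentences]
    scores = [sum(cosine(v, w) for j, w in enumerate(vectors) if j != i)
              for i, v in enumerate(vectors)]
    k = max(1, round(len(sentences) * rate))
    return sorted(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])

# Toy usage: each sentence is given as its list of extracted nouns.
sents = [["graph", "node", "similarity"], ["node", "weight", "edge"], ["cat", "mat"]]
print(summarize(sents, rate=0.5))
```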

Interpretability Comparison of Popular Decision Tree Algorithms (대표적인 의사결정나무 알고리즘의 해석력 비교)

  • Hong, Jung-Sik;Hwang, Geun-Seong
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.2
    • /
    • pp.15-23
    • /
    • 2021
  • Most open-source decision tree algorithms are based on three splitting criteria (Entropy, Gini Index, and Gain Ratio), so the advantages and disadvantages of these three popular algorithms need to be studied more thoroughly. Previous comparisons of the three algorithms have mainly focused on predictive performance. In this work, we conducted a comparative experiment on the splitting criteria of three decision trees, focusing on their interpretability. Depth, homogeneity, coverage, lift, and stability were used as indicators of interpretability. To measure the stability of decision trees, we present measures of the stability of the root node and the stability of the dominating rules based on a measure of tree similarity. Using 10 datasets collected from UCI and Kaggle, we compare the interpretability of DT (decision tree) algorithms based on the three splitting criteria. The results show that the GR (Gain Ratio) based DT algorithm performs well in terms of lift and homogeneity, while the GINI (Gini Index) and ENT (Entropy) based DT algorithms perform well in terms of coverage. With respect to stability, considering both the similarity of the dominating rules and the similarity of the root node, the DT algorithm using the ENT splitting criterion shows the best results. (A short sketch of the three splitting criteria follows this entry.)
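For reference, here is a minimal sketch of the three splitting criteria the study compares. The formulas (entropy, Gini impurity, information gain, and gain ratio) are the standard ones; the function names and the toy split are illustrative and not taken from the paper.

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    n = len(parent)
    split_info = -sum((len(ch) / n) * log2(len(ch) / n) for ch in children if ch)
    return information_gain(parent, children) / split_info if split_info else 0.0

# Example split: parent labels partitioned into two child nodes.
parent = ["yes"] * 6 + ["no"] * 4
children = [["yes"] * 5 + ["no"], ["yes"] + ["no"] * 3]
print(gini(parent), information_gain(parent, children), gain_ratio(parent, children))
```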

A Graph Embedding Technique for Weighted Graphs Based on LSTM Autoencoders

  • Seo, Minji;Lee, Ki Yong
    • Journal of Information Processing Systems
    • /
    • v.16 no.6
    • /
    • pp.1407-1423
    • /
    • 2020
  • A graph is a data structure consisting of nodes and edges between these nodes. Graph embedding is the task of generating a low-dimensional vector for a given graph that best represents the characteristics of the graph. Recently, there have been studies on graph embedding, especially using deep learning techniques. However, until now, most deep learning-based graph embedding techniques have focused on unweighted graphs. Therefore, in this paper, we propose a graph embedding technique for weighted graphs based on long short-term memory (LSTM) autoencoders. Given weighted graphs, we traverse each graph to extract node-weight sequences from it. Each node-weight sequence represents a path in the graph consisting of nodes and the weights between these nodes. We then train an LSTM autoencoder on the extracted node-weight sequences and encode each node-weight sequence into a fixed-length vector using the trained LSTM autoencoder. Finally, for each graph, we collect the encoding vectors obtained from the graph and combine them to generate the final embedding vector for the graph. These embedding vectors can be used to classify weighted graphs or to search for similar weighted graphs. Experiments on synthetic and real datasets show that the proposed method is effective in measuring the similarity between weighted graphs. (A sketch of the node-weight sequence extraction step follows this entry.)
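The node-weight sequence extraction described above can be approximated with random walks over a weighted graph, as in the hedged sketch below. The walk length, the number of walks per node, and the alternating node/weight layout are assumptions for illustration; the LSTM autoencoder training step is omitted.

```python
import random

def node_weight_sequences(adj, walks_per_node=2, walk_len=4, seed=0):
    """Extract node-weight sequences (alternating nodes and edge weights)
    by random walks over a weighted adjacency dict {u: {v: weight}}."""
    rng = random.Random(seed)
    sequences = []
    for start in adj:
        for _ in range(walks_per_node):
            seq, node = [start], start
            for _ in range(walk_len - 1):
                neighbors = list(adj[node].items())
                if not neighbors:
                    break
                nxt, w = rng.choice(neighbors)
                seq.extend([w, nxt])   # ... node, weight, node, ...
                node = nxt
            sequences.append(seq)
    return sequences

# Toy weighted graph.
graph = {"a": {"b": 0.9, "c": 0.1}, "b": {"a": 0.9}, "c": {"a": 0.1}}
for s in node_weight_sequences(graph):
    print(s)
```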

A Method of Highspeed Similarity Retrieval based on Self-Organizing Maps (자기 조직화 맵 기반 유사화상 검색의 고속화 수법)

  • Oh, Kun-Seok;Yang, Sung-Ki;Bae, Sang-Hyun;Kim, Pan-Koo
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.515-522
    • /
    • 2001
  • Feature-based similarity retrieval has become an important research issue in image database systems, since the features of image data are useful for discriminating between images. In this paper, we propose a high-speed k-Nearest Neighbor (k-NN) search algorithm based on Self-Organizing Maps. A Self-Organizing Map (SOM) provides a mapping from high-dimensional feature vectors onto a two-dimensional space. The resulting topological feature map preserves the mutual relations (similarity) in the feature space of the input data and clusters mutually similar feature vectors into neighboring nodes. Each node of the topological feature map holds a node vector and the images whose feature vectors are closest to that node vector. We implement k-NN search for similar-image retrieval by (1) accessing the topological feature map and (2) applying a pruning strategy for high-speed search. We evaluated the performance of our algorithm using color feature vectors extracted from images, and promising results were obtained. (A sketch of SOM-based pruned k-NN search follows this entry.)

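The SOM-based pruning idea from this entry can be illustrated as follows: locate the best-matching map node for a query and search only the images bucketed under that node and its grid neighbours. The 8-neighbourhood pruning rule, grid size, and feature dimension below are assumptions, not the authors' exact strategy.

```python
import numpy as np

def knn_via_som(query, node_vectors, node_buckets, features, k=5):
    """Approximate k-NN: find the best-matching SOM node for the query, then
    search only the images bucketed under that node and its grid neighbours."""
    rows, cols, dim = node_vectors.shape
    # Best-matching unit: grid position of the closest node vector.
    dists = np.linalg.norm(node_vectors.reshape(-1, dim) - query, axis=1)
    r, c = divmod(int(dists.argmin()), cols)
    # Candidate images: bucket of the BMU plus its 8-neighbourhood (pruning).
    candidates = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                candidates.extend(node_buckets.get((rr, cc), []))
    candidates.sort(key=lambda i: np.linalg.norm(features[i] - query))
    return candidates[:k]

# Toy data: a 3x3 SOM over 4-dimensional colour features of 20 images.
rng = np.random.default_rng(0)
features = rng.random((20, 4))
node_vectors = rng.random((3, 3, 4))
buckets = {}
for i, f in enumerate(features):   # assign each image to its closest map node
    d = np.linalg.norm(node_vectors.reshape(-1, 4) - f, axis=1)
    buckets.setdefault(divmod(int(d.argmin()), 3), []).append(i)
print(knn_via_som(rng.random(4), node_vectors, buckets, features, k=3))
```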

Follower classification system based on the similarity of Twitter node information (트위터 사용자정보의 유사성을 기반으로 한 팔로어 분류시스템)

  • Kye, Yong-Sun;Yoon, Youngmi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.1
    • /
    • pp.111-118
    • /
    • 2014
  • The current friend recommendation system on Twitter primarily recommends the most influential users. However, this way of recommending has the drawback that it does not suggest users whose attributes of interest are similar to one's own. Since users want other users with similar attributes, this study implements a follower recommendation system based on the similarity of Twitter node information. The data in this study come from SNAP (Stanford Network Analysis Platform) and consist of node information for Twitter accounts with more than 10,000 followers, together with link information. We used the SNAP data as training data and generated a classifier that predicts and recommends the relationship between followers. We evaluated the classifier by 10-fold cross-validation. Given the node information of two Twitter accounts, our model classifies their relationship as one of FoFo (Follower-Follower), FoFr (Follower-Friend), or NC (Not Connected). (A sketch of a similarity-feature classifier of this kind follows this entry.)
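A classifier of the kind described in this entry could look roughly like the sketch below: similarity features are computed for a pair of user profiles and fed to a classifier evaluated with 10-fold cross-validation. The profile attributes, the cosine/difference features, the decision-tree choice, and the synthetic labels are all assumptions for illustration; only the class labels (FoFo, FoFr, NC) come from the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def pair_features(u, v):
    """Similarity features for two user-profile vectors (e.g. follower count,
    friend count, tweet count -- illustrative attributes only)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return [cos, float(np.abs(u - v).sum())]

# Synthetic profiles and pair labels (FoFo / FoFr / NC, as in the abstract).
rng = np.random.default_rng(1)
profiles = rng.random((60, 3)) * [20000, 500, 3000]
pairs = [(i, i + 1) for i in range(0, 60, 2)]            # 30 user pairs
X = np.array([pair_features(profiles[i], profiles[j]) for i, j in pairs])
y = np.array(["FoFo", "FoFr", "NC"] * 10)                # balanced dummy labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())          # 10-fold cross-validation
```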

Analysis of Memory Pool Jaccard Similarity between Bitcoin and Ethereum in the Same Environment (동일한 환경에서 구성된 비트코인과 이더리움의 메모리 풀 자카드 유사도 분석)

  • Maeng, SooHoon;Shin, Hye-yeong;Kim, Daeyong;Ju, Hongtaek
    • KNOM Review
    • /
    • v.22 no.3
    • /
    • pp.20-24
    • /
    • 2019
  • Blockchain is a distributed ledger-based technology in which all nodes participating in the blockchain network are connected through a P2P network. When a transaction is created in the blockchain network, it is propagated to and validated by the blockchain nodes. A verified transaction is sent to the peers connected to each node through the P2P network, and the peers keep the transaction in their memory pool. Due to the nature of P2P networks, the number and type of transactions delivered to a blockchain node differ from node to node, so not all nodes have the same memory pool. Research on the memory pool is needed to address problems such as transaction fee manipulation, double spending, and DDoS attack detection. In this paper, as a first step toward solving such problems, we collect the transactions stored in the memory pools of individual Bitcoin and Ethereum nodes (cryptocurrency systems based on blockchain technology) and analyze how many transactions they have in common using the Jaccard similarity. (A minimal Jaccard-similarity sketch follows this entry.)
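The core measurement in this entry is the Jaccard similarity of two transaction sets, which takes only a few lines; the transaction IDs below are made up for illustration.

```python
def jaccard(pool_a: set, pool_b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two transaction-ID sets."""
    if not pool_a and not pool_b:
        return 1.0
    return len(pool_a & pool_b) / len(pool_a | pool_b)

# Toy memory pools: transaction IDs observed at two nodes.
node1 = {"tx1", "tx2", "tx3", "tx4"}
node2 = {"tx2", "tx3", "tx5"}
print(jaccard(node1, node2))   # 2 common / 5 total = 0.4
```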

Performance Analysis of Forwarding Schemes Based on Similarities for Opportunistic Networks (기회적 네트워크에서의 유사도 기반의 포워딩 기법의 성능 분석)

  • Kim, Sun-Kyum;Lee, Tae-Seok;Kim, Wan-Jong
    • KIISE Transactions on Computing Practices
    • /
    • v.24 no.3
    • /
    • pp.145-150
    • /
    • 2018
  • Forwarding in opportunistic networks shows low performance because there may be no connected path between the source and destination nodes due to intermittent connectivity. Recently, social network analysis has been studied for this setting, and similarity is one of its key methods. In this paper, we propose forwarding schemes based on representative similarity measures and evaluate how much they improve forwarding performance. Because the schemes are based on similarity, they forward messages only to nodes with higher similarity, using them as relay nodes toward the destination. The schemes achieve low network traffic and hop count while maintaining stable transmission delay. (A small forwarding-decision sketch follows this entry.)
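A similarity-based forwarding decision of the kind evaluated here can be sketched as below. The abstract does not name the specific similarity measures, so a neighbourhood-overlap (Jaccard) similarity over contact histories is used as a placeholder, and the rule (hand over only to a node more similar to the destination) is a common formulation rather than the paper's exact scheme.

```python
def jaccard_similarity(contacts, u, v):
    """Neighbourhood overlap between two nodes' contact sets
    (a placeholder for the representative similarities in the paper)."""
    a, b = contacts.get(u, set()), contacts.get(v, set())
    return len(a & b) / len(a | b) if a | b else 0.0

def should_forward(contacts, carrier, candidate, destination):
    """Forward a message only if the encountered candidate node is more
    similar to the destination than the current carrier is."""
    return (jaccard_similarity(contacts, candidate, destination)
            > jaccard_similarity(contacts, carrier, destination))

# Toy contact history (who has met whom).
contacts = {"carrier": {"x"}, "candidate": {"x", "y", "dst_friend"},
            "dst": {"y", "dst_friend"}}
print(should_forward(contacts, "carrier", "candidate", "dst"))   # True
```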

Cross-architecture Binary Function Similarity Detection based on Composite Feature Model

  • Xiaonan Li;Guimin Zhang;Qingbao Li;Ping Zhang;Zhifeng Chen;Jinjin Liu;Shudan Yue
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2101-2123
    • /
    • 2023
  • Recent studies have shown that neural network-based binary code similarity detection performs well in vulnerability mining, plagiarism detection, and malicious code analysis. However, existing cross-architecture methods still suffer from insufficient feature characterization and low discrimination accuracy. To address these issues, this paper proposes a cross-architecture binary function similarity detection method based on a composite feature model (SDCFM). First, the binary function is converted into a vector representation according to the proposed composite feature model, which is composed of instruction statistical features, control flow graph structural features, and application programming interface calling behavioral features. The composite features are then embedded by the proposed hierarchical embedding network based on a graph neural network, in which block-level and function-level features are processed separately and finally fused into the embedding. In addition, to make the trained model more accurate and stable, our method utilizes the embeddings of predecessor nodes to modify a node's embedding during the iterative updating process of the graph neural network. To assess the effectiveness of the composite feature model, we compare SDCFM with state-of-the-art methods on benchmark datasets. The experimental results show that SDCFM performs well both on the area under the curve in the binary function similarity detection task and on vulnerable candidate function ranking in the vulnerability search task. (A toy predecessor-aggregation sketch follows this entry.)
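As a loose illustration of the predecessor-based update mentioned above, the toy sketch below iteratively mixes each basic block's features with the mean embedding of its predecessors and mean-pools the result into a function embedding compared by cosine similarity. This is a drastically simplified stand-in for SDCFM's hierarchical graph neural network, with made-up features and no learned parameters.

```python
import numpy as np

def embed_function(node_feats, predecessors, iters=3):
    """Toy stand-in for a GNN embedding of a control-flow graph: each block's
    embedding is repeatedly updated from its own features plus the mean of its
    predecessors' embeddings, then all blocks are mean-pooled."""
    emb = np.asarray(node_feats, float)
    feats = emb.copy()
    for _ in range(iters):
        agg = np.array([emb[preds].mean(axis=0) if preds else np.zeros(emb.shape[1])
                        for preds in predecessors])
        emb = np.tanh(feats + agg)
    return emb.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Two small CFGs with per-block statistical features (illustrative numbers).
f1 = embed_function([[1, 2], [0, 1], [3, 1]], predecessors=[[], [0], [0, 1]])
f2 = embed_function([[1, 2], [0, 1], [3, 2]], predecessors=[[], [0], [0, 1]])
print(cosine(f1, f2))
```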

Method to Construct Feature Functions of C-CRF Using Regression Tree Analysis (회귀나무 분석을 이용한 C-CRF의 특징함수 구성 방법)

  • Ahn, Gil Seung;Hur, Sun
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.4
    • /
    • pp.338-343
    • /
    • 2015
  • We suggest a method to configure the feature functions of a continuous conditional random field (C-CRF). Regression tree analysis and similarity analysis are introduced to construct the first and second feature functions of the C-CRF, respectively. Rules from the regression tree are transformed into logic functions: if a rule in the set is true for a data point, the function returns the corresponding leaf-node value, and zero otherwise. We build a Euclidean similarity matrix to define the neighborhood, which constitutes the second feature function. Using the two feature functions, we build a C-CRF model, and an illustrative example is provided. (A sketch of both feature-function constructions follows this entry.)
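The two feature-function constructions can be sketched as follows: a regression-tree rule becomes a logic function that returns its leaf value when the rule fires and zero otherwise, and a Euclidean similarity matrix defines the neighborhood. The Gaussian-style transform of the distances is an assumption; the paper may convert distances to similarities differently.

```python
import numpy as np

def rule_feature(conditions, leaf_value):
    """First C-CRF feature function: a regression-tree rule turned into a
    logic function that returns the leaf value if all conditions hold, else 0."""
    def f(x):
        return leaf_value if all(cond(x) for cond in conditions) else 0.0
    return f

def similarity_matrix(X, sigma=1.0):
    """Second feature function's neighborhood: Euclidean-distance-based similarity."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-d / sigma)   # one common choice; assumed, not from the paper

# A toy rule: "if x0 <= 3 and x1 > 1, predict 5.2".
rule = rule_feature([lambda x: x[0] <= 3, lambda x: x[1] > 1], leaf_value=5.2)
print(rule([2.0, 1.5]), rule([4.0, 1.5]))
print(similarity_matrix([[0, 0], [0, 1], [3, 4]]).round(2))
```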

The Study of Automatic Hypertext Generation using the Syntactic and Semantic Similarity (구문적 유사도와 의미적 유사도를 이용한 하이퍼텍스트 자동생성에 관한 연구)

  • Kim, Mun-Seok;Nam, Se-Jin;Shin, Dong-Wook
    • Annual Conference on Human and Language Technology
    • /
    • 1996.10a
    • /
    • pp.424-429
    • /
    • 1996
  • In this paper, we propose a technique for automatically converting ordinary documents into hypertext. The conversion consists of three steps: recognizing keywords in the target document, dividing the document into nodes, and generating links from keywords to nodes. Previous work used only syntactic similarity to split a document into nodes; to generate higher-quality hypertext, we use semantic similarity as well as syntactic similarity. Syntactic similarity is computed with tf*idf and the vector product, while semantic similarity is computed with a thesaurus and partial matching. In addition, to prevent the creation of incorrect links, links are generated only for terms that exist in the thesaurus. (A sketch of the tf*idf vector-product similarity follows this entry.)

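The syntactic-similarity component (tf*idf weighting followed by a normalised vector product) can be sketched in a few lines; the tokenised nodes below are invented, and the thesaurus-based semantic similarity and partial matching are omitted.

```python
from collections import Counter
from math import log, sqrt

def tfidf_vectors(node_texts):
    """tf*idf vectors for document nodes (each node given as a token list)."""
    n = len(node_texts)
    df = Counter(t for tokens in node_texts for t in set(tokens))
    return [{t: tf * log(n / df[t]) for t, tf in Counter(tokens).items()}
            for tokens in node_texts]

def vector_product(a, b):
    """Syntactic similarity: normalised inner (vector) product of tf*idf vectors."""
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    norm = sqrt(sum(w * w for w in a.values())) * sqrt(sum(w * w for w in b.values()))
    return dot / norm if norm else 0.0

nodes = [["hypertext", "node", "link"], ["node", "link", "keyword"], ["thesaurus", "term"]]
vecs = tfidf_vectors(nodes)
print(round(vector_product(vecs[0], vecs[1]), 3))
```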