Title/Summary/Keyword: kNN Algorithm


An Efficient kNN Algorithm (효율적인 kNN 알고리즘)

  • Lee Jae Moon
    • The KIPS Transactions: Part B, v.11B no.7 s.96, pp.849-854, 2004
  • This paper proposes an algorithm to reduce the execution time of kNN in document classification. The proposed algorithm minimizes the cost of computing the similarity between two documents by using a transposed list of pairs, whereas the conventional kNN uses the original list of pairs; the transposed list is obtained by applying a matrix transposition to the list of pairs during the training phase of the document classification. The paper analyzes the time complexity of the proposed algorithm and compares it with the conventional kNN, and also compares the two experimentally on the Reuters-21578 data set. The experimental results show that the proposed algorithm outperforms the conventional kNN by about 90% in terms of execution time.
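As a rough illustration of the transposed-list idea described in the entry above, the following Python sketch accumulates document similarities by walking a term-to-document postings list instead of scanning every training document. The toy corpus, the weights, and the similarity-weighted vote are illustrative assumptions, not the paper's actual data structures or weighting scheme.

```python
from collections import defaultdict

# Toy training corpus: document id -> {term: weight} (e.g., TF-IDF weights).
train_docs = {
    "d1": {"cat": 0.8, "pet": 0.5},
    "d2": {"dog": 0.7, "pet": 0.6},
    "d3": {"stock": 0.9, "market": 0.4},
}
labels = {"d1": "animal", "d2": "animal", "d3": "finance"}

# "Transpose" the document-term lists into term -> [(doc, weight)] postings,
# built once, analogous to the transposition done at training time.
postings = defaultdict(list)
for doc_id, terms in train_docs.items():
    for term, w in terms.items():
        postings[term].append((doc_id, w))

def knn_classify(query, k=2):
    # Accumulate dot-product similarities by visiting only the postings of
    # the query's terms, instead of comparing against every training document.
    scores = defaultdict(float)
    for term, qw in query.items():
        for doc_id, w in postings.get(term, []):
            scores[doc_id] += qw * w
    neighbors = sorted(scores.items(), key=lambda x: -x[1])[:k]
    votes = defaultdict(float)
    for doc_id, sim in neighbors:
        votes[labels[doc_id]] += sim  # similarity-weighted vote
    return max(votes, key=votes.get) if votes else None

print(knn_classify({"pet": 1.0, "dog": 0.3}))  # -> "animal"
```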

Fast k-NN based Malware Analysis in a Massive Malware Environment

  • Hwang, Jun-ho;Kwak, Jin;Lee, Tae-jin
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.12, pp.6145-6158, 2019
  • It is a challenge for the current security industry to respond to the large number of malicious codes distributed indiscriminately, as well as to intelligent APT attacks. As a result, studies using machine learning algorithms are being conducted for proactive prevention rather than post-processing. The k-NN algorithm is widely used because it is intuitive and well suited to handling malicious code as unstructured data. In the malicious-code analysis domain, the k-NN algorithm also makes it easy to classify malicious codes based on previously analyzed samples; for example, malicious-code families can be classified, and variants analyzed, through similarity analysis against existing malicious codes. However, the main disadvantage of the k-NN algorithm is that the search time increases as the training data grows. We propose a fast k-NN algorithm that alleviates this computation-speed problem while retaining the value of the k-NN approach. In the test environment, the proposed algorithm needed only 19.71 similarity comparisons on average for 6.25 million malicious codes. Given how the algorithm works, the fast k-NN algorithm can be used to search any data that can be vectorized, not just malware and SSDEEP hashes. In the future, if the k-NN approach is needed and central nodes can be selected effectively for clustering large amounts of data in various environments, it should be possible to design sophisticated machine-learning-based systems.
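The entry above does not give the algorithm's details, so the sketch below only illustrates the general "compare against central nodes first" idea on hypothetical data: samples are assigned to a few randomly chosen centres, and a query is compared with the centres and then only with the members of the closest cluster. The number of centres, the centre-selection rule, and the feature vectors are all assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy feature vectors standing in for vectorized malware samples.
data = rng.normal(size=(1000, 16))

# Pick a few "central nodes" (here: random representatives) and assign every
# sample to its nearest centre, so a query is compared with the centres first
# and then only with the members of the closest cluster.
n_centres = 10
centres = data[rng.choice(len(data), n_centres, replace=False)]
assign = np.argmin(((data[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)

def fast_knn(query, k=5):
    # Stage 1: find the nearest centre (n_centres comparisons).
    c = np.argmin(((centres - query) ** 2).sum(-1))
    members = np.where(assign == c)[0]
    # Stage 2: exact k-NN within that cluster only.
    d = ((data[members] - query) ** 2).sum(-1)
    order = np.argsort(d)[:k]
    return members[order], np.sqrt(d[order])

idx, dist = fast_knn(rng.normal(size=16))
print(idx, dist)
```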

k-NN Join Based on LSH in Big Data Environment

  • Ji, Jiaqi;Chung, Yeongjee
    • Journal of Information and Communication Convergence Engineering, v.16 no.2, pp.99-105, 2018
  • k-Nearest neighbor join (k-NN Join) is a computationally intensive algorithm designed to find the k nearest neighbors in a dataset S for every object in another dataset R. Most related studies on k-NN Join are based on single-computer operations. As the data dimensionality and data volume increase, running the k-NN Join algorithm on a single computer cannot generate results quickly. To solve this scalability problem, we introduce a locality-sensitive hashing (LSH) k-NN Join algorithm implemented in Spark, an approach suited to high-dimensional big data. LSH is used to map similar data onto the same bucket, which reduces the data search scope. To achieve a parallel implementation of the algorithm on multiple computers, the Spark framework is used to accelerate the computation of distances between objects in a cluster. Results show that our proposed approach is fast and accurate for high-dimensional big data.
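A minimal, single-machine Python sketch of random-hyperplane LSH bucketing for a k-NN join, without the Spark parallelization the paper relies on. The datasets, the number of hash bits, and the full-scan fallback for underfilled buckets are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(200, 32))   # query-side dataset
S = rng.normal(size=(5000, 32))  # searched dataset

# Random-hyperplane (sign) LSH: similar vectors tend to share a bucket key,
# which shrinks the candidate set each R object must be compared with.
n_bits = 8
planes = rng.normal(size=(32, n_bits))

def bucket(v):
    return tuple((v @ planes > 0).astype(int))

buckets = defaultdict(list)
for j, s in enumerate(S):
    buckets[bucket(s)].append(j)

def knn_join(k=3):
    result = {}
    for i, r in enumerate(R):
        cand = buckets.get(bucket(r), [])
        if len(cand) < k:          # fall back to a full scan if the bucket is too small
            cand = range(len(S))
        cand = np.fromiter(cand, dtype=int)
        d = ((S[cand] - r) ** 2).sum(-1)
        result[i] = cand[np.argsort(d)[:k]]
    return result

print(knn_join()[0])
```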

A MapReduce-based kNN Join Query Processing Algorithm for Analyzing Large-scale Data (대용량 데이터 분석을 위한 맵리듀스 기반 kNN join 질의처리 알고리즘)

  • Lee, HyunJo;Kim, TaeHoon;Chang, JaeWoo
    • Journal of KIISE, v.42 no.4, pp.504-511, 2015
  • Recently, the amount of data has been increasing rapidly with the popularity of SNS and the development of mobile technology, so effective schemes for analyzing large amounts of data have been actively studied. One of the typical schemes is a Voronoi-diagram-based kNN join algorithm (VkNN-join) using MapReduce. For two datasets R and S, VkNN-join can reduce the join query-processing time on big data because it selects the corresponding subset Sj for each Ri and processes the query only with them. However, VkNN-join requires a high computational cost for constructing the Voronoi diagram, and its computational overhead grows because the number of candidate cells increases as the value of k increases. To solve these problems, we propose a MapReduce-based kNN-join query-processing algorithm for analyzing large amounts of data. Using seed-based dynamic partitioning, our algorithm reduces the overhead of constructing the index structure. It also reduces the computational overhead of finding candidate partitions by selecting the corresponding partitions based on the average distance between two seeds. We show that our algorithm outperforms the existing scheme in terms of query-processing time.
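The sketch below imitates the seed-based partitioning idea in plain Python (no MapReduce): both datasets are split by their nearest seed, and each R partition is joined only against the S partitions with the closest seeds. The random seed selection, the number of candidate partitions, and the distance-based pruning rule are simplified assumptions and can miss exact neighbors, unlike the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.uniform(size=(300, 2))
S = rng.uniform(size=(3000, 2))

# Seed-based partitioning: both datasets are split by their nearest seed, and
# each R partition is joined only against S partitions whose seeds are nearby
# (a stand-in for the paper's average-distance pruning rule).
seeds = rng.uniform(size=(16, 2))
r_part = np.argmin(((R[:, None] - seeds[None]) ** 2).sum(-1), axis=1)
s_part = np.argmin(((S[:, None] - seeds[None]) ** 2).sum(-1), axis=1)
seed_dist = np.sqrt(((seeds[:, None] - seeds[None]) ** 2).sum(-1))

def knn_join(k=3, n_candidates=4):
    out = {}
    for p in range(len(seeds)):
        r_idx = np.where(r_part == p)[0]
        if len(r_idx) == 0:
            continue
        # Candidate S partitions: the n_candidates seeds closest to seed p.
        cand_parts = np.argsort(seed_dist[p])[:n_candidates]
        s_idx = np.where(np.isin(s_part, cand_parts))[0]
        for i in r_idx:
            d = ((S[s_idx] - R[i]) ** 2).sum(-1)
            out[i] = s_idx[np.argsort(d)[:k]]
    return out

print(knn_join()[0])
```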

A Study on the Storage Requirement and Incremental Learning of the k-NN Classifier (K_NN 분류기의 메모리 사용과 점진적 학습에 대한 연구)

  • Lee, Hyeong-Il;Yun, Chung-Hwa
    • The Journal of Information Technology, v.1 no.1, pp.65-84, 1998
  • MBR (Memory Based Reasoning) is a supervised learning method that utilizes the distances between the input pattern and the stored training patterns for classification, and is also called a distance-based learning algorithm. MBR is based on the k-NN classifier, in which learning is performed by simply storing training patterns in memory without any further processing. This paper proposes a new learning algorithm which is more efficient than the traditional k-NN classifier and has incremental learning capability. Furthermore, the proposed algorithm is insensitive to noisy patterns and guarantees more efficient memory usage.

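The abstract above does not describe the proposed learning rule, so the following Python sketch shows a generic incremental, storage-reducing k-NN scheme (Hart-style condensation: store a pattern only when the current memory misclassifies it) purely to illustrate the kind of behaviour discussed; it is not the paper's algorithm, and the toy two-class data is an assumption.

```python
from collections import Counter
import numpy as np

def knn_predict(memory_X, memory_y, x, k=3):
    # Plain distance-based vote over the patterns kept in memory.
    d = ((np.asarray(memory_X) - x) ** 2).sum(-1)
    idx = np.argsort(d)[:min(k, len(memory_X))]
    return Counter(np.asarray(memory_y)[idx]).most_common(1)[0][0]

def incremental_train(stream, k=3):
    # Hart-style condensation: a pattern is stored only when the current
    # memory misclassifies it, so memory grows only with "informative" patterns.
    memory_X, memory_y = [], []
    for x, y in stream:
        if not memory_X or knn_predict(memory_X, memory_y, x, k) != y:
            memory_X.append(x)
            memory_y.append(y)
    return np.array(memory_X), np.array(memory_y)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
mem_X, mem_y = incremental_train(zip(X, y))
print(len(mem_X), "patterns stored out of", len(X))
```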

kNN Query Processing Algorithm based on the Encrypted Index for Hiding Data Access Patterns (데이터 접근 패턴 은닉을 지원하는 암호화 인덱스 기반 kNN 질의처리 알고리즘)

  • Kim, Hyeong-Il;Kim, Hyeong-Jin;Shin, Youngsung;Chang, Jae-woo
    • Journal of KIISE, v.43 no.12, pp.1437-1457, 2016
  • In outsourced databases, the cloud provides an authorized user with querying services on the outsourced database. However, sensitive data, such as financial or medical records, should be encrypted before being outsourced to the cloud. Meanwhile, the k-Nearest Neighbor (kNN) query is a typical query type widely used in many fields, and the result of a kNN query is closely related to the interests and preferences of the user. Therefore, secure kNN query-processing algorithms that preserve both data privacy and query privacy have been proposed. However, existing algorithms either suffer from high computation cost or leak data access patterns because the retrieved index nodes and query results are disclosed. To solve these problems, in this paper we propose a new kNN query-processing algorithm on an encrypted database. Our algorithm preserves both data privacy and query privacy, and it hides data access patterns while supporting efficient query processing. To achieve this, we devise an encrypted index search scheme which can perform data filtering without revealing data access patterns. Through performance analysis, we verify that our proposed algorithm outperforms existing algorithms in terms of query-processing time.

Nonlinear control of structure using neuro-predictive algorithm

  • Baghban, Amir;Karamodin, Abbas;Haji-Kazemi, Hasan
    • Smart Structures and Systems, v.16 no.6, pp.1133-1145, 2015
  • A new neural network (NN) predictive controller (NNPC) algorithm has been developed and tested in computer simulation of the active control of a nonlinear structure. In the present method an NN is used as a predictor: it has been trained to predict the future response of the structure in order to determine the control forces, which are calculated by minimizing the difference between the predicted and desired responses via a numerical minimization algorithm. Since the NNPC is very time-consuming and not suitable for real-time control, it is then used to train an NN controller. To consider the effectiveness of the controller on the probability of damage, fragility curves are generated. The approach is validated using the simulated response of a 3-story nonlinear benchmark building excited by several historical earthquake records. The simulation results are then compared with a linear quadratic Gaussian (LQG) active controller. The results indicate that the proposed algorithm is completely effective in reducing the relative displacement.
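A toy Python sketch of the predictive-control loop described above: a stand-in predictor replaces the trained neural network, and the control force is chosen by brute-force minimization of the squared gap between predicted and desired response. The linear surrogate model, the candidate-force grid, and the cost function are all assumptions for illustration, not the paper's controller.

```python
import numpy as np

# Stand-in "predictor": in the paper this role is played by a neural network
# trained to map the current state and a candidate control force to the
# predicted structural response.
def predict_response(state, force):
    return 0.9 * state - 0.05 * force          # toy linear surrogate

def control_force(state, desired=0.0, candidates=np.linspace(-50, 50, 1001)):
    # Choose the force whose predicted response is closest to the desired one,
    # i.e. minimize (predicted - desired)^2 over a grid of candidate forces.
    cost = (predict_response(state, candidates) - desired) ** 2
    return candidates[int(np.argmin(cost))]

state = 1.0
for step in range(5):
    u = control_force(state)
    state = predict_response(state, u)
    print(f"step {step}: force={u:+.2f}, displacement={state:+.5f}")
```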

An Improvement Of Spatial Partitioning Method For Flocking Behaviors By Using Previous k-Nearest Neighbors (이전 k 개의 가장 가까운 이웃을 이용한 무리 짓기에 대한 공간분할 방법의 개선)

  • Lee, Jae-Moon
    • Journal of Korea Game Society, v.9 no.2, pp.115-123, 2009
  • This paper proposes an algorithm to improve the performance of the spatial partitioning method for flocking behaviors. The core idea is to exploit the fact that even though a moving entity (a boid in the flock) continuously changes its direction and position, its k-nearest neighbors (kNN), which determine its next direction, do not change frequently. This paper proposes a method to check, given the previous kNN, whether the new kNN has changed, and proves the correctness of the method with two theorems. The proposed algorithm was implemented and its performance compared with the conventional spatial partitioning method. The comparison shows that the proposed algorithm outperforms the conventional one by about 30% with respect to the number of frames per second.

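The following Python sketch caches each boid's previous kNN and refreshes a boid's list only after it has drifted beyond a threshold. This displacement heuristic is a simplification for illustration (the threshold and the toy flock are assumptions); it is not the validity test proved by the paper's two theorems.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 200, 7
pos = rng.uniform(0, 100, size=(N, 2))

class NeighborCache:
    """Reuse each boid's previous kNN until it has drifted "too far".

    A simplified displacement heuristic, not the paper's proven criterion;
    it only illustrates why caching pays off when neighborhoods change
    slowly between frames.
    """
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.anchor = pos.copy()      # positions at the last full search
        self.knn = self._full_search()

    def _full_search(self):
        d = ((pos[:, None] - pos[None]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)
        return np.argsort(d, axis=1)[:, :K]

    def neighbors(self, i):
        moved = np.linalg.norm(pos[i] - self.anchor[i])
        if moved > self.threshold:    # refresh only this boid's list
            d = ((pos - pos[i]) ** 2).sum(-1)
            d[i] = np.inf
            self.knn[i] = np.argsort(d)[:K]
            self.anchor[i] = pos[i]
        return self.knn[i]

cache = NeighborCache()
pos += rng.normal(scale=0.1, size=pos.shape)   # one simulation step
print(cache.neighbors(0))
```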

A New Memory-Based Reasoning Algorithm using the Recursive Partition Averaging (재귀 분할 평균 법을 이용한 새로운 메모리기반 추론 알고리즘)

  • Lee, Hyeong-Il;Jeong, Tae-Seon;Yun, Chung-Hwa;Gang, Gyeong-Sik
    • The Transactions of the Korea Information Processing Society, v.6 no.7, pp.1849-1857, 1999
  • We propose the RPA (Recursive Partition Averaging) method to improve the storage requirement and classification rate of Memory Based Reasoning. The algorithm recursively partitions the pattern space until each hyperrectangle contains only patterns of the same class, and then computes the average of the patterns in each hyperrectangle to extract a representative. We also use the mutual information between the features and the classes as feature weights to improve classification performance. The proposed algorithm used 30-90% of the memory space needed by the k-NN (k-Nearest Neighbors) classifier while showing classification performance comparable to the k-NN, and by reducing the number of stored patterns it also showed excellent results in terms of classification time compared with the k-NN.

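A minimal Python sketch of the recursive-partition-averaging idea as described in the abstract above: the space is split until each region is class-pure, region means are kept as representatives, and classification is 1-NN over the representatives. The mean-value split rule, the toy data, and the omission of the mutual-information feature weights are simplifying assumptions.

```python
import numpy as np

def rpa_build(X, y, depth=0):
    # Recursively split the bounding hyperrectangle until it holds a single
    # class, then keep the class mean as one representative pattern.
    if len(set(y)) <= 1 or len(X) <= 1:
        return [(X.mean(axis=0), y[0])]
    axis = depth % X.shape[1]
    cut = X[:, axis].mean()
    left = X[:, axis] <= cut
    if left.all() or (~left).all():           # degenerate split: stop here
        return [(X[y == c].mean(axis=0), c) for c in set(y)]
    return rpa_build(X[left], y[left], depth + 1) + \
           rpa_build(X[~left], y[~left], depth + 1)

def rpa_classify(reps, x):
    # 1-NN over the extracted representatives instead of all training patterns.
    d = [((r - x) ** 2).sum() for r, _ in reps]
    return reps[int(np.argmin(d))][1]

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
reps = rpa_build(X, y)
print(len(reps), "representatives;", rpa_classify(reps, np.array([3.5, 3.9])))
```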

Grid-based Index Generation and k-nearest-neighbor Join Query-processing Algorithm using MapReduce (맵리듀스를 이용한 그리드 기반 인덱스 생성 및 k-NN 조인 질의 처리 알고리즘)

  • Jang, Miyoung;Chang, Jae Woo
    • Journal of KIISE, v.42 no.11, pp.1303-1313, 2015
  • MapReduce provides high levels of system scalability and fault tolerance for large-scale data processing. A MapReduce-based k-nearest-neighbor (k-NN) join algorithm seeks to produce the k nearest neighbors of each point of one dataset from another dataset, and has become important in big-data analysis. However, the existing k-NN join query-processing algorithm suffers from a high index-construction cost that makes it unsuitable for processing big data. To solve this problem, we propose a new grid-based k-NN join query-processing algorithm. Our algorithm retrieves only the neighboring data of a query cell and sends them to each MapReduce task, which reduces the data-transmission and computation overhead. Our performance analysis shows that our algorithm outperforms the existing scheme by up to seven-fold in terms of query-processing time, while also achieving a high degree of query-result accuracy.
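A single-machine Python sketch of the grid-based candidate retrieval described above: S is indexed by uniform grid cell, and each R point is compared only with S points in its own and adjacent cells, with a full-scan fallback for sparse regions. The grid resolution, the fallback rule, and the uniform toy data are assumptions, and the MapReduce distribution of cells to tasks is omitted.

```python
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(6)
R = rng.uniform(size=(200, 2))
S = rng.uniform(size=(5000, 2))
CELLS = 10                                    # a CELLS x CELLS uniform grid

def cell(p):
    return tuple(np.minimum((p * CELLS).astype(int), CELLS - 1))

# Index S by grid cell; in the MapReduce setting each R cell and its
# neighbouring S cells would be shipped to the same task.
grid = defaultdict(list)
for j, s in enumerate(S):
    grid[cell(s)].append(j)

def knn_join(k=3):
    out = {}
    for i, r in enumerate(R):
        cx, cy = cell(r)
        cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                for j in grid.get((cx + dx, cy + dy), [])]
        if len(cand) < k:                     # sparse region: fall back to a full scan
            cand = list(range(len(S)))
        cand = np.array(cand)
        d = ((S[cand] - r) ** 2).sum(-1)
        out[i] = cand[np.argsort(d)[:k]]
    return out

print(knn_join()[0])
```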