• Title/Summary/Keyword: k-nearest neighbor (k 최근접 이웃)

A Batch Processing Algorithm for Moving k-Nearest Neighbor Queries in Dynamic Spatial Networks

  • Cho, Hyung-Ju
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.63-74 / 2021
  • Location-based services (LBSs) are expected to process a large number of spatial queries, such as shortest-path and k-nearest neighbor queries, that arrive simultaneously at peak periods. Deploying more LBS servers to process these simultaneous spatial queries is a potential solution, but it significantly increases service operating costs. Recently, batch processing solutions have been proposed that process a set of queries using shareable computation. In this study, we investigate the problem of batch processing moving k-nearest neighbor (MkNN) queries in dynamic spatial networks, where the travel time of each road segment changes frequently with traffic conditions. LBS servers based on one-query-at-a-time processing often fail to handle simultaneous MkNN queries because of the large number of redundant computations. We aim to improve efficiency algorithmically by processing MkNN queries in batches and reusing shareable computations. Extensive evaluation using real-world roadmaps shows the superiority of our solution over state-of-the-art methods.
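
The abstract contrasts batch processing with the one-query-at-a-time baseline. As a point of reference only, the hypothetical sketch below answers a single k-nearest-neighbor query in a road network by incremental Dijkstra expansion from the query vertex; it is not the authors' batch algorithm, whose sharing of computation across simultaneous MkNN queries is not reproduced here. A batch approach would aim to reuse such expansions across queries issued from nearby vertices instead of repeating them per query.

```python
import heapq

def knn_in_road_network(adj, query_vertex, objects, k):
    """Return the k objects nearest to query_vertex by network distance.

    adj: dict mapping vertex -> list of (neighbor, travel_time) edges
    objects: set of vertices where objects of interest are located
    This is a plain Dijkstra expansion, i.e., the one-query-at-a-time
    baseline that batch processing tries to avoid repeating per query.
    """
    dist = {query_vertex: 0.0}
    heap = [(0.0, query_vertex)]
    found = []
    while heap and len(found) < k:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        if v in objects:
            found.append((v, d))
        for u, w in adj.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return found

# toy network: 4 vertices, objects located at vertices 2 and 3
adj = {0: [(1, 2.0), (2, 5.0)], 1: [(0, 2.0), (3, 1.0)],
       2: [(0, 5.0)], 3: [(1, 1.0)]}
print(knn_in_road_network(adj, 0, {2, 3}, k=2))   # [(3, 3.0), (2, 5.0)]
```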

Empirical Analysis of K-Nearest Neighbor Recommendation Engine using Vector Similarity (K-최근접 이웃 추천 엔진에서의 벡터 유사도 사용에 대한 실험적 분석)

  • 김혜재;손기락
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.103-105 / 2001
  • With the explosive growth of Internet users, web sites compete to attract more members by offering users all kinds of useful information. However, for most Internet users, who use several sites at once, having to sift through the information sent by each site every time is quite cumbersome, and such indiscriminate, one-size-fits-all information services actually make Internet use inconvenient; when the content is of no interest, it amounts to nothing more than information pollution that hinders efficient use of the network. A recommendation engine is, in essence, a system that recommends the necessary items from the large volume of continuously arriving information. To deliver only the information users need, this paper first identifies user preferences so that items with a high probability of being selected can be predicted, enabling personalized information delivery. To efficiently retrieve the K users closest to a given user, the K-nearest neighbor approach is used, and three vector similarity measures that allow indexing are proposed and compared with the conventional Pearson correlation. Using these three vector similarity measures, we analyze a K-nearest neighbor recommendation engine that provides information based on recommendations from other users rather than on general search.
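
The three vector similarity measures of the paper are not specified in the abstract, so the sketch below only illustrates the kind of comparison described: retrieving the K users most similar to an active user under cosine similarity (one common vector similarity) versus Pearson correlation. The toy rating matrix and the measure choices are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine of the angle between two rating vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def pearson_sim(a, b):
    # Pearson correlation: cosine similarity of mean-centered vectors
    return cosine_sim(a - a.mean(), b - b.mean())

def k_nearest_users(ratings, active, k, sim):
    """Indices of the k users most similar to `active` under `sim`."""
    scores = [(sim(ratings[active], ratings[u]), u)
              for u in range(len(ratings)) if u != active]
    scores.sort(reverse=True)
    return [u for _, u in scores[:k]]

# toy user-item rating matrix (rows = users, columns = items)
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)
print(k_nearest_users(R, active=0, k=2, sim=cosine_sim))
print(k_nearest_users(R, active=0, k=2, sim=pearson_sim))
```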

Optimal k-Nearest Neighborhood Classifier Using Genetic Algorithm (유전알고리즘을 이용한 최적 k-최근접이웃 분류기)

  • Park, Chong-Sun;Huh, Kyun
    • Communications for Statistical Applications and Methods / v.17 no.1 / pp.17-27 / 2010
  • Feature selection and feature weighting are useful techniques for improving the classification accuracy of the k-Nearest Neighbor (k-NN) classifier. The main purpose of feature selection and feature weighting is to reduce the number of features by eliminating irrelevant and redundant ones while maintaining or enhancing classification accuracy. In this paper, a novel hybrid approach based on a Genetic Algorithm is proposed for simultaneous feature selection, feature weighting, and choice of k in the k-NN classifier. The results indicate that the proposed algorithm is comparable with, and superior to, existing classifiers with or without feature selection and feature weighting capability.
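
A compact sketch of the kind of encoding the abstract implies: each chromosome carries one real-valued weight per feature (weights near zero effectively deselect a feature) plus a candidate k, and fitness is the cross-validated accuracy of a k-NN classifier on the weighted features. The population size, selection scheme, and mutation operators below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(chrom, X, y):
    """Chromosome = [w_1..w_p, k]; fitness = CV accuracy of k-NN on weighted features."""
    w, k = chrom[:-1], int(chrom[-1])
    if w.sum() == 0:          # all features effectively dropped -> invalid
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, X * w, y, cv=5).mean()

def evolve(X, y, pop_size=20, gens=30, k_max=15):
    p = X.shape[1]
    pop = np.hstack([rng.random((pop_size, p)),                    # feature weights in [0, 1]
                     rng.integers(1, k_max + 1, (pop_size, 1))])   # candidate k
    for _ in range(gens):
        fit = np.array([fitness(c, X, y) for c in pop])
        parents = pop[np.argsort(fit)][-pop_size // 2:]            # truncation selection
        children = parents.copy()
        children[:, :p] += rng.normal(0, 0.1, children[:, :p].shape)  # Gaussian mutation of weights
        children[:, :p] = children[:, :p].clip(0, 1)
        children[:, p] = rng.integers(1, k_max + 1, len(children))    # mutate k by re-drawing it
        pop = np.vstack([parents, children])
    fit = np.array([fitness(c, X, y) for c in pop])
    return pop[fit.argmax()]    # best weight vector and k found
```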

Classification of Heart Disease Using K-Nearest Neighbor Imputation (K-최근접 이웃 알고리즘을 활용한 심장병 진단 및 예측)

  • Park, Pyoung-Woo;Lee, Seok-Won
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.742-745 / 2017
  • This paper applies data mining techniques to the heart disease domain. Missing values in existing patient records are imputed with the K-nearest neighbor algorithm, and three representative predictive classifiers, naive Bayes, support vector machine, and multilayer perceptron, are applied and their results compared and analyzed. The experiments include a K-optimization step and are conducted with 10-fold cross-validation; the comparison and analysis are based on accuracy and the kappa statistic.
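
A hedged sketch of the described pipeline using scikit-learn: impute missing values with a K-NN imputer, then compare naive Bayes, a support vector machine, and a multilayer perceptron under 10-fold cross-validation, reporting accuracy and Cohen's kappa. Data loading, the value of k, and the model hyperparameters are placeholders, and the paper's K-optimization step is not reproduced.

```python
from sklearn.impute import KNNImputer
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def compare_classifiers(X, y, k=5):
    """X: patient features with missing values (np.nan); y: heart-disease labels."""
    X_imp = KNNImputer(n_neighbors=k).fit_transform(X)     # K-NN imputation of missing values
    models = {
        "naive Bayes": GaussianNB(),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    }
    for name, model in models.items():
        pred = cross_val_predict(model, X_imp, y, cv=10)    # 10-fold cross-validation
        print(name,
              "accuracy:", round(accuracy_score(y, pred), 3),
              "kappa:", round(cohen_kappa_score(y, pred), 3))
```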

Formation of Nearest Neighbors Set Based on Similarity Threshold (유사도 임계치에 근거한 최근접 이웃 집합의 구성)

  • Lee, Jae-Sik;Lee, Jin-Chun
    • Journal of Intelligence and Information Systems / v.13 no.2 / pp.1-14 / 2007
  • Case-based reasoning (CBR) is one of the most widely applied data mining techniques and has proven its effectiveness in various domains. Since CBR is basically based on the k-Nearest Neighbors (NN) method, the value of k directly affects the performance of a CBR model. Once the value of k is set, it is fixed for the lifetime of the CBR model. However, if the value is set greater or smaller than the optimal value, the performance of the CBR model deteriorates. In this research, we propose a new method of composing the NN set using the similarity scores themselves, which we call the s-NN method, rather than using a fixed value of k. In the s-NN method, a different number of nearest neighbors can be selected for each new case. Performance evaluation using data from the UCI Machine Learning Repository shows that the CBR model adopting the s-NN method outperforms the CBR model adopting the traditional k-NN method.
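
A minimal sketch of the idea as described: neighbors are admitted by a similarity threshold rather than a fixed k, so each new case may draw on a different number of neighbors. The similarity measure, the fallback rule, and the weighted voting below are illustrative choices, not necessarily those of the s-NN method.

```python
import numpy as np

def similarity(a, b):
    # simple similarity: inverse of Euclidean distance (illustrative choice)
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def snn_neighbors(case_base, new_case, threshold):
    """Return all stored cases whose similarity to new_case meets the threshold."""
    sims = np.array([similarity(c, new_case) for c in case_base])
    idx = np.where(sims >= threshold)[0]
    return idx, sims[idx]

def snn_classify(case_base, labels, new_case, threshold):
    idx, sims = snn_neighbors(case_base, new_case, threshold)
    if len(idx) == 0:                       # no case passes the threshold: fall back to the single best match
        idx = [int(np.argmax([similarity(c, new_case) for c in case_base]))]
        sims = np.array([1.0])
    votes = {}
    for i, s in zip(idx, sims):             # similarity-weighted voting
        votes[labels[i]] = votes.get(labels[i], 0.0) + s
    return max(votes, key=votes.get)
```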

Status Diagnosis of Pump and Motor Applying K-Nearest Neighbors (K-최근접 이웃 알고리즘을 적용한 펌프와 모터의 상태 진단)

  • Kim, Nam-Jin;Bae, Young-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.6 / pp.1249-1256 / 2018
  • Recently, research on artificial intelligence has been actively progressing in the fields of diagnosis and prediction. In this paper, we acquire electrical current, revolutions per minute (RPM), and vibration data from motors and pumps installed in industrial fields. We train a k-nearest neighbors model on the acquired data. We also propose a status diagnosis method that judges the normal and abnormal status of the motor and pump using the trained model. The results confirm that normal and abnormal statuses are judged well.
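
A short sketch, with made-up numbers, of the kind of classifier the abstract describes: a k-nearest neighbors model trained on current, RPM, and vibration measurements labeled normal or abnormal. The feature scaling and the choice of k are illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# rows: [current (A), RPM, vibration (mm/s)]; labels: "normal" / "abnormal" (toy values)
X_train = [[4.1, 1750, 1.2], [4.0, 1745, 1.1], [6.3, 1600, 4.8], [6.5, 1590, 5.1]]
y_train = ["normal", "normal", "abnormal", "abnormal"]

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)
print(model.predict([[4.2, 1748, 1.3]]))   # expected: ['normal']
```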

A Missing Data Imputation by Combining K Nearest Neighbor with Maximum Likelihood Estimation for Numerical Software Project Data (K-NN과 최대 우도 추정법을 결합한 소프트웨어 프로젝트 수치 데이터용 결측값 대치법)

  • Lee, Dong-Ho;Yoon, Kyung-A;Bae, Doo-Hwan
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.273-282 / 2009
  • Missing data is one of the common problems in building analysis or prediction models from software project data. Imputation methods are known to handle missing data more effectively than deletion methods for small software project data sets. Although K-nearest neighbor imputation is a suitable imputation method for software project data, it cannot use the non-missing information of incomplete project instances. In this paper, we propose an approach to missing data imputation for numerical software project data that combines K-nearest neighbor imputation with maximum likelihood estimation; we also extend the average absolute error measure by normalization for accurate evaluation. Our approach overcomes the limitation of K-nearest neighbor imputation and outperforms it on our real data sets.
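
The abstract does not spell out how the two ingredients are combined, so the sketch below only illustrates the maximum-likelihood side in simplified form: missing entries are replaced by their conditional means under a multivariate normal model fitted to the complete instances. It is a stand-in, not the paper's combined KNN-MLE procedure (a plain KNN imputation step is sketched earlier in this list).

```python
import numpy as np

def conditional_mean_impute(X):
    """Impute missing entries (np.nan) with their conditional mean under a
    multivariate normal model fitted to the complete rows. A simplified
    stand-in for ML-based imputation; the paper's KNN-MLE combination is
    not reproduced here."""
    complete = X[~np.isnan(X).any(axis=1)]
    mu = complete.mean(axis=0)
    cov = np.cov(complete, rowvar=False)
    X_imp = X.copy()
    for row in X_imp:
        m = np.isnan(row)                   # missing positions of this instance
        if not m.any():
            continue
        o = ~m                              # observed positions
        # E[x_m | x_o] = mu_m + Cov_mo Cov_oo^{-1} (x_o - mu_o)
        row[m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(cov[np.ix_(o, o)],
                                                             row[o] - mu[o])
    return X_imp
```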

Calculating Attribute Weights in K-Nearest Neighbor Algorithms using Information Theory (정보이론을 이용한 K-최근접 이웃 알고리즘에서의 속성 가중치 계산)

  • Lee Chang-Hwan
    • Journal of KIISE:Software and Applications / v.32 no.9 / pp.920-926 / 2005
  • Nearest neighbor algorithms classify an unseen input instance by selecting similar cases and using the discovered membership to make predictions about the unknown features of the input instance. The usefulness of nearest neighbor algorithms has been demonstrated sufficiently in many real-world domains. In nearest neighbor algorithms, assigning proper weights to the attributes is an important issue. Therefore, in this paper, we propose a new method that automatically assigns to each attribute a weight reflecting its importance with respect to the target attribute. The method has been implemented as a computer program, and its effectiveness has been tested on a number of publicly available machine learning databases.
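
A hedged sketch of the general idea: estimate each attribute's relevance to the target with mutual information and use the estimates as weights in the k-NN distance, so informative attributes dominate. The paper's exact information-theoretic weighting formula is not reproduced; scikit-learn's `mutual_info_classif` stands in as one standard estimator.

```python
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def information_weighted_knn(X_train, y_train, k=5):
    """k-NN whose features are scaled by their mutual information with the target,
    so that informative attributes dominate the distance computation."""
    w = mutual_info_classif(X_train, y_train)   # attribute-target mutual information estimates
    w = w / (w.sum() + 1e-12)                   # normalize to obtain weights
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train * w, y_train)
    return clf, w

# at prediction time the same weights must be applied to the query features:
# clf, w = information_weighted_knn(X_train, y_train); clf.predict(X_test * w)
```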

Missing values imputation for time course gene expression data using the pattern consistency index adaptive nearest neighbors (시간경로 유전자 발현자료에서 패턴일치지수와 적응 최근접 이웃을 활용한 결측값 대치법)

  • Shin, Heyseo;Kim, Dongjae
    • The Korean Journal of Applied Statistics / v.33 no.3 / pp.269-280 / 2020
  • Time course gene expression data is a large amount of data observed over time in microarray experiments, which can simultaneously identify the expression levels of many genes. However, the experimental process is complex, resulting in frequent missing values due to various causes. In this paper, we propose the pattern consistency index adaptive nearest neighbors (PANN) method for missing value imputation. This method combines the adaptive nearest neighbors (ANN) method, which reflects local characteristics, with the pattern consistency index, which considers the degree of consistency in gene expression between observations across time points. We conduct a Monte Carlo simulation study on two yeast time course data sets to evaluate the usefulness of the proposed PANN method.
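
The PANN method is only named in the abstract, so the sketch below illustrates just a generic, simplified adaptive nearest-neighbor imputation for a genes-by-timepoints matrix: for each gene with missing values, neighbor genes are admitted adaptively by a minimum correlation over the commonly observed time points. The pattern consistency index itself is not implemented, and all thresholds are made up.

```python
import numpy as np

def adaptive_nn_impute(expr, min_corr=0.8, max_neighbors=10):
    """Impute missing values (np.nan) in a genes x timepoints matrix using,
    for each target gene, only neighbor genes whose correlation over commonly
    observed time points exceeds min_corr. A simplified stand-in for adaptive
    NN imputation; the paper's pattern consistency index is not implemented."""
    expr = expr.copy()
    for g in range(expr.shape[0]):
        miss = np.isnan(expr[g])
        if not miss.any():
            continue
        cands = []
        for h in range(expr.shape[0]):
            if h == g:
                continue
            both = ~np.isnan(expr[g]) & ~np.isnan(expr[h])
            # require enough shared time points and observed values at the gaps
            if both.sum() < 3 or np.isnan(expr[h][miss]).any():
                continue
            r = np.corrcoef(expr[g, both], expr[h, both])[0, 1]
            if r >= min_corr:
                cands.append((r, h))
        cands.sort(reverse=True)
        neighbors = [h for _, h in cands[:max_neighbors]]
        if neighbors:                        # average the admitted neighbors at the missing time points
            expr[g, miss] = expr[neighbors][:, miss].mean(axis=0)
    return expr
```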

On the Use of Weighted k-Nearest Neighbors for Missing Value Imputation (Weighted k-Nearest Neighbors를 이용한 결측치 대치)

  • Lim, Chanhui;Kim, Dongjae
    • The Korean Journal of Applied Statistics / v.28 no.1 / pp.23-31 / 2015
  • The k-Nearest Neighbor (KNN) method is commonly used as a simple imputation method for missing values in statistical analysis. However, when one of the k nearest neighbors is an extreme value or outlier, the KNN method can create bias. In this paper, we propose a Weighted k-Nearest Neighbors (WKNN) imputation method that can compensate for this weakness of KNN. A Monte Carlo simulation study is also conducted to compare the WKNN and KNN methods using a real data set.
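
A minimal sketch of distance-weighted k-NN imputation of a single missing entry: closer donor rows receive larger weights, which dampens the influence of an outlying neighbor relative to the plain KNN average. The inverse-distance weighting and the handling of partially observed rows are illustrative choices, not necessarily the paper's WKNN.

```python
import numpy as np

def wknn_impute_value(X, row, col, k=5):
    """Impute the missing entry X[row, col] as a distance-weighted average of the
    k nearest donor rows (rows observed at `col`), measuring distance over the
    features observed in both rows."""
    target = X[row]
    donors, dists = [], []
    for i in range(X.shape[0]):
        if i == row or np.isnan(X[i, col]):
            continue
        both = ~np.isnan(target) & ~np.isnan(X[i])   # jointly observed features
        if not both.any():
            continue
        donors.append(i)
        dists.append(np.linalg.norm(target[both] - X[i, both]))
    order = np.argsort(dists)[:k]                    # k nearest donors
    d = np.array(dists)[order]
    w = 1.0 / (d + 1e-12)                            # inverse-distance weights
    values = X[np.array(donors)[order], col]
    return float(np.sum(w * values) / np.sum(w))
```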