• Title/Abstract/Keywords: Ensemble clustering

Search results: 37 items

A new Ensemble Clustering Algorithm using a Reconstructed Mapping Coefficient

  • Cao, Tuoqia; Chang, Dongxia; Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 7 / pp.2957-2980 / 2020
  • Ensemble clustering typically integrates multiple base partitions to obtain a more accurate clustering result than any single partition. However, an inevitable problem exists: the transformation from the original space to the integrated space is incomplete. In this paper, a novel ensemble clustering algorithm using a newly reconstructed mapping coefficient (ECRMC) is proposed. In the algorithm, a reconstructed mapping coefficient between objects and micro-clusters is designed, based on the principle of increasing information entropy, to enhance the effective information. This reduces the information loss in the transformation from micro-clusters back to the original space. The correlation between micro-clusters is then computed using the Spearman coefficient. As a result, the revised co-association graph between objects can be built more accurately, because the supplementary information ensures the completeness of the whole conversion process. Experimental results demonstrate that the ECRMC clustering algorithm achieves high performance, effectiveness, and feasibility.
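
As a rough illustration of the ingredients this abstract mentions (micro-clusters from multiple base partitions, Spearman correlation between micro-clusters, a revised co-association graph), the following Python sketch builds a generic correlation-weighted co-association ensemble. It is not the authors' ECRMC algorithm; the weighting scheme, data, and consensus step are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# 1. Generate diverse base partitions; every cluster of every partition is a micro-cluster.
rng = np.random.default_rng(0)
memberships = []
for seed in range(20):
    k = int(rng.integers(5, 15))
    labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
    memberships.append(np.eye(k)[labels])          # object-by-micro-cluster indicator
B = np.hstack(memberships)

# 2. Spearman correlation between micro-clusters as supplementary information.
rho, _ = spearmanr(B)

# 3. Co-association between objects, weighted by the (non-negative) micro-cluster correlation.
S = B @ np.clip(rho, 0, None) @ B.T
S /= S.max()

# 4. Consensus partition from the co-association (similarity) graph.
consensus = SpectralClustering(n_clusters=4, affinity="precomputed",
                               random_state=0).fit_predict(S)
print(np.bincount(consensus))
```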

Design and Implementation of the Ensemble-based Classification Model by Using k-means Clustering

  • Song, Sung-Yeol; Khil, A-Ra
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 10 / pp.31-38 / 2015
  • In this paper, we propose an ensemble-based classification model that extracts only new data patterns from streaming data by clustering, and generates new classification models to be added to the ensemble, in order to reduce the amount of data labeling while maintaining the accuracy of the existing system. The proposed technique clusters similarly patterned data from the stream and labels each cluster once a certain amount of data has been gathered. It applies the k-NN technique to each classification model unit so as to maintain the accuracy of the existing system while using only a small amount of data. Simulation results on benchmark data show that, by using clustering, the proposed technique is efficient, requiring about 3% less data than the existing technique.
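
The sketch below illustrates the general idea only, not the paper's exact procedure: streamed chunks are clustered with k-means, only one representative per cluster is labeled (the true label stands in for a human labeler here), a small k-NN component is trained per chunk, and the components vote.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
chunks = np.array_split(np.arange(len(X)), 10)   # simulate a data stream

ensemble = []
for idx in chunks[:-1]:
    Xc, yc = X[idx], y[idx]
    # Cluster the buffered chunk into candidate patterns.
    km = KMeans(n_clusters=6, n_init=5, random_state=0).fit(Xc)
    # Label each cluster by querying only the sample nearest to its center.
    reps, rep_labels = [], []
    for c in range(6):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(Xc[members] - km.cluster_centers_[c], axis=1)
        rep = members[np.argmin(d)]
        reps.append(Xc[rep])
        rep_labels.append(yc[rep])
    # Each ensemble component is a small k-NN built from the labeled representatives.
    knn = KNeighborsClassifier(n_neighbors=1).fit(np.array(reps), rep_labels)
    ensemble.append(knn)

# Majority vote of the components on a held-out chunk.
Xtest, ytest = X[chunks[-1]], y[chunks[-1]]
votes = np.stack([m.predict(Xtest) for m in ensemble])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("held-out accuracy:", (pred == ytest).mean())
```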

An Efficient Deep Learning Ensemble Using a Distribution of Label Embedding

  • Park, Saerom
    • 한국컴퓨터정보학회논문지 / Vol. 26, No. 1 / pp.27-35 / 2021
  • In this study, we propose a new stacking ensemble methodology for deep learning models that reflects the distribution of label embeddings. The proposed methodology consists of two steps: training a base deep learning classifier, and training sub-classifiers from the clustering result obtained with the label embeddings extracted from the trained model. The key idea is to divide the given multi-class classification problem into sub-problems using the clustering result. The label embeddings used for clustering are obtained from the weights of the last layer of the initially trained base classifier. Based on the clustering result, as many sub-classifiers as there are clusters are built and trained, each classifying the classes within its cluster. Experimental results confirm that the label embeddings from the base classifier capture the relationships between classes well, and that the ensemble methodology built on them improves classification performance on the CIFAR-100 dataset.
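
A minimal sketch of the idea, using scikit-learn's MLPClassifier in place of a deep network: the columns of the base model's last weight matrix serve as label embeddings, the classes are grouped by k-means on those embeddings, and one sub-classifier per group refines the base prediction. The dataset, group count, and architecture are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# 1. Base classifier; its last-layer weights give one embedding per class
#    (classes are 0..9, so a predicted label directly indexes its embedding row).
base = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(Xtr, ytr)
label_emb = base.coefs_[-1].T          # shape: (n_classes, hidden_dim)

# 2. Group the classes by clustering their embeddings.
n_groups = 3
group_of_class = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(label_emb)

# 3. One sub-classifier per group, trained only on that group's classes.
subs = {}
for g in range(n_groups):
    classes = np.where(group_of_class == g)[0]
    mask = np.isin(ytr, classes)
    if classes.size > 1:
        subs[g] = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                random_state=0).fit(Xtr[mask], ytr[mask])

# 4. Predict: the base model picks the group, the group's sub-classifier refines it.
base_pred = base.predict(Xte)
final = base_pred.copy()
for i, p in enumerate(base_pred):
    g = group_of_class[p]
    if g in subs:
        final[i] = subs[g].predict(Xte[i:i + 1])[0]
print("base acc:", (base_pred == yte).mean(), "ensemble acc:", (final == yte).mean())
```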

다구찌 디자인을 이용한 앙상블 및 군집분석 분류 성능 비교 (Comparing Classification Accuracy of Ensemble and Clustering Algorithms Based on Taguchi Design)

  • 신형원; 손소영
    • 대한산업공학회지 / Vol. 27, No. 1 / pp.47-53 / 2001
  • In this paper, we compare the classification performance of ensemble and clustering algorithms (Data Bagging, Variable Selection Bagging, Parameter Combining, Clustering) with that of logistic regression, taking various characteristics of the input data into consideration. Four factors are used to simulate the logistic model: (1) correlation among input variables, (2) variance of observations, (3) training data size, and (4) the input-output function. Since the relationship between input and output is unknown in practice, we treat the input-output function as a noise factor in a Taguchi design to improve the practicality of our results. The experimental results indicate the following: when the level of variance is medium, Bagging and Parameter Combining perform worse than Logistic Regression, Variable Selection Bagging, and Clustering; however, the classification performances of Logistic Regression, Variable Selection Bagging, Bagging, and Clustering are not significantly different when the variance of the input data is either small or large. When there is strong correlation among the input variables, Variable Selection Bagging outperforms both Logistic Regression and Parameter Combining. In general, the Parameter Combining algorithm performs worst, to our disappointment.
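
To make the comparison concrete, the hedged sketch below simulates one cell of such a study (correlated inputs, a logistic input-output function) and compares a single logistic regression with a variable-selection-bagging ensemble; the factor levels and ensemble size are arbitrary choices, not those of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p, rho = 500, 8, 0.7                      # sample size, inputs, input correlation
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
beta = rng.normal(size=p)
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta)))).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Single logistic regression baseline.
lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

# Variable selection bagging: bootstrap rows and a random variable subset per member.
members = []
for b in range(50):
    rows = rng.integers(0, len(Xtr), len(Xtr))
    cols = rng.choice(p, size=4, replace=False)
    m = LogisticRegression(max_iter=1000).fit(Xtr[rows][:, cols], ytr[rows])
    members.append((m, cols))
prob = np.mean([m.predict_proba(Xte[:, cols])[:, 1] for m, cols in members], axis=0)

print("logistic acc  :", lr.score(Xte, yte))
print("VS-bagging acc:", ((prob > 0.5).astype(int) == yte).mean())
```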


고차원 데이터에서 One-class SVM과 Spectral Clustering을 이용한 이진 예측 이상치 탐지 방법 (A Binary Prediction Method for Outlier Detection using One-class SVM and Spectral Clustering in High Dimensional Data)

  • 박정희
    • 한국멀티미디어학회논문지 / Vol. 25, No. 6 / pp.886-893 / 2022
  • Outlier detection refers to the task of detecting data that deviate significantly from the normal data distribution. Most outlier detection methods compute an outlier score indicating the degree to which a data sample deviates from normal, but setting a threshold on that score to decide whether a sample is an outlier or normal is not trivial. In this paper, we propose a binary prediction method for outlier detection based on spectral clustering and a one-class SVM ensemble. Given training data consisting of normal samples, a clustering method finds clusters in the training data, and an ensemble of one-class SVM models, each trained on one cluster, describes the boundaries of the normal data. We show how to obtain a threshold for transforming the outlier scores computed from the one-class SVM ensemble into binary predictions. Experimental results with high-dimensional text data show that the proposed method can be applied effectively to high-dimensional data, especially when the normal training data consists of clusters of different shapes and densities.
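
A minimal sketch of the general recipe; the clustering algorithm, score combination (maximum decision value over the models), and threshold rule (a low quantile of training scores) are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import OneClassSVM
from sklearn.datasets import make_blobs

# Normal training data consisting of clusters of different spreads.
Xtrain, _ = make_blobs(n_samples=400, centers=3, cluster_std=[0.5, 1.0, 2.0],
                       random_state=0)

labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                            random_state=0).fit_predict(Xtrain)

# One one-class SVM per cluster describes that cluster's boundary.
models = [OneClassSVM(nu=0.05, gamma="scale").fit(Xtrain[labels == c])
          for c in range(3)]

def score(X):
    # Ensemble score: the best (largest) decision value over the models; larger = more normal.
    return np.max(np.stack([m.decision_function(X) for m in models]), axis=0)

# A threshold derived from the training scores turns scores into binary predictions.
threshold = np.quantile(score(Xtrain), 0.01)

Xtest = np.vstack([Xtrain[:10], np.array([[15.0, 15.0], [-20.0, 8.0]])])
is_outlier = score(Xtest) < threshold
print(is_outlier)
```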

K-평균 군집분석을 이용한 동아시아 지역 날씨유형 분류 (Classification of Weather Patterns in the East Asia Region using the K-means Clustering Analysis)

  • 조영준; 이현철; 임병환; 김승범
    • 대기 / Vol. 29, No. 4 / pp.451-461 / 2019
  • Medium-range forecasting depends heavily on ensemble forecast data, but operational weather forecasters do not have enough time to digest all of the detailed features revealed in those data. To utilize ensemble data effectively in medium-range forecasting, this study defines representative weather patterns in East Asia. The k-means clustering analysis is applied to ensure the objectivity of the weather patterns. The input data are daily Mean Sea Level Pressure (MSLP) anomalies from the ERA-Interim reanalysis for 1981~2010 (30 years), provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Using the Explained Variance (EV), the optimal study area is defined as 20~60°N, 100~150°E, and the number of clusters determined by the Explained Cluster Variance (ECV) is thirty (k = 30). The 30 representative weather patterns and their frequencies are summarized. Weather pattern #1 occurred in all seasons, but about 56% of its occurrences fell in summer (June~September), while the relatively rare weather pattern #30 occurred mainly in winter. Additionally, we investigate the relationship between the weather patterns and extreme weather events such as heat waves, cold waves, heavy rainfall, and heavy snowfall. The weather patterns associated with heavy rainfall exceeding 110 mm day-1 were #1, #4, and #9, each with more than 10% of days. Heavy snowfall events exceeding 24 cm day-1 occurred mainly in weather patterns #28 (4%) and #29 (6%). High and low temperature events (> 34℃ and < -14℃) were associated with weather patterns #1~4 (14~18%) and #28~29 (27~29%), respectively. These results suggest that the classified weather patterns can serve as a reference for grouping ensemble forecast data, which will be useful for scenario-based medium-range ensemble forecasting in the future.
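
The clustering step alone can be sketched as follows; random numbers stand in for the ERA-Interim MSLP anomalies (which are not downloaded here), and the 2.5-degree grid resolution is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for ~30 years of daily MSLP anomaly fields on a 20-60N, 100-150E grid.
n_days, n_lat, n_lon = 10950, 17, 21
rng = np.random.default_rng(0)
mslp_anom = rng.normal(size=(n_days, n_lat, n_lon))

X = mslp_anom.reshape(n_days, -1)           # one flattened anomaly field per day
km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)

# Frequency of each weather pattern and its mean (centroid) field.
freq = np.bincount(km.labels_, minlength=30) / n_days
pattern_fields = km.cluster_centers_.reshape(30, n_lat, n_lon)
print("pattern frequencies (%):", np.round(100 * freq, 1))
```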

사전정보를 활용한 앙상블 클러스터링 알고리즘 (An Ensemble Clustering Algorithm based on a Prior Knowledge)

  • 고송; 김대원
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 36, No. 2 / pp.109-121 / 2009
  • Prior knowledge is a factor that can improve clustering performance, but the gain differs depending on how it is used. In particular, when prior knowledge is used to set the initial centers, the similarity among the prior-knowledge samples must be considered. Even samples with the same label may have low mutual similarity, which can cause problems when setting the initial centers, so a method that distinguishes such samples is needed. This paper therefore presents a method that resolves the problem by separating prior-knowledge samples with low similarity. In addition, the prior knowledge separated by similarity is used in various ways to generate diverse clustering results, which are then combined into a single analysis result by an ensemble based on association rules, further improving clustering performance.
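
A hedged sketch of the core idea: labeled prior-knowledge samples seed the initial k-means centers, and a label whose seeds are mutually dissimilar is split into more than one center. The splitting rule (a simple spread threshold) is an illustrative assumption, and the association-rule-based ensemble step described in the abstract is omitted here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=600, centers=4, random_state=2)
rng = np.random.default_rng(0)
seed_idx = rng.choice(len(X), size=20, replace=False)       # prior-knowledge samples

# Build initial centers: one per label, split into two if the label's seeds spread widely.
centers = []
for lab in np.unique(y[seed_idx]):
    pts = X[seed_idx][y[seed_idx] == lab]
    spread = np.linalg.norm(pts - pts.mean(axis=0), axis=1).max()
    if len(pts) >= 2 and spread > 3.0:                      # low-similarity prior: split it
        sub = KMeans(n_clusters=2, n_init=5, random_state=0).fit(pts)
        centers.extend(sub.cluster_centers_)
    else:
        centers.append(pts.mean(axis=0))
centers = np.array(centers)

# Run k-means with the prior-knowledge-derived centers as initialization.
km = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(X)
print("clusters found:", len(np.unique(km.labels_)))
```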

입력자료 군집화에 따른 앙상블 머신러닝 모형의 수질예측 특성 연구 (The Effect of Input Variables Clustering on the Characteristics of Ensemble Machine Learning Model for Water Quality Prediction)

  • 박정수
    • 한국물환경학회지 / Vol. 37, No. 5 / pp.335-343 / 2021
  • Water quality prediction is essential for the proper management of water supply systems. Increased suspended sediment concentration (SSC) has various effects on water supply systems, such as increased treatment cost, and consequently there have been various efforts to develop models for predicting SSC. However, SSC is affected by both the natural and the anthropogenic environment, which makes it challenging to predict. Recently, advanced machine learning models have increasingly been used for water quality prediction. This study developed an ensemble machine learning model to predict SSC using the XGBoost (XGB) algorithm. The discharge (Q) and SSC observed at two field monitoring stations were used to develop the model. The input variables were clustered into two groups, with low and high ranges of Q, using the k-means clustering algorithm, and each group of data was then used separately to optimize XGB (Model 1). The model performance was compared with that of an XGB model using the entire data set (Model 2). The models were evaluated by the root mean squared error-observation standard deviation ratio (RSR) and the root mean squared error. The RSR was 0.51 and 0.57 at the two monitoring stations for Model 2, while the performance improved to an RSR of 0.46 and 0.55, respectively, for Model 1.
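
An illustrative sketch of the two-model setup with synthetic discharge/SSC data (not the study's observations or tuning): k-means splits the records into two discharge regimes, a separate XGBoost regressor is fitted to each regime, and the RSR metric reported in the study is computed.

```python
import numpy as np
from sklearn.cluster import KMeans
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
Q = rng.lognormal(mean=2.0, sigma=1.0, size=1000)          # synthetic discharge
ssc = 5 * Q**1.3 * rng.lognormal(sigma=0.3, size=1000)     # synthetic rating-type SSC relation
X = np.column_stack([Q, np.log(Q)])

# Cluster the inputs into two discharge regimes (low / high Q).
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Model 1: one XGBoost model per regime.
models = {}
for g in range(2):
    models[g] = XGBRegressor(n_estimators=200, max_depth=4,
                             learning_rate=0.1).fit(X[groups.labels_ == g],
                                                    ssc[groups.labels_ == g])

def predict(X_new):
    # The cluster assignment decides which regime model predicts each record.
    g = groups.predict(X_new)
    out = np.empty(len(X_new))
    for k, model in models.items():
        if np.any(g == k):
            out[g == k] = model.predict(X_new[g == k])
    return out

# RSR = RMSE / standard deviation of observations (the metric used in the study).
pred = predict(X)
rsr = np.sqrt(np.mean((pred - ssc) ** 2)) / ssc.std()
print("training RSR:", round(float(rsr), 3))
```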

데이터융합, 앙상블과 클러스터링을 이용한 교통사고 심각도 분류분석 (Data Fusion, Ensemble and Clustering for the Severity Classification of Road Traffic Accident in Korea)

  • 손소영; 이성호
    • 대한산업공학회지 / Vol. 26, No. 4 / pp.354-362 / 2000
  • The increasing amount of road traffic in the 1990s drew much attention in Korea because of its influence on safety. Various types of data analysis are performed to examine the relationship between the severity of road traffic accidents and driving conditions, based on traffic accident records. Accurate results from such accident data analysis can provide crucial information for road accident prevention policy. In this paper, we apply several data fusion, ensemble, and clustering algorithms in an effort to increase the accuracy of individual classifiers for accident severity. The results of an empirical study indicate that clustering works best for road traffic accident severity classification in Korea.


군집화와 유전 알고리즘을 이용한 거친-섬세한 분류기 앙상블 선택 (Coarse-to-fine Classifier Ensemble Selection using Clustering and Genetic Algorithms)

  • 김영원; 오일석
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 34, No. 9 / pp.857-868 / 2007
  • A good classifier ensemble should consist of classifiers that complement one another, so that it achieves high recognition performance, and it should be small so that it is computationally efficient. This paper proposes a coarse-to-fine classifier ensemble selection method to achieve these goals. For the method to succeed, the initial classifier pool must be sufficiently diverse. In this paper, a sufficiently large classifier pool is generated by combining several different classification algorithms with a very large number of feature subsets. The goal of the coarse selection stage is to reduce the size of the classifier pool appropriately: a classifier clustering algorithm shrinks the pool while sacrificing as little diversity as possible. In the fine selection stage, a genetic algorithm is used to find the optimal ensemble, and a hybrid genetic algorithm with improved search performance is also proposed. A comparison of the conventional single-stage method and the proposed two-stage method on widely used handwritten digit databases demonstrates the superiority of the proposed algorithm.
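
A compact sketch of the coarse-to-fine idea on a toy classifier pool: the coarse stage clusters classifiers by their validation outputs and keeps the most accurate member of each cluster, and the fine stage runs a plain genetic algorithm (not the hybrid GA proposed in the paper) over the reduced pool. Pool construction and GA settings are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xval, ytr, yval = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Initial pool: trees trained on random feature subsets (stand-in for a diverse pool).
pool, feats = [], []
for i in range(60):
    f = rng.choice(X.shape[1], size=20, replace=False)
    pool.append(DecisionTreeClassifier(random_state=i).fit(Xtr[:, f], ytr))
    feats.append(f)
val_out = np.stack([m.predict(Xval[:, f]) for m, f in zip(pool, feats)])  # pool x n_val

# Coarse stage: cluster classifiers by output similarity, keep the best one per cluster.
cl = KMeans(n_clusters=15, n_init=10, random_state=0).fit_predict(val_out)
keep = [max(np.where(cl == c)[0], key=lambda i: (val_out[i] == yval).mean())
        for c in range(15)]
out = val_out[keep]                                        # reduced pool outputs

def fitness(mask):
    # Validation accuracy of the majority vote of the selected classifiers.
    if mask.sum() == 0:
        return 0.0
    votes = out[mask.astype(bool)]
    maj = np.apply_along_axis(lambda v: np.bincount(v, minlength=10).argmax(), 0, votes)
    return (maj == yval).mean()

# Fine stage: simple genetic algorithm over subsets of the reduced pool.
popsize, n_bits = 20, len(keep)
popn = rng.integers(0, 2, size=(popsize, n_bits))
for gen in range(30):
    fit = np.array([fitness(ind) for ind in popn])
    parents = popn[np.argsort(fit)[::-1][:popsize // 2]]   # keep the fitter half
    children = []
    for _ in range(popsize - len(parents)):
        a = parents[rng.integers(len(parents))]
        b = parents[rng.integers(len(parents))]
        cut = rng.integers(1, n_bits)                      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_bits) < 0.05                   # bit-flip mutation
        child[flip] = 1 - child[flip]
        children.append(child)
    popn = np.vstack([parents, children])

best = popn[np.argmax([fitness(ind) for ind in popn])]
print("selected size:", int(best.sum()), "val accuracy:", fitness(best))
```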