• Title/Abstract/Keyword: k-mean clustering algorithm

Search results: 119 items

Geodesic Clustering for Covariance Matrices

  • Lee, Haesung;Ahn, Hyun-Jung;Kim, Kwang-Rae;Kim, Peter T.;Koo, Ja-Yong
    • Communications for Statistical Applications and Methods, Vol. 22, No. 4, pp. 321-331, 2015
  • The K-means clustering algorithm is a popular and widely used clustering method. This paper considers a geodesic clustering algorithm for data consisting of symmetric positive definite (SPD) matrices, based on the K-means framework and treating SPD matrices as points on a Riemannian (non-Euclidean) manifold. A K-means clustering algorithm has two main steps, which require a dissimilarity measure between two matrix data points and a way of computing centroids for the observations in each cluster. To use the Riemannian structure, we adopt the geodesic distance and the intrinsic mean for SPD matrices. We demonstrate the proposed method through simulations as well as an application to real financial data.
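
The two ingredients named in the abstract, a geodesic distance between SPD matrices and an intrinsic mean for the centroid step, can be sketched directly. The following is an illustrative sketch only, not the authors' code: it uses the affine-invariant geodesic distance and a log-Euclidean surrogate for the intrinsic (Fréchet) mean, both of which are our own assumptions about reasonable choices.

```python
import numpy as np
from scipy.linalg import expm, inv, logm, sqrtm

def geodesic_dist(A, B):
    """Affine-invariant geodesic distance between two SPD matrices."""
    s = inv(sqrtm(A))
    return np.linalg.norm(logm(s @ B @ s), "fro")

def intrinsic_mean(mats):
    """Log-Euclidean surrogate for the intrinsic (Frechet) mean of SPD matrices."""
    return np.real(expm(np.mean([logm(M) for M in mats], axis=0)))

def geodesic_kmeans(mats, k, n_iter=20, seed=0):
    """K-means for SPD matrices: geodesic distance for assignment, intrinsic mean for update."""
    rng = np.random.default_rng(seed)
    centers = [mats[i] for i in rng.choice(len(mats), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest centroid under the geodesic distance
        labels = np.array([np.argmin([geodesic_dist(M, C) for C in centers])
                           for M in mats])
        # update step: intrinsic mean of each cluster (assumes no empty cluster)
        centers = [intrinsic_mean([mats[i] for i in np.where(labels == j)[0]])
                   for j in range(k)]
    return labels, centers
```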

On hierarchical clustering in sufficient dimension reduction

  • Yoo, Chaeyeon;Yoo, Younju;Um, Hye Yeon;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods, Vol. 27, No. 4, pp. 431-443, 2020
  • The K-means clustering algorithm has been applied successfully in sufficient dimension reduction. Unfortunately, the algorithm lacks reproducibility and nestedness, which will be discussed in this paper. These are clear deficits of the K-means clustering algorithm. The hierarchical clustering algorithm, by contrast, has both reproducibility and nestedness, but an intensive comparison between the K-means and hierarchical clustering algorithms has not yet been done in a sufficient dimension reduction context. In this paper, we rigorously study the two clustering algorithms for two popular sufficient dimension reduction methodologies, the inverse mean and clustering mean methods, through intensive numerical studies. Simulation studies and two real data examples confirm that the hierarchical clustering algorithm has a potential advantage over the K-means algorithm.
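
The reproducibility and nestedness properties mentioned above are easy to illustrate in isolation: K-means depends on its random initialization, whereas agglomerative hierarchical clustering is deterministic and its k-cluster partition is obtained by splitting a cluster of the (k-1)-cluster partition. The sketch below is a generic illustration with scikit-learn, not the authors' sufficient dimension reduction code.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=1)

# K-means: different random initializations can yield different partitions,
# i.e. the result is not reproducible in general.
a = KMeans(n_clusters=4, n_init=1, random_state=0).fit_predict(X)
b = KMeans(n_clusters=4, n_init=1, random_state=7).fit_predict(X)
print("K-means agreement across seeds:", adjusted_rand_score(a, b))

# Hierarchical clustering: deterministic, and the 4-cluster solution is
# nested inside the 3-cluster solution (one cluster is simply split).
h3 = AgglomerativeClustering(n_clusters=3).fit_predict(X)
h4 = AgglomerativeClustering(n_clusters=4).fit_predict(X)
```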

K-means Clustering using Grid-based Representatives

  • Park, Hee-Chang;Lee, Sun-Myung
    • Journal of the Korean Data and Information Science Society, Vol. 16, No. 4, pp. 759-768, 2005
  • K-means clustering has been widely used in many applications, such as pattern analysis, data analysis, and market research. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm requires a long time to obtain k clusters because it is primitive and exploratory. In this paper we propose a new k-means clustering method that uses grid-based representative values (arithmetic and trimmed means) as the sample. It is faster than traditional clustering methods while maintaining accuracy.
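
A minimal sketch of the idea described above (cluster grid-cell representative values instead of the raw data, then assign the original points) follows. It is our own illustration, not the authors' implementation; the grid resolution and the use of the arithmetic mean as the representative are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def grid_representatives(X, n_bins=10):
    """Arithmetic mean of the points falling in each grid cell."""
    mins, maxs = X.min(axis=0), X.max(axis=0)
    cells = np.floor((X - mins) / (maxs - mins + 1e-12) * n_bins).astype(int)
    cells = np.minimum(cells, n_bins - 1)
    groups = {}
    for key, x in zip(map(tuple, cells), X):
        groups.setdefault(key, []).append(x)
    return np.array([np.mean(v, axis=0) for v in groups.values()])

def grid_kmeans(X, k, n_bins=10):
    """Cluster the grid representatives, then assign the original points."""
    reps = grid_representatives(X, n_bins)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reps)
    return km.predict(X)
```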


Image compression using K-mean clustering algorithm

  • Munshi, Amani;Alshehri, Asma;Alharbi, Bayan;AlGhamdi, Eman;Banajjar, Esraa;Albogami, Meznah;Alshanbari, Hanan S.
    • International Journal of Computer Science & Network Security, Vol. 21, No. 9, pp. 275-280, 2021
  • With the development of communication networks, the exchange and transmission of information have developed rapidly. Millions of images are sent via social media every day, and wireless sensor networks are now used in many applications to capture images, such as in traffic lights, roads, and malls. Therefore, there is a need to reduce the size of these images while maintaining an acceptable degree of quality. In this paper, we use Python to apply the K-means clustering algorithm to compress RGB images. PSNR, MSE, and SSIM are used to measure image quality after compression. The compression reduced the images to nearly half the size of the originals using k = 64. In the SSIM measure, the higher the K, the greater the similarity between the two images, which is a good indicator for achieving a significant reduction in image size. The proposed compression technique, powered by the K-means clustering algorithm, is useful for compressing images and reducing their size.
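
The compression described above amounts to color quantization: the RGB pixels are clustered into K colors and each pixel is replaced by its cluster centroid, so the image can be stored as a small palette plus per-pixel indices. A hedged sketch follows; the scikit-learn and scikit-image calls are our choice and not necessarily the libraries used by the authors, and the file name is hypothetical.

```python
import numpy as np
from skimage import io
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)
from sklearn.cluster import KMeans

def kmeans_compress(path, k=64):
    img = io.imread(path)                                    # assumes an RGB image, H x W x 3
    pixels = img.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
    palette = np.round(km.cluster_centers_).astype(np.uint8)  # k-color palette
    return img, palette[km.labels_].reshape(img.shape)        # each pixel -> nearest centroid

original, compressed = kmeans_compress("photo.png", k=64)     # hypothetical file name
print("MSE :", mean_squared_error(original, compressed))
print("PSNR:", peak_signal_noise_ratio(original, compressed))
print("SSIM:", structural_similarity(original, compressed, channel_axis=-1))
```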

Medoid Determination in Deterministic Annealing-based Pairwise Clustering

  • Lee, Kyung-Mi;Lee, Keon-Myung
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 11, No. 3, pp. 178-183, 2011
  • The deterministic annealing-based clustering algorithm is an EM-based algorithm that behaves like the simulated annealing method, yet is less sensitive to parameter initialization. Pairwise clustering is a clustering technique that works from inter-entity distance information alone, without requiring detailed attribute information. The pairwise deterministic annealing-based clustering algorithm repeatedly alternates between estimating mean fields and updating the membership degrees of data objects to clusters until a termination condition holds. Lacking attribute value information, pairwise clustering algorithms do not explicitly determine the centroids or medoids of clusters during the clustering process or at its end. This paper proposes a method to identify medoids as the centers of the formed clusters for the pairwise deterministic annealing-based clustering algorithm. Experimental results show that the proposed method locates meaningful medoids.
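
Since pairwise clustering only sees a distance matrix, a natural medoid for each cluster is the member object whose membership-weighted total distance to the other objects is smallest. The sketch below illustrates only that extraction step under our own simplifying assumptions; it is not the deterministic annealing algorithm itself.

```python
import numpy as np

def find_medoids(D, U):
    """
    D : (n, n) pairwise distance matrix between the n objects
    U : (n, k) membership degrees of the objects to the k clusters
    Returns one medoid index per cluster: the assigned object with the
    smallest membership-weighted total distance to all other objects.
    """
    n, k = U.shape
    hard = np.argmax(U, axis=1)                 # crisp assignment of each object
    medoids = []
    for j in range(k):
        cost = D @ U[:, j]                      # weighted distance of each object to cluster j
        cost = np.where(hard == j, cost, np.inf)
        medoids.append(int(np.argmin(cost)))
    return medoids
```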

국부 확률을 이용한 데이터 분류에 관한 연구 (A Study on Data Clustering Method Using Local Probability)

  • 손창호;최원호;이재국
    • 제어로봇시스템학회논문지, Vol. 13, No. 1, pp. 46-51, 2007
  • In this paper, we propose a new data clustering method using local probability and hypothesis testing. To cluster the test data set, we analyze the local area of the test data using the local probability distribution and decide the candidate classes using the mean, standard deviation, and variance. To decide the class of each test datum, statistical hypothesis testing is applied to the candidate classes of the test data set. For evaluation, the proposed classification method is compared with the conventional fuzzy c-means method, the k-means algorithm, and the discriminant analysis algorithm. The simulation results show higher accuracy than those of the fuzzy c-means method, the k-means algorithm, and the discriminant analysis algorithm.
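
A heavily hedged sketch of the general flow described above (pick candidate classes from local neighborhood statistics, then confirm a class with a hypothesis test) is given below. The neighborhood size, the aggregate z-score, and the significance level are all our assumptions; the paper's actual procedure may differ substantially.

```python
import numpy as np
from scipy.stats import norm

def local_probability_classify(X_train, y_train, x, k=15, alpha=0.05):
    """Hedged illustration: candidate classes come from the local neighborhood
    of x; a class is kept if a crude z-test does not reject that x follows
    that class's local distribution."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                          # local area around x
    best, best_p = None, -1.0
    for c in np.unique(y_train[idx]):                # candidate classes
        Xc = X_train[idx][y_train[idx] == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0) + 1e-9
        z = np.abs((x - mu) / sigma).mean()          # aggregate standardized distance
        p = 2 * (1 - norm.cdf(z))                    # two-sided p-value
        if p > best_p:
            best, best_p = c, p
    return best if best_p > alpha else None
```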

Design and Comparison of Error Correctors Using Clustering in Holographic Data Storage System

  • Kim, Sang-Hoon;Kim, Jang-Hyun;Yang, Hyun-Seok;Park, Young-Pil
    • 제어로봇시스템학회 Conference Proceedings, ICCAS 2005, pp. 1076-1079, 2005
  • Data storage, which involves writing and retrieving data, requires high storage capacity, a fast transfer rate, and short access time. No data storage system today satisfies all of these conditions, but a holographic data storage system can achieve a faster data transfer rate because it is a page-oriented memory system that uses volume holograms for writing and retrieving data. A system architecture without mechanically actuated parts is possible, so a fast data transfer rate and a high storage capacity of about 1 Tb/cm³ can be realized. In this paper, to correct errors in binary data stored in a holographic digital data storage system, we find cluster centers using a clustering algorithm and reduce the intensities of the pixels around those centers. We carry out this procedure with two algorithms, C-means and subtractive clustering, and compare their results. By using a proper clustering algorithm, the intensity profile of the data page becomes uniform and a better data storage system can be realized.
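
The correction step described above, locating overly bright cluster centers in the retrieved data page and reducing pixel intensities around them, can be prototyped with a plain fuzzy C-means pass over the coordinates of bright pixels. This is a stand-in sketch under our own assumptions (thresholding rule, attenuation factor, number of clusters), not the authors' implementation.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy C-means on 2-D points X; returns the cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        D = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / D ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers

def attenuate_around_centers(page, centers, radius=2, factor=0.8):
    """Reduce intensities of pixels within `radius` of each cluster center."""
    out = page.astype(float).copy()
    yy, xx = np.mgrid[:page.shape[0], :page.shape[1]]
    for cy, cx in centers:
        out[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] *= factor
    return out

# usage sketch (page = retrieved data page as a 2-D intensity array):
#   bright = np.argwhere(page > page.mean() + 2 * page.std())   # assumed threshold
#   centers = fuzzy_cmeans(bright.astype(float), c=10)
#   corrected = attenuate_around_centers(page, centers)
```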


K-means Clustering using Grid-based Representatives

  • 박희창;이선명
    • 한국데이터정보과학회 Conference Proceedings, 2003 Fall Conference, pp. 229-238, 2003
  • K-means clustering has been widely used in many applications, such as pattern analysis, data analysis, and market research. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm requires a long time to obtain k clusters because it is primitive and exploratory. In this paper we propose a new k-means clustering method that uses grid-based representative values (arithmetic and trimmed means) as the sample. It is faster than traditional clustering methods while maintaining accuracy.


클러스터링 성능평가: 신경망 및 통계적 방법 (A Study on Performance Evaluation of Clustering Algorithms using Neural and Statistical Method)

  • 윤석환;신용백
    • 기술사, Vol. 29, No. 2, pp. 71-79, 1996
  • This paper evaluates the clustering performance of a neural network method and a statistical method. The algorithms used are GLVQ (Generalized Learning Vector Quantization) as the neural method and the k-means algorithm as the statistical clustering method. To compare the two methods, we calculate Rand's c statistic. As a result, the mean c value obtained with GLVQ is higher than that obtained with the k-means algorithm, while the standard deviation of the c value is lower. The experimental data sets were Fisher's iris data and patterns extracted from handwritten numerals.
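
Rand's c statistic scores the agreement between two partitions by the fraction of point pairs that are treated consistently (grouped together in both, or separated in both). GLVQ is not sketched here; the snippet below only illustrates computing the Rand index for a k-means partition of Fisher's iris data against the true classes.

```python
from itertools import combinations

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def rand_index(labels_a, labels_b):
    """Rand's c statistic: fraction of point pairs treated the same way by both partitions."""
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += int(same_a == same_b)
        total += 1
    return agree / total

X, y = load_iris(return_X_y=True)                 # Fisher's iris data, as in the paper
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Rand index vs. true classes:", rand_index(pred, y))
```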


클러스터 타당성 평가기준을 이용한 최적의 클러스터 수 결정을 위한 고속 탐색 알고리즘 (Fast Search Algorithm for Determining the Optimal Number of Clusters using Cluster Validity Index)

  • 이상욱
    • 한국콘텐츠학회논문지, Vol. 9, No. 9, pp. 80-89, 2009
  • We introduce an efficient fast search algorithm for determining the optimal number of clusters in a clustering algorithm. The proposed method is based on a cluster validity index, which is used as a measure of clustering fitness. When the clustering process on a data set reaches the optimal cluster configuration, the cluster validity index is expected to attain its maximum or minimum value. In this paper, we design a fast, non-exhaustive search method for finding the optimal number of clusters and combine it with the actual clustering process. The proposed algorithm was applied to the k-means++ clustering algorithm, and the CB and PBM validity indices were used as the cluster validity criteria. Experiments on several synthetic and real data sets show that the proposed method dramatically improves computational efficiency without loss of accuracy.
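
The baseline that the paper speeds up is an exhaustive sweep: run k-means++ for every candidate k and score each partition with a validity index. The sketch below shows only that baseline, using the Calinski-Harabasz index as a stand-in validity criterion (the paper's CB and PBM indices are not implemented here); the paper's contribution is to avoid evaluating every k.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import calinski_harabasz_score

X, _ = make_blobs(n_samples=500, centers=5, random_state=3)

# Exhaustive baseline: score every candidate k with a validity index.
# A fast search, as proposed in the paper, would avoid visiting every k.
scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit_predict(X)
    scores[k] = calinski_harabasz_score(X, labels)

best_k = max(scores, key=scores.get)
print("best k by the validity index:", best_k)
```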