• Title/Summary/Keyword: Data Partition Algorithm


An Energy Efficient Unequal Clustering Algorithm for Wireless Sensor Networks (무선 센서 네트워크에서의 에너지 효율적인 불균형 클러스터링 알고리즘)

  • Lee, Sung-Ju;Kim, Sung-Chun
    • The KIPS Transactions:PartC
    • /
    • v.16C no.6
    • /
    • pp.783-790
    • /
    • 2009
  • Interest in wireless sensor networks has grown in recent years, and many studies have addressed them. Clustering algorithms provide an effective way to prolong the lifetime of a wireless sensor network. The one-hop routing of the LEACH algorithm is inefficient in terms of cluster-head energy consumption, because each cluster-head transmits its data to the BS (Base Station) in a single hop. Other clustering algorithms transmit data to the BS over multiple hops, since multi-hop transmission is more energy efficient; however, their multi-hop routing suffers from a data bottleneck problem. Unequal clustering algorithms solve the bottleneck problem by increasing the number of routing paths: most of them partition the nodes into clusters of unequal size, with clusters closer to the BS being smaller than those farther away. However, the cluster-heads in unequal clustering algorithms consume more energy than those in other clustering algorithms. In this thesis, I propose an energy-efficient unequal clustering algorithm that reduces cluster-head energy consumption and solves the data bottleneck problem. The basic idea has three parts: first, the election of an appropriate cluster-head; second, the decision of the cluster size, considering the distance from the BS, the energy state of the node, and the number of neighboring nodes; and third, the election of an assistant node that takes over the transmission function of the cluster-head. As a result, the energy consumption of the cluster-heads, and of the network as a whole, is minimized.
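
As a rough illustration of the cluster-size decision described above, a candidate cluster-head's competition radius could combine its distance to the BS, its residual energy, and its neighbor count. This is only a hedged sketch: the weights, the normalization, and the function `cluster_radius` are assumptions for illustration, not the formula from the paper.

```python
def cluster_radius(dist_to_bs, residual_energy, num_neighbors,
                   d_min, d_max, e_max, n_max, r_max,
                   w_d=0.5, w_e=0.3, w_n=0.2):
    """Competition radius of a candidate cluster-head (illustrative only).

    Nodes close to the BS get a smaller radius (smaller clusters), so they keep
    energy in reserve for relaying multi-hop traffic toward the BS; higher
    residual energy and more neighbors allow a larger cluster.
    """
    d = (dist_to_bs - d_min) / (d_max - d_min) if d_max > d_min else 0.0
    e = residual_energy / e_max
    n = num_neighbors / n_max
    return r_max * (w_d * d + w_e * e + w_n * n)

# Example: a node 40 m from the BS, half its energy left, 8 of at most 20 neighbors.
print(cluster_radius(40, 0.5, 8, d_min=20, d_max=120, e_max=1.0, n_max=20, r_max=30))
```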

A Big Data Analysis by Between-Cluster Information using k-Modes Clustering Algorithm (k-Modes 분할 알고리즘에 의한 군집의 상관정보 기반 빅데이터 분석)

  • Park, In-Kyoo
    • Journal of Digital Convergence
    • /
    • v.13 no.11
    • /
    • pp.157-164
    • /
    • 2015
  • This paper describes subspace clustering of categorical data for convergence and integration. Because conventional evaluation measures are designed for numerical data, they have limitations for categorical data owing to the absence of ordering, high dimensionality, and scarcity of frequency. Hence, a conditional entropy measure is proposed to evaluate the cohesion among attributes within each cluster. We propose a new objective function that reflects the optimal clustering, so that within-cluster dispersion is minimized and between-cluster separation is enhanced. We performed experiments on five real-world datasets, comparing the performance of our algorithm with four other algorithms using three evaluation metrics: accuracy, F-measure, and adjusted Rand index. According to the experiments, the proposed algorithm outperforms the compared algorithms on the considered metrics.
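
For context, a bare-bones k-Modes loop (plain simple-matching dissimilarity, not the conditional-entropy objective proposed in the paper) looks roughly like the sketch below; `k_modes` and the toy data are illustrative assumptions.

```python
import random
from collections import Counter

def k_modes(data, k, iters=20, seed=0):
    """Plain k-Modes clustering for categorical rows (illustrative sketch)."""
    rng = random.Random(seed)
    modes = rng.sample(data, k)                               # initial cluster modes
    dissim = lambda a, b: sum(x != y for x, y in zip(a, b))   # simple matching distance
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in data:                                      # assignment step
            clusters[min(range(k), key=lambda c: dissim(row, modes[c]))].append(row)
        for c, rows in enumerate(clusters):                   # mode-update step
            if rows:
                modes[c] = tuple(Counter(col).most_common(1)[0][0]
                                 for col in zip(*rows))
    return modes, clusters

data = [("red", "S"), ("red", "M"), ("blue", "L"), ("blue", "L"), ("green", "S")]
modes, clusters = k_modes(data, k=2)
print(modes)
```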

Bin Packing Algorithm for Equitable Partitioning Problem with Skill Levels (기량수준 동등분할 문제의 상자 채우기 알고리즘)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.209-214
    • /
    • 2020
  • The equitable partitioning problem (EPP) comes in two variants: binary [0/1] skill existence/nonexistence and integer skill levels such as [1,2,3,4,5]. A well-known polynomial-time algorithm finds the optimal solution for the binary-skill EPP. For the integer-skill-level EPP, however, no polynomial-time algorithm is known and the problem is NP-hard, so tabu search, a kind of metaheuristic, has been applied. This paper suggests a polynomial-time greedy heuristic algorithm to find the optimal solution for the integer-skill-level EPP. The algorithm sorts the skill-level frequencies of each field in descending order, decides a lower bound (LB) that is no less than the number of groups, packs a bin for each group first, and then additionally allocates the students below the LB to each bin. On the experimental data, this algorithm shows a performance improvement over the result of tabu search.
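
The packing idea can be illustrated with a simpler greedy variant: sort students by skill level and always assign the next one to the group with the smallest total. This is a hedged sketch, not the paper's LB-based procedure; the function name and data are assumptions.

```python
def equitable_partition(skills, k):
    """Greedy round-robin packing of integer skill levels into k groups.

    Simplified sketch: sort skill levels in descending order, then always add
    the next student to the group with the lowest running total, so the group
    sums stay as equal as possible.
    """
    groups = [[] for _ in range(k)]
    totals = [0] * k
    for level in sorted(skills, reverse=True):
        g = totals.index(min(totals))     # group with the smallest current sum
        groups[g].append(level)
        totals[g] += level
    return groups, totals

skills = [5, 5, 4, 4, 3, 3, 3, 2, 2, 1, 1, 1]
groups, totals = equitable_partition(skills, k=3)
print(totals)   # [12, 11, 11] - group sums nearly equal
```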

Fuzzy Clustering Algorithm to Predict Cancer Class Using Gene Expression Data (유전자 발현 데이터를 이용한 암의 클래스 예측을 위한 퍼지 클러스터링 알고리즘)

  • Won, Hong-Hee;Yoo, Si-Ho;Cho, Sung-Bae
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.757-759
    • /
    • 2003
  • Because the appropriate treatment can differ greatly between subclasses even within the same type of cancer, predicting the cancer class is very important for accurate treatment. Previous studies on cancer classification from gene expression data have used hard clustering, which assigns each data point to exactly one cluster (a hard partition). However, real-world data such as gene expression cancer data are generally hard to separate cleanly, and the boundaries between clusters are not distinct, so hard clustering can lose information about the data; fuzzy clustering, by contrast, allows each data point to belong to several clusters according to its degree of membership, minimizing this loss. In this paper, we therefore apply fuzzy c-means clustering, a representative fuzzy clustering method, to predict cancer classes, and verify the performance of fuzzy clustering by comparing it with various hard clustering methods.
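
A generic fuzzy c-means loop (standard FCM only; the paper's datasets and comparison methods are not reproduced here) illustrates the soft-membership idea that distinguishes it from hard clustering. The function and toy data are assumptions for illustration.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Generic fuzzy c-means: each sample gets a membership degree in every cluster."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / ((dist ** p) * (1.0 / dist ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 4])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(U[:3], 2))   # soft memberships of the first three samples
```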

IMAGE SYNTHESIS FOR DYNAMIC SCENES

  • Feng, Chen-Chin;Chang, Su-Yuan;Yang, Shi-Nine
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.15.1-21
    • /
    • 1999
  • The radiosity method is a global illumination model for image synthesis. It computes all energy interactions among diffuse elements in a virtual environment. One of its major drawbacks is its time-consuming computation, and existing radiosity algorithms for static scenes are difficult to apply to dynamic environments. In this paper we propose a hierarchical scene partition scheme to speed up the link update computations in dynamic environments. Since the proposed spatial data structure is global, it can be used not only to speed up the culling of non-affected links after a geometry change, but also to accelerate the subsequent visibility computation. Several empirical tests are given to show the efficiency of our improved algorithm.
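
A hedged sketch of the link-culling idea: keep the links in a hierarchy of bounding boxes, and after a geometry change visit only the subtrees whose boxes overlap the moved object. The `AABB`/`Node` types and `affected_links` are illustrative assumptions, not the paper's data structure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AABB:
    lo: tuple
    hi: tuple
    def intersects(self, other: "AABB") -> bool:
        return all(self.lo[i] <= other.hi[i] and other.lo[i] <= self.hi[i]
                   for i in range(3))

@dataclass
class Node:
    box: AABB
    children: List["Node"]
    links: List[int]          # ids of radiosity links whose shafts pass through this box

def affected_links(node: Node, moved: AABB, out: Optional[set] = None) -> set:
    """Collect only the links whose region overlaps the moved object's box."""
    out = set() if out is None else out
    if node.box.intersects(moved):
        out.update(node.links)
        for child in node.children:
            affected_links(child, moved, out)
    return out

root = Node(AABB((0, 0, 0), (10, 10, 10)), [], [0, 1])
print(affected_links(root, AABB((1, 1, 1), (2, 2, 2))))   # {0, 1}
```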

Optimization of Layout Design in an AS/RS for Maximizing its Throughput Rate

  • Yang, M.H.
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.18 no.2
    • /
    • pp.109-121
    • /
    • 1992
  • In this paper, we address a layout design problem for determining a K-class-based dedicated storage layout in an automated storage/retrieval system (AS/RS). K-class-based dedicated storage employs K zones, in each of which lots from one class of products are stored randomly; the zones form a partition of the storage locations. Our objective is to minimize the expected single-command travel time, which is expressed as a set function of the space requirements of the zones, the average demand rates of the classes, and the one-way travel times from the pickup/deposit station to the locations. We construct a heuristic algorithm based on analytical results and a local search method. The methodology developed can be used, with easily available data, by warehouse planners to improve the throughput capacity of a conventional warehouse as well as an AS/RS.
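
The objective described above can be evaluated directly once a zone assignment is fixed; below is a minimal sketch in which the function name, the uniform-within-zone assumption, and the toy data are assumptions for illustration.

```python
def expected_sc_travel_time(zones, demand_rate, travel_time):
    """Expected single-command travel time for a K-class dedicated storage layout.

    zones[k]       : list of storage locations assigned to class k
    demand_rate[k] : average demand rate of class k (fraction of all commands)
    travel_time[l] : one-way travel time from the P/D station to location l

    Within a zone, locations are assumed equally likely, so each zone's mean
    one-way time is weighted by the class demand rate; a single command travels
    out and back, hence the factor of two.
    """
    total = 0.0
    for k, locs in enumerate(zones):
        mean_one_way = sum(travel_time[l] for l in locs) / len(locs)
        total += demand_rate[k] * 2.0 * mean_one_way
    return total

travel_time = {0: 5, 1: 7, 2: 9, 3: 12, 4: 15, 5: 20}
zones = [[0, 1], [2, 3], [4, 5]]          # closest locations go to the busiest class
demand_rate = [0.6, 0.3, 0.1]
print(expected_sc_travel_time(zones, demand_rate, travel_time))
```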

Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • Nguyen, Ngoc;Kang, Myeong-Su;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartB
    • /
    • v.19B no.1
    • /
    • pp.19-26
    • /
    • 2012
  • The rapid increase of information imposes new demands on content management. The purpose of automatic audio segmentation and classification is to meet this rising need for efficient content management. For this reason, this paper proposes a high-accuracy algorithm that segments audio signals and classifies them into different classes such as speech, music, silence, and environment sounds. The proposed algorithm utilizes a support vector machine (SVM) to detect audio-cuts, which are boundaries between different kinds of sounds, using the parameter sequence. We then extract feature vectors composed of statistical data, which are used as the input of a fuzzy c-means (FCM) classifier to partition audio segments into different classes. To evaluate the proposed SVM-FCM based algorithm, we consider precision and recall rates for segmentation, and accuracy for classification. Furthermore, we compare the proposed algorithm with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall rates.
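
A skeleton of the segmentation stage, using scikit-learn's SVM as the audio-cut detector on dummy frame features; the features, labels, and threshold here are placeholders, not the paper's parameter sequence.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative only: each frame is represented by a feature vector, and the SVM
# labels frames where the sound type changes (1) versus continues (0).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))                        # frame-difference features
y_train = (np.abs(X_train).sum(axis=1) > 4).astype(int)    # dummy "audio-cut" labels

cut_detector = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

X_stream = rng.normal(size=(50, 4))
cuts = np.flatnonzero(cut_detector.predict(X_stream))      # candidate segment boundaries
segments = np.split(np.arange(50), cuts)
print(len(segments), "segments; each would then be classified by FCM into "
      "speech / music / silence / environment sound")
```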

Fuzzy Inference Systems Based on FCM Clustering Algorithm for Nonlinear Process (비선형 공정을 위한 FCM 클러스터링 알고리즘 기반 퍼지 추론 시스템)

  • Park, Keon-Jun;Kang, Hyung-Kil;Kim, Yong-Kab
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.5 no.4
    • /
    • pp.224-231
    • /
    • 2012
  • In this paper, we introduce a fuzzy inference system based on the fuzzy c-means (FCM) clustering algorithm for fuzzy modeling of nonlinear processes. Typically, the generation of fuzzy rules for nonlinear processes has the problem that the number of fuzzy rules increases exponentially. To solve this problem, the fuzzy rules of the fuzzy model are generated by partitioning the input space in scatter form using the FCM clustering algorithm. The premise parameters of the fuzzy rules are determined from the membership matrix produced by the FCM clustering algorithm. The consequent part of the rules is expressed in the form of polynomial functions, and the coefficient parameters of each rule are determined by the standard least-squares method. Finally, we evaluate the performance and the nonlinear characteristics using data widely used for nonlinear processes.
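
A compact sketch of this kind of model: FCM-style memberships serve as the rule firing strengths (premise part), and linear consequents are fitted by least squares. The centers are fixed by hand here and all names and data are illustrative assumptions; in the paper the centers come from FCM clustering of the input space.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Membership of every sample in every cluster (premise part of the rules)."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    p = 2.0 / (m - 1.0)
    return 1.0 / ((dist ** p) * (1.0 / dist ** p).sum(axis=1, keepdims=True))

def fit_consequents(X, y, U):
    """Least-squares fit of one linear consequent per rule, weighted by membership."""
    n, d = X.shape
    c = U.shape[1]
    Phi = np.hstack([U[:, [j]] * np.hstack([np.ones((n, 1)), X]) for j in range(c)])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta.reshape(c, d + 1)            # one (bias, coefficients) row per rule

def predict(X, centers, theta, m=2.0):
    U = fcm_memberships(X, centers, m)
    Xa = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.sum(U * (Xa @ theta.T), axis=1)

# Toy nonlinear process: y = sin(x0) + 0.5*x1.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
centers = np.array([[-2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
theta = fit_consequents(X, y, fcm_memberships(X, centers))
print(np.mean((predict(X, centers, theta) - y) ** 2))   # training MSE of the sketch
```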

Low-Power Multiplier Using Input Data Partition (입력 데이터 분할을 이용한 저전력 부스 곱셈기 설계)

  • Park Jongsu;Kim Jinsang;Cho Won-Kyung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.11A
    • /
    • pp.1092-1097
    • /
    • 2005
  • In this paper, we propose a low-power Booth multiplier that reduces the switching activity of the partial products during the multiplication process. The radix-4 Booth algorithm produces zero-valued Booth-encoded partial products when the input bits have sequentially equal values (runs of 0s or 1s). Therefore, the partial products are more likely to be zero when, of the two multiplication inputs, the one with the smaller effective dynamic range is used as the multiplier rather than the multiplicand. The proposed multiplier divides a multiplication into several multiplications on fewer bits than the original input data, computes each multiplication independently for the Booth encoding, and finally adds the results. This gives the proposed multiplier a higher chance of producing zero-encoded products, so a low-power multiplier with less switching activity can be implemented. Implementation results show that the proposed multiplier can save up to about 20% of power dissipation compared with a previous Booth multiplier.
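
The recoding behavior the abstract relies on (runs of equal bits recode to zero digits) can be checked with a short functional model. This is only a software sketch of radix-4 Booth recoding, not the proposed hardware architecture, and `booth_multiply` is an illustrative helper.

```python
BOOTH_R4 = {0: 0, 1: 1, 2: 1, 3: 2, 4: -2, 5: -1, 6: -1, 7: 0}

def booth_radix4_digits(y, bits):
    """Radix-4 Booth recoding of a 'bits'-wide two's-complement multiplier.

    Overlapping bit triplets (y[2i+1], y[2i], y[2i-1]) map to digits in
    {-2,-1,0,1,2}; triplets 000 and 111 (runs of equal bits) recode to 0, so a
    multiplier with long constant runs yields many zero partial products.
    """
    pattern = (y & ((1 << bits) - 1)) << 1        # append the implicit y[-1] = 0
    return [BOOTH_R4[(pattern >> (2 * i)) & 0b111] for i in range(bits // 2)]

def booth_multiply(x, y, bits=16):
    """Functional check: the sum of digit-weighted partial products equals x*y."""
    digits = booth_radix4_digits(y, bits)
    return sum(d * x * (4 ** i) for i, d in enumerate(digits))

print(booth_radix4_digits(0b0000000011110000, 16))   # mostly zero digits
print(booth_multiply(123, 456), 123 * 456)           # both 56088
```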

Extended Information Entropy via Correlation for Autonomous Attribute Reduction of BigData (빅 데이터의 자율 속성 감축을 위한 확장된 정보 엔트로피 기반 상관척도)

  • Park, In-Kyu
    • Journal of Korea Game Society
    • /
    • v.18 no.1
    • /
    • pp.105-114
    • /
    • 2018
  • The various data analysis methods used for customer type analysis are very important for game companies to understand customer types and characteristics, in order to plan customized content for customers and to provide more convenient services. In this paper, we propose a k-modes cluster analysis algorithm that uses information uncertainty, extending information entropy to reduce information loss. The similarity of attributes is therefore measured in two respects: one is the uncertainty between each attribute and the center of each partition, and the other is the uncertainty of the probability distribution of each attribute. In particular, attribute uncertainty is considered on both non-probabilistic and probabilistic scales, because the entropy of an attribute is transformed into probabilistic information to measure its uncertainty. The accuracy of the algorithm is verified against the result of cluster analysis based on optimal initial values, through extensive performance analysis with various indexes.
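
For illustration, a plain Shannon-entropy score per categorical attribute (generic entropy only; the paper's extended, correlation-based measure is more elaborate) shows how low-information attributes can be flagged for reduction. The data and names are assumptions.

```python
import math
from collections import Counter

def attribute_entropy(values):
    """Shannon entropy of one categorical attribute's value distribution (in bits)."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rank_attributes(rows, names):
    """Rank attributes by entropy; near-constant (low-entropy) attributes carry
    little information and are candidates for reduction."""
    cols = list(zip(*rows))
    return sorted(zip(names, (attribute_entropy(col) for col in cols)),
                  key=lambda t: t[1])

rows = [("A", "x", "high"), ("A", "y", "low"), ("A", "x", "mid"), ("A", "z", "low")]
print(rank_attributes(rows, ["region", "device", "spend"]))
# "region" has entropy 0.0 (always "A") and would be dropped first
```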