• Title/Summary/Keyword: 데이터 이산화

Search results: 151

A Comparative Study on Discretization Algorithms for Data Mining (데이터 마이닝을 위한 이산화 알고리즘에 대한 비교 연구)

  • Choi, Byong-Su; Kim, Hyun-Ji; Cha, Woon-Ock
    • Communications for Statistical Applications and Methods, v.18 no.1, pp.89-102, 2011
  • The discretization process, which converts continuous attributes into discrete ones, is a preprocessing step in data mining tasks such as classification. Some classification algorithms can handle only discrete attributes. The purpose of discretization is to obtain discretized data without losing the information in the original data, and to achieve high predictive accuracy when the discretized data are used for classification. Many discretization algorithms have been developed. This paper presents the results of our comparative study of recently proposed representative discretization algorithms from the viewpoints of splitting versus merging and supervised versus unsupervised. We implemented R code for the discretization algorithms and made it publicly available.
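  As a concrete point of reference for the splitting-versus-merging and supervised-versus-unsupervised taxonomy used above, a minimal base-R sketch of two common unsupervised discretizers (equal-width and equal-frequency) could look as follows; the function names are illustrative and are not taken from the paper's published R code.

      # Two unsupervised discretizers (illustrative sketch, not the paper's code).

      # Equal-width: k intervals of identical length over the observed range.
      discretize_equal_width <- function(x, k = 4) {
        breaks <- seq(min(x), max(x), length.out = k + 1)
        cut(x, breaks = breaks, include.lowest = TRUE)
      }

      # Equal-frequency: k intervals holding roughly the same number of samples.
      discretize_equal_freq <- function(x, k = 4) {
        breaks <- unique(quantile(x, probs = seq(0, 1, length.out = k + 1)))
        cut(x, breaks = breaks, include.lowest = TRUE)
      }

      # Example on Fisher's iris data, which ships with base R.
      table(discretize_equal_freq(iris$Sepal.Length, k = 4), iris$Species)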

Discretization of continuous-valued attributes considering data distribution (데이터 분포를 고려한 연속 값 속성의 이산화)

  • Lee, Sang-Hoon; Park, Jung-Eun; Oh, Kyung-Whan
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 2003.05a, pp.217-220, 2003
  • This paper proposes a new method that converts continuous values into categorical form by considering the distribution of the class values for each attribute, without requiring any input parameters. For each attribute, the distribution of the classes is projected onto a one-dimensional space, and intervals are clustered according to criteria such as the density of each class and its degree of overlap with the other classes. The resulting clusters are based on probabilistic measures of how well each interval predicts the classes, so their boundaries minimize the loss of the information each attribute provides. The improved performance of the proposed discretization method is confirmed using the C4.5 algorithm and data from the UCI Machine Learning Repository.


Discretization Method for Continuous Data using Wasserstein Distance (Wasserstein 거리를 이용한 연속형 변수 이산화 기법)

  • Ha, Sang-won; Kim, Han-joon
    • Database Research, v.34 no.3, pp.159-169, 2018
  • Discretization of continuous variables is intended to improve the performance of algorithms such as data mining algorithms by transforming quantitative variables into qualitative ones. If we use appropriate discretization techniques, we can expect not only better performance from classification algorithms, but also accurate and concise interpretation of results, as well as speed improvements. Although various discretization techniques have been studied to date, there is still demand for further research on discretization. In this paper, we propose a new discretization technique that sets cut-points using the Wasserstein distance, considering the distribution of continuous variable values with respect to the classes of the data. We show the superiority of the proposed method through a performance comparison against existing proven methods.
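  The abstract does not spell out the cut-point criterion, but its key ingredient, the one-dimensional Wasserstein distance between class-conditional distributions, is easy to sketch in base R: for empirical distributions on the real line, the Wasserstein-1 distance equals the area between the two empirical CDFs. The code below shows only that ingredient, not the paper's full algorithm.

      # 1-D Wasserstein-1 distance between two samples, computed as the
      # area between their empirical CDFs (an ingredient only; not the
      # paper's full cut-point selection algorithm).
      wasserstein1 <- function(a, b) {
        grid <- sort(unique(c(a, b)))   # points where either CDF can jump
        Fa <- ecdf(a)(grid)             # empirical CDF of sample a
        Fb <- ecdf(b)(grid)             # empirical CDF of sample b
        # CDFs are constant between consecutive grid points: integrate stepwise.
        sum(abs(Fa - Fb)[-length(grid)] * diff(grid))
      }

      # Example: distance between a feature's values in two iris classes.
      a <- iris$Petal.Length[iris$Species == "setosa"]
      b <- iris$Petal.Length[iris$Species == "versicolor"]
      wasserstein1(a, b)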

Discretizing Spatio-Temporal Data using Data Reduction and Clustering (데이타 축소와 군집화를 사용하는 시공간 데이타의 이산화 기법)

  • Kang, Ju-Young; Yong, Hwan-Seung
    • Journal of KIISE: Computing Practices and Letters, v.15 no.1, pp.57-61, 2009
  • To increase the efficiency of the mining process and derive accurate spatio-temporal patterns, continuous attribute values should be discretized prior to mining. In this paper, we propose a discretization method that improves mining efficiency by reducing the data size without losing the correlations in the data. The proposed method first simplifies original trajectories into approximations using line simplification and then groups them into similar clusters. Our experiments show that the proposed approach improves mining efficiency and extracts more intuitive patterns than existing discretization methods.
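  The abstract names line simplification as the data-reduction step without fixing an algorithm; Douglas-Peucker is a standard choice for that step, so the base-R sketch below uses it as a stand-in. Treat the function and its tolerance parameter eps as assumptions, not as the paper's implementation.

      # Minimal Douglas-Peucker line simplification for a 2-D trajectory
      # (a standard stand-in for the "line simplification" step; not the
      # paper's code). pts: matrix with columns x, y.
      douglas_peucker <- function(pts, eps = 0.5) {
        n <- nrow(pts)
        if (n <= 2) return(pts)
        p1 <- pts[1, ]; p2 <- pts[n, ]
        dx <- p2[1] - p1[1]; dy <- p2[2] - p1[2]
        len <- sqrt(dx^2 + dy^2)
        if (len < .Machine$double.eps) len <- .Machine$double.eps  # coincident ends
        # Perpendicular distance of every point to the chord p1-p2.
        d <- abs(dy * (pts[, 1] - p1[1]) - dx * (pts[, 2] - p1[2])) / len
        i <- which.max(d)
        if (d[i] > eps) {
          # Keep the farthest point and recurse on both halves.
          left  <- douglas_peucker(pts[1:i, , drop = FALSE], eps)
          right <- douglas_peucker(pts[i:n, , drop = FALSE], eps)
          rbind(left[-nrow(left), , drop = FALSE], right)
        } else {
          rbind(p1, p2)
        }
      }

      # Example: simplify a noisy sine-shaped trajectory of 101 points.
      traj <- cbind(x = seq(0, 10, by = 0.1), y = sin(seq(0, 10, by = 0.1)))
      nrow(douglas_peucker(traj, eps = 0.1))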

Spatiotemporal Data Visualization using Gravity Model (중력 모델을 이용한 시공간 데이터의 시각화)

  • Kim, Seokyeon; Yeon, Hanbyul; Jang, Yun
    • Journal of KIISE, v.43 no.2, pp.135-142, 2016
  • Visual analysis of spatiotemporal data has focused on a variety of techniques for analyzing and exploring the data. The goal of these techniques is to explore spatiotemporal data using time information, discover patterns in the data, and analyze it. Overall trend-flow patterns help users analyze geo-referenced temporal events. However, it is difficult to extract and visualize such patterns from data that has no trajectory information for movements. In order to visualize overall trend-flow patterns, in this paper, we estimate continuous distributions of discrete events over time using kernel density estimation (KDE), and we extract vector fields from the continuous distributions using the gravity model. We then apply our technique to Twitter data to validate it.
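  To make the pipeline above concrete, the sketch below estimates a 2-D event density with MASS::kde2d() and overlays a crude finite-difference gradient field on simulated points. The gradient field is only a stand-in for the paper's gravity-model vector field, and the data are synthetic, so everything here is an illustrative assumption.

      # KDE of synthetic event locations plus a finite-difference gradient
      # field; the gradient is a stand-in for the gravity-model field.
      library(MASS)  # kde2d() comes with the recommended MASS package

      set.seed(1)
      x <- c(rnorm(300, -1, 0.4), rnorm(300, 1, 0.4))  # simulated event x-coords
      y <- c(rnorm(300, -1, 0.4), rnorm(300, 1, 0.4))  # simulated event y-coords
      dens <- kde2d(x, y, n = 50)                      # density on a 50x50 grid

      # Finite differences of the density surface (rows index x, columns y).
      gx <- apply(dens$z, 2, function(col) c(diff(col), NA))
      gy <- t(apply(dens$z, 1, function(row) c(diff(row), NA)))

      contour(dens, xlab = "x", ylab = "y")
      sc <- 0.4 / max(abs(gx), na.rm = TRUE)           # arrow length scaling
      for (i in seq(2, 48, by = 6)) for (j in seq(2, 48, by = 6)) {
        arrows(dens$x[i], dens$y[j],
               dens$x[i] + sc * gx[i, j], dens$y[j] + sc * gy[i, j],
               length = 0.04, col = "gray40")
      }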

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.77-97, 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to profit from trading. In many market-timing systems, trading rules have been used as an engine to generate trade signals. Some researchers have proposed rough set analysis as a proper tool for market timing because, by using its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized before rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in numeric data that determine intervals, and all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds categorical values by naïve scaling of the data, then finds the optimal discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock-market-timing models that use rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
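  Of the four discretization schemes listed above, minimum entropy scaling is the most algorithmic. A single step of it, choosing the one cut that minimizes the weighted class entropy of the two resulting intervals, can be sketched in base R as follows; the recursive partitioning and stopping rule used in practice are omitted, so this is a simplified illustration rather than the study's code.

      # One step of minimum-entropy scaling: pick the cut on a numeric
      # variable that minimizes the weighted class entropy of the two
      # resulting intervals (illustrative; recursion and stopping omitted).
      entropy <- function(labels) {
        p <- table(labels) / length(labels)
        -sum(ifelse(p > 0, p * log2(p), 0))
      }

      best_entropy_cut <- function(x, y) {
        v <- sort(unique(x))
        cuts <- (head(v, -1) + tail(v, -1)) / 2   # midpoints between values
        scores <- sapply(cuts, function(cut) {
          left <- y[x <= cut]; right <- y[x > cut]
          (length(left) * entropy(left) +
           length(right) * entropy(right)) / length(y)
        })
        cuts[which.min(scores)]
      }

      # Example: best single cut on petal length against iris species.
      best_entropy_cut(iris$Petal.Length, iris$Species)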

Speed Evaluation of R Commands (R명령어들의 속도 평가)

  • Lee, Jin-A; Heo, Mun-Yeol
    • Proceedings of the Korean Statistical Society Conference, 2003.10a, pp.301-305, 2003
  • R has recently been widely used in many fields, particularly for simulation and statistics-related research. In simulation studies, the execution speed of R programs matters greatly because of the large number of iterations. R is also widely used in data mining. As a preprocessing step for data mining, we implemented the Fayyad & Irani method for discretizing continuous variables in R. The program uses a recursive function and, in the process, calls several helper functions for building frequency tables, computing information measures, splitting frequency tables, and applying the stopping rule. When our R code was used to discretize the Iono data set from the UCI repository (35 attributes, about 1,000 instances), it took a considerable amount of time, more than 7 seconds. In contrast, when the same Fayyad & Irani method was run in Weka, which is written in Java, discretizing the same large data set was so fast that the execution time was practically negligible. This difference motivated us to look for ways to speed up R programs. In this presentation, we select several time-consuming pieces of R code, write more efficient versions, and compare their execution speeds. For some commands, we also compare against SAS.
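  The gap described above, a recursive R implementation taking seconds where Weka's Java version is nearly instant, usually comes down to repeated scans inside R loops. The toy comparison below, our own generic illustration rather than the authors' benchmark, shows the pattern: counting class frequencies below every candidate cut with an explicit loop versus a single vectorized cumsum() pass.

      # Loop versus vectorized counting of positive labels below each cut.
      # A generic illustration of the speed gap, not the authors' benchmark.
      set.seed(42)
      x <- sort(runif(10000))
      y <- as.numeric(rbinom(10000, 1, 0.5))

      loop_counts <- function(x, y) {
        counts <- numeric(length(x))
        for (i in seq_along(x)) counts[i] <- sum(y[x <= x[i]])  # O(n^2) rescans
        counts
      }

      vector_counts <- function(x, y) cumsum(y)  # x is sorted: one O(n) pass

      system.time(a <- loop_counts(x, y))    # seconds in interpreted R
      system.time(b <- vector_counts(x, y))  # effectively instantaneous
      identical(a, b)                        # TRUE: same result either way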


Evolutionary Hypernetwork Model for Higher Order Pattern Recognition on Real-valued Feature Data without Discretization (이산화 과정을 배제한 실수 값 인자 데이터의 고차 패턴 분석을 위한 진화연산 기반 하이퍼네트워크 모델)

  • Ha, Jung-Woo; Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications, v.37 no.2, pp.120-128, 2010
  • A hypernetwork is a generalized hypergraph and a probabilistic graphical model based on evolutionary learning. Hypernetwork models have been applied to various domains including pattern recognition and bioinformatics. Nevertheless, conventional hypernetwork models have the limitation that they can manage only data with categorical or discrete attributes, since the learning method of hypernetworks is based on equality comparison of hyperedges with the learned data. Therefore, real-valued data must be discretized by preprocessing before learning with hypernetworks. However, discretization causes inevitable information loss and a possible decrease in pattern-classification accuracy. To overcome this weakness, we propose a novel feature-wise L1-distance-based method for handling real-valued attributes when learning hypernetwork models. We show that the proposed model improves classification accuracy compared with conventional hypernetworks and performs competitively against other machine learning methods.
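  The core change the abstract describes, replacing the equality test between a hyperedge and an instance with a feature-wise L1 distance, can be sketched as below. The hyperedge representation and the match threshold tau are our illustrative guesses, not the paper's definitions.

      # Matching a hyperedge against a real-valued instance by feature-wise
      # L1 distance instead of strict equality (representation and threshold
      # are illustrative guesses, not the paper's definitions).
      hyperedge_matches <- function(edge_values, instance, features, tau = 0.3) {
        # edge_values: stored real values for the features this hyperedge covers
        # features:    indices of those features in the full instance vector
        dist <- sum(abs(edge_values - instance[features]))  # feature-wise L1
        dist <= tau                                         # match if close enough
      }

      # Example: an order-2 hyperedge over features 1 and 3 of an iris sample.
      edge <- c(5.1, 1.4)
      hyperedge_matches(edge, unlist(iris[1, 1:4]), features = c(1, 3))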

Discretization of Continuous-Valued Attributes considering Data Distribution (데이터 분포를 고려한 연속 값 속성의 이산화)

  • Lee, Sang-Hoon; Park, Jung-Eun; Oh, Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems, v.13 no.4, pp.391-396, 2003
  • This paper proposes a new approach that converts continuous-valued attributes to categorical-valued ones while considering the distribution of the target attributes (classes). In this approach, optimal interval boundaries can be obtained by considering the distribution of the data itself, without requiring any parameters. For each attribute, the distribution of the target attributes is projected onto a one-dimensional space, and this space is clustered according to criteria such as the density of each target attribute and the amount of overlap among the density values of the target attributes. Clusters made in this way are based on the probabilities with which the target attribute of an instance can be predicted, so the interval boundaries minimize the loss of information in the original data. The improved performance of the proposed discretization method is validated using the C4.5 algorithm and data sets from the UCI Machine Learning Repository.
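  One way to read the procedure above, projecting class-conditional densities onto one dimension and cutting the axis where the most likely class changes, is sketched below in base R. The paper's density and overlap criteria are richer than this, so the sketch is an assumption-laden simplification rather than the authors' method.

      # Place cut-points where the most likely class changes along a 1-D
      # feature axis (a simplified reading of the method, not the paper's code).
      class_change_cuts <- function(x, y, n = 512) {
        grid <- seq(min(x), max(x), length.out = n)
        # One kernel density per class on a shared grid, weighted by the class
        # prior so that comparing columns approximates comparing p(class | x).
        dens <- sapply(levels(y), function(cl) {
          density(x[y == cl], from = min(x), to = max(x), n = n)$y * mean(y == cl)
        })
        winner <- levels(y)[max.col(dens)]          # dominant class per grid point
        grid[which(winner[-1] != winner[-n]) + 1]   # cuts where the winner flips
      }

      # Example: cut-points for petal length against the three iris species.
      class_change_cuts(iris$Petal.Length, iris$Species)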