• Title/Summary/Keyword: UCI repository

74 search results

Hybrid Pattern Recognition Using a Combination of Different Features

  • Choi, Sang-Il
    • Journal of the Korea Society of Computer and Information / v.20 no.11 / pp.9-16 / 2015
  • We propose a hybrid pattern recognition method that effectively combines two different types of features to improve data classification. We first extract PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) features, both widely used in pattern recognition, to construct a set of basic features, and then evaluate the separability of each basic feature. Based on this evaluation, we select only the basic features that carry a large amount of discriminative information and use them to construct the combined features. Experimental results on various data sets from the UCI machine learning repository show that the proposed combined features give better recognition rates than the PCA or LDA features alone.
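
The abstract does not spell out the separability measure, so the minimal sketch below uses a Fisher ratio (between-class over within-class variance) as a plausible stand-in, with the wine data set and a k-NN classifier assumed purely for illustration:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)

# Basic feature set: PCA features concatenated with LDA features.
pca_feats = PCA(n_components=10).fit_transform(X)
lda_feats = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
basic = np.hstack([pca_feats, lda_feats])

def fisher_ratio(f, y):
    """Separability of one feature: between-class / within-class variance."""
    classes = np.unique(y)
    between = sum((f[y == c].mean() - f.mean()) ** 2 for c in classes)
    within = sum(f[y == c].var() for c in classes)
    return between / (within + 1e-12)

# Keep only the basic features with the largest separability scores.
scores = np.array([fisher_ratio(basic[:, j], y) for j in range(basic.shape[1])])
combined = basic[:, np.argsort(scores)[::-1][:6]]

print(cross_val_score(KNeighborsClassifier(), combined, y, cv=5).mean())
```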

k-Nearest Neighbor Classifier Using Local Values of k

  • Lee, Sang-Hoon;Oh, Kyung-Whan
    • Proceedings of the Korean Information Science Society Conference / 2003.10a / pp.193-195 / 2003
  • In this paper, we propose a new method that optimizes the k-Nearest Neighbor (k-NN) algorithm by using locally different values of k (the number of neighbors to consider). When the distribution of noise in the instance space varies locally, the optimal number of neighbor instances to consider at each point depends on the local noise distribution around that point. Existing methods, however, use the same k over the entire instance space and therefore cannot take such local characteristics into account. To handle locally varying noise, we propose the Local-k Nearest Neighbor algorithm (LkNN), a new method that divides the instance space into several regions and performs k-NN with a k value optimized for each region. The set of k values produced by LkNN represents the regions of the instance space, and each value determines the number of neighbors that instances in its region should consider. The data domains suited to the proposed algorithm, and its improved performance, were verified through experiments using data from the UCI ML Data Repository.
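
A minimal sketch of the local-k idea, not the authors' exact LkNN algorithm: the k-means partitioning, the per-region cross-validation of k, and the breast-cancer data set are all assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, KFold, train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_regions = 3
regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X_tr)

# For each region, pick the k that cross-validates best on that region's data.
local_k = {}
for r in range(n_regions):
    mask = regions.labels_ == r
    search = GridSearchCV(KNeighborsClassifier(),
                          {"n_neighbors": [1, 3, 5, 7, 9]},
                          cv=KFold(3, shuffle=True, random_state=0))
    search.fit(X_tr[mask], y_tr[mask])
    local_k[r] = search.best_params_["n_neighbors"]

# One pre-fitted k-NN per distinct k; each query uses its region's k.
knn = {k: KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
       for k in set(local_k.values())}
pred = [knn[local_k[r]].predict(x.reshape(1, -1))[0]
        for x, r in zip(X_te, regions.predict(X_te))]
print((np.array(pred) == y_te).mean())
```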


Design of a One-Class Classifier Using Hyper-Rectangles

  • Jeong, In Kyo;Choi, Jin Young
    • Journal of Korean Institute of Industrial Engineers / v.41 no.5 / pp.439-446 / 2015
  • Recently, the one-class classification problem has been receiving increasing attention. However, most existing algorithms are limited in that they provide little information about the factors affecting the prediction of the target value. Motivated by this observation, in this paper we suggest an efficient one-class classifier using hyper-rectangles (H-RTGLs), which can be produced from intervals that include the observations. Specifically, we generate intervals for each feature and then integrate them. For generating the intervals, we consider two approaches: (i) interval merging and (ii) clustering. We evaluate the performance of the suggested methods by computing classification accuracy as the area under the ROC curve, and compare them with other one-class classification algorithms on four datasets from the UCI repository. Since the H-RTGLs constructed for a given data set make the classification factors visible, we can discern which features affect the classification result and extract the patterns that a data set originally has.
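
The clustering variant might look roughly like the sketch below; the 1-D k-means interval construction, the fixed number of intervals, and the synthetic data are illustrative assumptions, and the paper's interval-merging variant and exact integration step are not reproduced:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_hyper_rectangles(X_target, n_intervals=3):
    """Cluster each feature's values and keep [min, max] of every cluster."""
    intervals = []
    for j in range(X_target.shape[1]):
        col = X_target[:, j].reshape(-1, 1)
        labels = KMeans(n_clusters=n_intervals, n_init=10,
                        random_state=0).fit_predict(col)
        intervals.append([(col[labels == c].min(), col[labels == c].max())
                          for c in range(n_intervals)])
    return intervals

def predict(intervals, X):
    """A point is 'target' (1) iff every feature falls inside some interval."""
    out = []
    for x in X:
        inside = all(any(lo <= v <= hi for lo, hi in feat)
                     for v, feat in zip(x, intervals))
        out.append(1 if inside else -1)
    return np.array(out)

rng = np.random.default_rng(0)
target = rng.normal(0, 1, size=(200, 2))      # only the target class is seen at training time
outliers = rng.uniform(-6, 6, size=(50, 2))
rects = fit_hyper_rectangles(target)
print((predict(rects, target) == 1).mean(), (predict(rects, outliers) == -1).mean())
```

Because the learned intervals are plain [min, max] pairs per feature, they can be read off directly, which mirrors the paper's point about classification factors being visible.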

Use of a Similarity Threshold for Determining the Nearest Neighbors in Case-Based Reasoning Models

  • Lee, Jae-Sik;Lee, Jin-Cheon
    • Proceedings of the Korea Intelligent Information System Society Conference / 2005.11a / pp.588-594 / 2005
  • Case-Based Reasoning (CBR) is one of the data mining techniques that has been successfully applied to a variety of prediction problems. The predictive performance of a CBR system depends on how the nearest neighbors used for prediction are determined, so setting the value of k, which determines the nearest neighbors, is one of the key factors in building a successful CBR system. Most previous studies use a fixed k value; however, a CBR system with a fixed k may include dissimilar or erroneous cases among the nearest neighbors when k is set too large, and may use only part of the similar cases for prediction when k is set too small, distorting the prediction results. In this study, we propose the s-NN method, which uses a similarity threshold to determine the neighbors. For our experiments we used two credit data sets from the UCI (University of California, Irvine) Machine Learning Repository, and the results show that a CBR model with s-NN outperforms a conventional CBR model with a fixed k.
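
The similarity measure and the threshold value below are assumptions (the abstract does not fix them); the sketch only illustrates retrieving every case above a similarity threshold instead of a fixed number k:

```python
import numpy as np
from collections import Counter

def s_nn_predict(X_train, y_train, query, threshold=0.6):
    """Vote among all stored cases whose similarity to the query >= threshold."""
    dists = np.linalg.norm(X_train - query, axis=1)
    sims = 1.0 / (1.0 + dists)          # assumed similarity: 1/(1 + distance)
    idx = np.where(sims >= threshold)[0]
    if len(idx) == 0:                   # fall back to the single nearest case
        idx = [np.argmax(sims)]
    return Counter(y_train[i] for i in idx).most_common(1)[0][0]

X = np.array([[0.1, 0.2], [0.15, 0.22], [0.9, 0.8], [0.85, 0.9]])
y = np.array(["good", "good", "bad", "bad"])
print(s_nn_predict(X, y, np.array([0.12, 0.21])))   # -> "good"
```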


Creation Methods of Fuzzy Membership Functions Based on Statistical Information for a Fuzzy Classifier

  • Shin, Sang-Ho;Han, Soowhan;Woo, Young Woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.379-382 / 2009
  • The fuzzy classifier, widely used as a classifier model in pattern recognition, has the advantage that classification performance can be improved by setting appropriate fuzzy membership functions. In general, however, membership functions are set based on the characteristics of the recognition problem domain or on experts' knowledge and subjective experience, so it is difficult to guarantee the consistency and objectivity of the resulting functions. This paper therefore proposes techniques for creating membership functions based on statistical information among feature values, providing an objective criterion for setting the membership functions of a fuzzy classifier. The proposed techniques were tested on the Iris data set, one of the standard data sets provided by the UCI machine learning repository site, and the results were compared and analyzed.
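
One plausible instance of a statistically derived membership function is a Gaussian built from each class's mean and standard deviation; the sketch below assumes that form (the paper proposes several such techniques, not necessarily this one):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

# Membership parameters come straight from per-class statistics.
mu = np.array([X[y == c].mean(axis=0) for c in classes])
sigma = np.array([X[y == c].std(axis=0) for c in classes])

def classify(x):
    # Membership of x in each class = product of Gaussian memberships per feature.
    member = np.exp(-0.5 * ((x - mu) / sigma) ** 2).prod(axis=1)
    return classes[np.argmax(member)]

preds = np.array([classify(x) for x in X])
print((preds == y).mean())   # resubstitution accuracy on Iris
```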


Construction of Multiple Classifier Systems Based on a Classifier Pool

  • Kang, Hee-Joong
    • Journal of KIISE: Software and Applications / v.29 no.8 / pp.595-603 / 2002
  • Only a few studies have been conducted on how to select multiple classifiers from a pool of available classifiers so as to achieve good classification performance. Thus, the selection problem of classifiers, i.e., which and how many to select, remains an important research issue. In this paper, given that the number of selected classifiers is constrained in advance, a variety of selection criteria are proposed and applied to the construction of multiple classifier systems, and these selection criteria are then evaluated by the performance of the constructed multiple classifier systems. All possible sets of classifiers are ranked by the selection criteria, and some of these sets are selected as candidate multiple classifier systems. The candidate systems were evaluated in experiments recognizing unconstrained handwritten numerals obtained from Concordia University and the UCI machine learning repository. Among the selection criteria, the candidates chosen by the information-theoretic criteria based on conditional entropy showed more promising results than those chosen by the other criteria.
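
A rough sketch of conditional-entropy-based selection; the particular pool of scikit-learn classifiers, the fixed subset size of two, and the digits data set are illustrative assumptions rather than the paper's setup:

```python
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

pool = [GaussianNB(), KNeighborsClassifier(),
        DecisionTreeClassifier(random_state=0),
        LogisticRegression(max_iter=5000)]
outputs = [clf.fit(X_tr, y_tr).predict(X_va) for clf in pool]

def cond_entropy(cols, y):
    """Empirical H(y | joint outputs of the chosen classifiers)."""
    keys = list(zip(*cols))               # joint output tuple per sample
    h, n = 0.0, len(y)
    for key in set(keys):
        labels = [y[i] for i, k in enumerate(keys) if k == key]
        probs = [c / len(labels) for c in Counter(labels).values()]
        h += (len(labels) / n) * -sum(p * np.log2(p) for p in probs)
    return h

# Select the pair of classifiers that minimizes H(class | outputs).
best = min(combinations(range(len(pool)), 2),
           key=lambda s: cond_entropy([outputs[i] for i in s], y_va))
print("selected classifier indices:", best)
```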

Sparse Data Preprocessing Using Support Vector Regression

  • Jun, Sung-Hae;Park, Jung-Eun;Oh, Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.6 / pp.789-792 / 2004
  • In various fields such as web mining, bioinformatics, and statistical data analysis, missing values of many different kinds are found, and they make the training data sparse. Most commonly, missing values are replaced by predictions based on the mean or mode. More advanced imputation methods can also be used, such as the conditional mean, tree-based methods, and the Markov chain Monte Carlo algorithm. However, general imputation models share the property that their predictive accuracy decreases as the ratio of missing values in the training data increases, and the number of applicable imputation methods shrinks as the missing ratio grows. To address this problem, we propose a preprocessing method for missing values based on statistical learning theory, namely Vapnik's support vector regression. The proposed method can be applied to sparse training data. We verified the performance of our model using data sets from the UCI machine learning repository.
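
A minimal sketch of SVR-based imputation on synthetic data; the column-wise setup (train SVR on the complete rows, using the other columns as inputs, then predict the missing entries) is an assumed reading of the method:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 200)  # learnable column
missing = rng.choice(200, size=40, replace=False)
X_miss = X.copy()
X_miss[missing, 3] = np.nan

# Fit SVR on rows where column 3 is observed, then fill in the holes.
obs = ~np.isnan(X_miss[:, 3])
svr = SVR().fit(X_miss[obs, :3], X_miss[obs, 3])
X_miss[~obs, 3] = svr.predict(X_miss[~obs, :3])

print(np.abs(X_miss[missing, 3] - X[missing, 3]).mean())  # mean imputation error
```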

A Hybrid Feature Selection Method Using Univariate Analysis and the LVF Algorithm

  • Lee, Jae-Sik;Jeong, Mi-Kyoung
    • Journal of Intelligence and Information Systems / v.14 no.4 / pp.179-200 / 2008
  • We develop a feature selection method that can improve both the efficiency and the effectiveness of a classification technique; in this research, we employ case-based reasoning as the classification technique. Basically, this research integrates two existing feature selection methods, univariate analysis and the LVF algorithm. First, we sift out predictive features from the whole set of features using univariate analysis. Then, we generate all possible subsets of these predictive features and measure the inconsistency rate of each subset using the LVF algorithm. Finally, the subset with the lowest inconsistency rate is selected as the best subset of features. We measured the performance of our feature selection method on data obtained from the UCI Machine Learning Repository and compared it with existing methods. Both the number of selected features and the resulting accuracy are satisfactory, so improvements in both efficiency and effectiveness are achieved.
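
A sketch of the two-stage idea; the F-test filter, the number of kept features, and the uniform discretization (the LVF inconsistency criterion needs discrete values) are assumptions made for the illustration:

```python
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_wine(return_X_y=True)

# Stage 1: univariate analysis keeps the 6 most predictive features.
keep = SelectKBest(f_classif, k=6).fit(X, y).get_support(indices=True)
Xd = KBinsDiscretizer(n_bins=4, encode="ordinal",
                      strategy="uniform").fit_transform(X[:, keep])

def inconsistency_rate(cols, y):
    """LVF criterion: matching patterns minus their majority-class counts."""
    groups = {}
    for pattern, label in zip(map(tuple, cols), y):
        groups.setdefault(pattern, []).append(label)
    bad = sum(len(g) - max(Counter(g).values()) for g in groups.values())
    return bad / len(y)

# Stage 2: pick the subset with the lowest inconsistency rate (ties -> smaller).
subsets = [s for r in range(1, 7) for s in combinations(range(6), r)]
best = min(subsets,
           key=lambda s: (inconsistency_rate(Xd[:, list(s)], y), len(s)))
print("best subset (indices into kept features):", best)
```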


Convergence Characteristics of Ant Colony Optimization with Selective Evaluation in Feature Selection (특징 선택에서 선택적 평가를 사용하는 개미 군집 최적화의 수렴 특성)

  • Lee, Jin-Seon;Oh, Il-Seok
    • The Journal of the Korea Contents Association / v.11 no.10 / pp.41-48 / 2011
  • In feature selection, a selective evaluation scheme for Ant Colony Optimization (ACO) has recently been proposed, which reduces the computational load by excluding unnecessary or less promising candidate solutions from actual evaluation. Its superiority was supported by experimental results, but that experiment was not statistically sufficient, since it used only one dataset. The aim of this paper is to analyze the convergence characteristics of the selective evaluation scheme and make the conclusion more convincing. We chose three datasets from the UCI repository, related to the handwriting, medical, and speech domains, whose feature set sizes range from 256 to 617. For each of them, we executed 12 independent runs in order to obtain statistically stable data, and each run was given 72 hours so that long-term convergence could be observed. Based on an analysis of the experimental data, we explain the reason for the scheme's superiority and identify where it can be applied.
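
A toy sketch of selective evaluation inside an ACO feature-selection loop; the pheromone-sum "promise" score, the median cutoff, and the fixed subset size are invented stand-ins for the paper's actual exclusion rule:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
n_feat, n_ants, rng = X.shape[1], 10, np.random.default_rng(0)
pheromone = np.ones(n_feat)

def evaluate(subset):
    return cross_val_score(KNeighborsClassifier(), X[:, subset], y, cv=3).mean()

best_subset, best_score = None, -1.0
for it in range(5):
    prob = pheromone / pheromone.sum()
    ants = [np.sort(rng.choice(n_feat, size=8, replace=False, p=prob))
            for _ in range(n_ants)]
    promise = [pheromone[a].sum() for a in ants]
    cutoff = np.median(promise)
    for ant, p in zip(ants, promise):
        if p < cutoff:               # selective evaluation: skip weak candidates
            continue
        score = evaluate(ant)
        if score > best_score:
            best_subset, best_score = ant, score
        pheromone[ant] += score      # reinforce features of evaluated subsets
    pheromone *= 0.9                 # evaporation
print(best_score, best_subset)
```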

Naive Bayes Learner for Propositionalized Attribute Taxonomy

  • Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.406-409 / 2008
  • We consider the problem of exploiting a taxonomy of propositionalized attributes in order to learn compact and robust classifiers. We introduce the Propositionalized Attribute Taxonomy guided Naive Bayes Learner (PAT-NBL), an inductive learning algorithm that exploits a taxonomy of propositionalized attributes as prior knowledge to generate compact and accurate classifiers. PAT-NBL uses top-down and bottom-up search to find a locally optimal cut of the propositionalized attribute taxonomy that fits the instance space implied by the taxonomy and the data. Our experimental results on University of California-Irvine (UCI) repository data sets show that the proposed algorithm can generate classifiers that are sometimes comparable in compactness and accuracy to those produced by standard Naive Bayes learners.
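
A toy sketch of the taxonomy-cut idea; the two-level taxonomy over consecutive attribute groups, the OR-abstraction of a group, and the greedy bottom-up search are all hypothetical simplifications of PAT-NBL:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB

X, y = load_digits(return_X_y=True)
Xb = (X > 8).astype(int)              # propositionalized (binary) attributes

# Hypothetical taxonomy: every 8 consecutive attributes share one parent node.
groups = [list(range(i, i + 8)) for i in range(0, Xb.shape[1], 8)]

def apply_cut(cut):
    """cut[g] == 0 keeps group g's attributes; 1 abstracts them to their OR."""
    cols = [Xb[:, g].max(axis=1, keepdims=True) if choice else Xb[:, g]
            for g, choice in zip(groups, cut)]
    return np.hstack(cols)

def score(cut):
    return cross_val_score(BernoulliNB(), apply_cut(cut), y, cv=3).mean()

# Greedy bottom-up search: abstract a group whenever accuracy does not drop.
cut = [0] * len(groups)
current = score(cut)
for g in range(len(groups)):
    trial = cut.copy()
    trial[g] = 1
    s = score(trial)
    if s >= current:
        cut, current = trial, s
print("groups abstracted:", sum(cut), "cv accuracy:", round(current, 3))
```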
