• Title/Abstract/Keywords: Class Imbalanced Data

Search results: 60 (processing time: 0.023 s)

A Statistical Perspective of Neural Networks for Imbalanced Data Problems

  • Oh, Sang-Hoon
    • International Journal of Contents / Vol. 7, No. 3 / pp.1-5 / 2011
  • Finding a good classifier for imbalanced data has been an interesting challenge, since the problem is pervasive yet difficult to solve. Classifiers developed under the assumption of well-balanced class distributions show poor classification performance on imbalanced data. Among the many approaches to imbalanced data problems, the algorithmic-level approach is attractive because it can be combined with other approaches such as data-level or ensemble approaches. In particular, the error back-propagation algorithm using the target node method, which adjusts the amount of weight updating with respect to the target node of each class, attains good performance on imbalanced data problems. In this paper, we analyze the relationship between the two optimal outputs of a neural network classifier trained with the target node method. The optimal relationship is also compared with those of other error-function methods, such as the mean-squared error and the n-th order extension of the cross-entropy error. The analyses are verified through simulations on a thyroid data set.
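
The abstract does not spell out the target node method in detail; the sketch below only illustrates the general idea of class-dependent weight updating in error back-propagation, using a single sigmoid unit trained with a cross-entropy gradient in which minority-class terms are scaled up by an assumed factor `beta`. The toy data and the scaling rule are illustrative assumptions, not the paper's formulation.

```python
# Minimal numpy sketch of class-dependent weight updating (not the papers'
# exact target node method): minority-class gradient terms are scaled by beta.
import numpy as np

rng = np.random.default_rng(0)
n_maj, n_min, d = 950, 50, 5
X = np.vstack([rng.normal(0.0, 1.0, (n_maj, d)),
               rng.normal(1.5, 1.0, (n_min, d))])
t = np.concatenate([np.zeros(n_maj), np.ones(n_min)])   # class targets

w, b = np.zeros(d), 0.0
beta = n_maj / n_min          # assumed: larger updates for the minority class
lr = 0.1

for epoch in range(200):
    y = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # sigmoid output
    scale = np.where(t == 1, beta, 1.0)          # per-sample update scale
    grad = scale * (y - t)                       # scaled cross-entropy gradient
    w -= lr * X.T @ grad / len(X)
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("minority recall:", (pred[t == 1] == 1).mean())
```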

MCSVM을 이용한 반도체 공정데이터의 과소 추출 기법 (Under Sampling for Imbalanced Data using Minor Class based SVM (MCSVM) in Semiconductor Process)

  • 박새롬;김준석;박정술;박승환;백준걸
    • 대한산업공학회지 / Vol. 40, No. 4 / pp.404-414 / 2014
  • Yield prediction is important for managing semiconductor quality. Many studies have applied machine learning algorithms such as the support vector machine (SVM) to predict yield precisely. However, yield prediction with SVM is difficult because the final test procedure in the semiconductor manufacturing process generates extremely imbalanced, large-scale data. Applying SVM to imbalanced data can produce unnecessary support vectors from the majority class because too few support vectors are selected from the minority class, so the decision boundary of the target class can be overwhelmed by observations from the majority class. For this reason, we propose an under-sampling method based on a minor-class-based SVM (MCSVM), which overcomes these limitations of the ordinary SVM algorithm. MCSVM builds a model that fixes some of the minority-class data as support vectors, and these can serve as good samples representing the nature of the target class. Experiments on data sets from the UCI repository and a real manufacturing process show that the proposed method performs better than existing sampling methods.
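
For orientation only, here is a minimal sketch of ordinary majority-class under-sampling followed by an RBF SVM on synthetic data, using imbalanced-learn and scikit-learn. The paper's MCSVM, which fixes selected minority-class samples as support vectors, is not reproduced.

```python
# Plain under-sampling + SVM baseline on a synthetic imbalanced data set.
from imblearn.under_sampling import RandomUnderSampler
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=3)

# shrink the majority class so the decision boundary is not dominated by it
X_res, y_res = RandomUnderSampler(random_state=3).fit_resample(X_tr, y_tr)
svm = SVC(kernel="rbf", gamma="scale").fit(X_res, y_res)

print("minority recall:", recall_score(y_te, svm.predict(X_te)))
```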

가우시안 기반 Hyper-Rectangle 생성을 이용한 효율적 단일 분류기 (An Efficient One Class Classifier Using Gaussian-based Hyper-Rectangle Generation)

  • 김도균;최진영;고정한
    • 산업경영시스템학회지 / Vol. 41, No. 2 / pp.56-64 / 2018
  • In recent years, imbalanced data has become one of the most important and frequent issues for quality control in industry. For example, defect rates have been drastically reduced thanks to highly developed technology and quality management, so only a few defective samples can be obtained from a production process. Quality classification must therefore be performed under the condition that one class (the defective data set) is far smaller than the other (the good data set). Traditional multi-class classification methods are not appropriate for such an imbalanced data set, since they classify data based on differences between classes that can hardly be found in imbalanced data sets. One-class classification, which thoroughly learns the patterns of the target class, is more suitable for imbalanced data sets because it focuses only on data in the target class. Several one-class classification methods have been suggested, such as the one-class support vector machine, neural networks, and decision trees. The one-class support vector machine and neural networks can guarantee good classification rates, and decision trees can provide a set of rules that can be clearly interpreted. However, the classifiers obtained from the former two methods consist of complex mathematical functions and cannot be easily understood by users, and in the case of decision trees the criterion for rule generation is ambiguous. As an alternative, a one-class classifier using hyper-rectangles was proposed, which performs precise classification compared with other methods and also generates rules that users can clearly understand. In this paper, we suggest an approach for overcoming the limitations of those previous one-class classification algorithms. Specifically, the suggested approach produces an improved one-class classifier using hyper-rectangles generated with a Gaussian function. The performance of the suggested algorithm is verified by numerical experiments on several data sets from the UCI machine learning repository.
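
A minimal numpy sketch of the basic hyper-rectangle idea: learn per-feature bounds from target-class data only and flag anything outside the box. The Gaussian-based rectangle generation proposed in the paper is not reproduced; the fixed margin used here is an assumed simplification.

```python
# Axis-aligned hyper-rectangle one-class classifier: per-feature bounds are
# learned from the target class only, giving directly interpretable rules.
import numpy as np

class HyperRectangleClassifier:
    def __init__(self, margin=0.05):
        self.margin = margin        # widen the box by a fraction of each range

    def fit(self, X_target):
        lo, hi = X_target.min(axis=0), X_target.max(axis=0)
        pad = self.margin * (hi - lo)
        self.lo_, self.hi_ = lo - pad, hi + pad
        return self

    def predict(self, X):
        inside = np.all((X >= self.lo_) & (X <= self.hi_), axis=1)
        return np.where(inside, 1, -1)   # 1 = target class, -1 = outlier

rng = np.random.default_rng(0)
X_good = rng.normal(0, 1, (500, 3))      # plentiful "good" products
X_defect = rng.normal(4, 1, (10, 3))     # scarce defectives, unseen in training

clf = HyperRectangleClassifier().fit(X_good)
print("defects flagged:", (clf.predict(X_defect) == -1).mean())
```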

Experimental Analysis of Equilibrization in Binary Classification for Non-Image Imbalanced Data Using Wasserstein GAN

  • Wang, Zhi-Yong;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication / Vol. 11, No. 4 / pp.37-42 / 2019
  • In this paper, we explore three classic data augmentation methods and two oversampling methods based on generative models. The three classic data augmentation methods are random sampling (RANDOM), the Synthetic Minority Over-sampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN). The two generative-model-based oversampling methods are the Conditional Generative Adversarial Network (CGAN) and the Wasserstein Generative Adversarial Network (WGAN). In imbalanced data, the instances are divided into a majority class, which occupies most of the training set, and a minority class, which contains only a few instances. Generative models have an advantage when generating plausible samples that follow the distribution of the minority class. We also adopt CGAN to compare its data augmentation performance with the other methods. The experimental results show that the WGAN-based oversampling technique is more stable than the other approaches (RANDOM, SMOTE, ADASYN, and CGAN), even with very limited training data. However, when the imbalance ratio is too small, the generative-model-based approaches cannot achieve more satisfactory performance than the conventional data augmentation techniques. These results suggest a direction for future research.
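
The three classic over-sampling baselines named in the abstract can be reproduced directly with imbalanced-learn; the sketch below compares them on a synthetic imbalanced set. The CGAN/WGAN models are omitted because they require training loops of their own, and the data set and classifier here are assumptions.

```python
# Compare RANDOM, SMOTE and ADASYN over-sampling on a synthetic imbalanced set.
from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, sampler in [("RANDOM", RandomOverSampler(random_state=0)),
                      ("SMOTE", SMOTE(random_state=0)),
                      ("ADASYN", ADASYN(random_state=0))]:
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)   # balance the training set
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print(f"{name}: minority F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```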

Improving the Error Back-Propagation Algorithm for Imbalanced Data Sets

  • Oh, Sang-Hoon
    • International Journal of Contents / Vol. 8, No. 2 / pp.7-12 / 2012
  • Imbalanced data sets are difficult to classify because most classifiers are developed under the assumption that class distributions are well balanced. To improve the error back-propagation algorithm for the classification of imbalanced data sets, a new error function is proposed. The error function controls weight updating according to the class of each training sample, with the effect that samples in the minority class have a greater chance of being classified correctly while samples in the majority class have a smaller chance. The proposed method is compared with the two-phase, threshold-moving, and target node methods through simulations on a mammography data set, and it attains the best results.
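
The proposed error function itself is not reproduced here; as a hedged illustration of two standard alternatives in the same spirit as the comparison above, the sketch below contrasts class-weighted training (which scales the loss contribution per class) with threshold-moving on a toy imbalanced set.

```python
# Toy comparison of class-weighted training and threshold-moving; evaluated
# on the training data purely for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, weights=[0.97, 0.03], random_state=0)

# (a) class-dependent weighting of the loss: minority-class errors contribute
#     more to the gradient, analogous in spirit to scaling weight updates
clf_w = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

# (b) threshold-moving: train as usual, then lower the decision threshold
#     from 0.5 to the minority-class prior
clf = LogisticRegression(max_iter=1000).fit(X, y)
pred_thresh = (clf.predict_proba(X)[:, 1] > y.mean()).astype(int)

print("class-weighted minority recall  :", recall_score(y, clf_w.predict(X)))
print("threshold-moving minority recall:", recall_score(y, pred_thresh))
```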

데이터 전처리와 앙상블 기법을 통한 불균형 데이터의 분류모형 비교 연구 (A Comparison of Ensemble Methods Combining Resampling Techniques for Class Imbalanced Data)

  • 이희재;이성임
    • 응용통계연구 / Vol. 27, No. 3 / pp.357-371 / 2014
  • Recently, the class imbalance problem in data mining classification tasks has received much attention. To address it, previous studies applied data preprocessing to the original data. Preprocessing methods include under-sampling, which reduces the majority class to match the proportion of the minority class; over-sampling, which resamples the minority class with replacement to match the proportion of the majority class; and hybrid techniques, which first over-sample the minority class using methods such as the K-nearest-neighbor approach and then under-sample the majority class. Ensemble techniques are also known to improve classification performance on imbalanced data. In this paper, we compare and evaluate the classification performance of several models that combine data preprocessing with ensemble techniques on imbalanced data.
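
A minimal sketch of the kind of combination evaluated in the paper, using imbalanced-learn: a hybrid over-/under-sampling pipeline in front of a random forest, and a bagging ensemble that resamples inside each bootstrap. The specific samplers, ratios, and data set are assumptions, not the paper's exact experimental setup.

```python
# Resampling + ensemble combinations on a synthetic imbalanced data set.
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)

models = {
    # hybrid preprocessing: over-sample the minority, then trim the majority
    "smote+under+rf": Pipeline([
        ("over", SMOTE(sampling_strategy=0.5, random_state=0)),
        ("under", RandomUnderSampler(sampling_strategy=0.8, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]),
    # ensemble that balances the classes inside each bootstrap sample
    "balanced-bagging": BalancedBaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```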

계급불균형자료의 분류: 훈련표본 구성방법에 따른 효과 (Classification of Class-Imbalanced Data: Effect of Over-sampling and Under-sampling of Training Data)

  • 김지현;정종빈
    • 응용통계연구 / Vol. 17, No. 3 / pp.445-457 / 2004
  • When analyzing data in which the numbers of observations in the two classes of a binary classification problem are severely imbalanced, the two classes are often artificially adjusted to similar sizes before analysis. In this study we examine the validity of this way of constructing the training sample, and also examine its effect on boosting. Experiments on 12 real data sets lead to the conclusion that, when boosting is applied with tree models, it is better to leave the training sample unchanged.
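
As a rough illustration of the question studied here, the sketch below trains boosted trees once on the original imbalanced training sample and once on an artificially balanced (under-sampled) version, then compares test AUC. The data set, boosting implementation, and settings are assumptions, not the paper's experiment.

```python
# Boosted trees trained on the original vs. an artificially balanced sample.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# boosting on the unmodified (imbalanced) training sample
raw = AdaBoostClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# boosting after forcing the two classes to equal size
X_bal, y_bal = RandomUnderSampler(random_state=1).fit_resample(X_tr, y_tr)
bal = AdaBoostClassifier(n_estimators=200, random_state=1).fit(X_bal, y_bal)

for name, m in [("original", raw), ("balanced", bal)]:
    print(name, "AUC:", roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]))
```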

불균형 데이터 분류를 위한 딥러닝 기반 오버샘플링 기법 (A Deep Learning Based Over-Sampling Scheme for Imbalanced Data Classification)

  • 손민재;정승원;황인준
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 8, No. 7 / pp.311-316 / 2019
  • Classification is the problem of predicting the class of given input data, and one common approach is to train a machine learning algorithm on a given data set. In this setting, a data set with a uniform distribution across the classes to be predicted is ideal; when the distribution is imbalanced, the classes may not be classified properly. To address this problem, this paper proposes an over-sampling technique that balances the number of data points by using Conditional Generative Adversarial Networks (CGAN). CGAN is a generative model derived from Generative Adversarial Networks (GAN); it learns the characteristics of the data and can generate data similar to real data. By learning from and generating data for classes with few samples, CGAN can balance the class ratio and thereby improve classification performance. Experiments on real collected data show that the CGAN-based over-sampling technique is effective and outperforms existing over-sampling techniques.
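
A compact PyTorch sketch of CGAN-style over-sampling for tabular data: a conditional generator and discriminator are trained on labelled rows, and the fitted generator is then asked for additional minority-class samples. Network sizes, the training loop, and the toy data are assumptions; this is not the paper's implementation.

```python
# Minimal CGAN-style minority over-sampling for tabular data (illustrative only).
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, LATENT_DIM = 10, 2, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, N_FEATURES))
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES + N_CLASSES, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# toy imbalanced data: 900 majority (class 0), 100 minority (class 1)
x = torch.randn(1000, N_FEATURES)
y = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])
y_oh = torch.nn.functional.one_hot(y, N_CLASSES).float()

for epoch in range(200):
    # discriminator step: real (label 1) vs. generated (label 0) samples
    z = torch.randn(len(x), LATENT_DIM)
    fake = G(z, y_oh).detach()
    d_loss = bce(D(x, y_oh), torch.ones(len(x), 1)) + \
             bce(D(fake, y_oh), torch.zeros(len(x), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: try to fool the discriminator
    z = torch.randn(len(x), LATENT_DIM)
    g_loss = bce(D(G(z, y_oh), y_oh), torch.ones(len(x), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# over-sample: generate synthetic minority-class rows and append them
n_needed = 800
z = torch.randn(n_needed, LATENT_DIM)
y_min = torch.nn.functional.one_hot(
    torch.ones(n_needed, dtype=torch.long), N_CLASSES).float()
x_aug = torch.cat([x, G(z, y_min).detach()])
y_aug = torch.cat([y, torch.ones(n_needed, dtype=torch.long)])
```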

불균형의 대용량 범주형 자료에 대한 분할-과대추출 정복 서포트 벡터 머신 (A divide-oversampling and conquer algorithm based support vector machine for massive and highly imbalanced data)

  • 방성완;김재오
    • 응용통계연구 / Vol. 35, No. 2 / pp.177-188 / 2022
  • The support vector machine (SVM) is widely used for classification in various fields because it provides a high level of classification accuracy. However, because its optimization problem is formulated as a quadratic program and therefore requires substantial computation, its use for classifying massive data is limited. In addition, in the classification of imbalanced data the estimated classification function is biased toward the majority class, so most observations are classified into the majority class and the classification accuracy for the minority class is severely reduced. To address these problems, this paper proposes DOC-SVM, a method that divides the majority class, over-samples the minority class, estimates multiple classification functions, and then combines (conquers) them. The proposed DOC-SVM improves the computational efficiency of the SVM by applying a divide-and-conquer algorithm to the majority class, and reduces the bias of the SVM classification function by applying an over-sampling algorithm to the minority class. Simulations and real data analyses confirm the efficient performance and applicability of the proposed DOC-SVM.
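
A rough sketch of the divide-oversample-conquer idea (not the authors' DOC-SVM implementation): split the majority class into chunks, over-sample the minority class within each sub-problem, train one SVM per chunk, and average the sub-models' decision values. SMOTE is used here as a stand-in over-sampler, and the chunk count and data set are assumptions.

```python
# Divide the majority class, over-sample the minority, train per-chunk SVMs,
# and combine them by averaging decision values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
X_maj, X_min = X[y == 0], X[y == 1]

n_chunks = 4
models = []
for X_chunk in np.array_split(X_maj, n_chunks):
    # one sub-problem: this majority chunk plus the (over-sampled) minority class
    X_sub = np.vstack([X_chunk, X_min])
    y_sub = np.concatenate([np.zeros(len(X_chunk)), np.ones(len(X_min))])
    X_res, y_res = SMOTE(random_state=0).fit_resample(X_sub, y_sub)
    models.append(SVC(kernel="rbf", gamma="scale").fit(X_res, y_res))

# conquer: average the decision values of the per-chunk SVMs
scores = np.mean([m.decision_function(X) for m in models], axis=0)
pred = (scores > 0).astype(int)
print("minority recall:", (pred[y == 1] == 1).mean())
```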

Severity-based Software Quality Prediction using Class Imbalanced Data

  • Hong, Euy-Seok;Park, Mi-Kyeong
    • 한국컴퓨터정보학회논문지 / Vol. 21, No. 4 / pp.73-80 / 2016
  • Most fault prediction models suffer from class imbalance because training data usually contain many more non-fault modules than fault modules. This imbalanced distribution makes it difficult for the models to learn the minority-class module data. The imbalance is much more severe when severity-based fault prediction is used, because high-severity fault modules form a small subset of the fault modules. In this paper, we propose severity-based models to solve these problems using three sampling methods: Resample, SpreadSubSample, and SMOTE. Empirical results show that the Resample method has typical over-fitting problems and that the SpreadSubSample method cannot enhance the prediction performance of the models. Unlike these two methods, SMOTE shows good performance in terms of AUC and FNR values; in particular, a J48 decision tree model using SMOTE outperforms the other prediction models.
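
A hedged sketch of the SMOTE-plus-decision-tree combination the abstract reports as the best performer: scikit-learn's CART tree stands in for Weka's J48, and the synthetic data set replaces the software-fault data.

```python
# SMOTE over-sampling followed by a decision tree, reporting AUC and FNR.
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, weights=[0.93, 0.07], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

X_res, y_res = SMOTE(random_state=7).fit_resample(X_tr, y_tr)   # over-sample the fault class
tree = DecisionTreeClassifier(random_state=7).fit(X_res, y_res)

auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])
fnr = 1 - (tree.predict(X_te)[y_te == 1] == 1).mean()           # false negative rate
print(f"AUC = {auc:.3f}, FNR = {fnr:.3f}")
```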