• Title/Summary/Keyword: 오분류 비용 (misclassification cost)

Alternative Optimal Threshold Criteria: MFR (대안적인 분류기준: 오분류율곱)

  • Hong, Chong Sun; Kim, Hyomin Alex; Kim, Dong Kyu
    • The Korean Journal of Applied Statistics / v.27 no.5 / pp.773-786 / 2014
  • We propose the multiplication of false rates (MFR), a classification accuracy criterion that corresponds to a rectangular area obtained from the ROC curve. The optimal threshold obtained using MFR is compared with those of other criteria in terms of classification performance. The optimal thresholds for various distribution functions are also found; some properties and advantages of MFR are then discussed by comparing the FNR and FPR corresponding to the optimal thresholds. Based on a general cost function, the cost ratios of the optimal thresholds are computed using various classification criteria, and the cost ratios for cost curves are examined to explore the advantages of MFR. Furthermore, the definition of MFR is extended to multi-dimensional ROC analysis and the relations among classification criteria are discussed.
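
For illustration, a minimal sketch of MFR-based threshold selection, assuming classifier scores where higher means more likely positive; the function name and the threshold grid are illustrative choices, not taken from the paper:

```python
# Sketch: pick the threshold minimizing MFR(t) = FNR(t) * FPR(t).
import numpy as np

def mfr_threshold(scores, labels):
    """scores: classifier scores (higher = more positive); labels: 0/1."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = labels == 1, labels == 0
    best_t, best_mfr = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        fnr = np.mean(~pred[pos])   # misses among true positives
        fpr = np.mean(pred[neg])    # false alarms among true negatives
        if fnr * fpr < best_mfr:
            best_t, best_mfr = t, fnr * fpr
    return best_t, best_mfr
```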

Cost-sensitive Learning for Credit Card Fraud Detection (신용카드 사기 검출을 위한 비용 기반 학습에 관한 연구)

  • Park Lae-Jeong
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.545-551 / 2005
  • The main objective of fraud detection is to minimize the costs or losses incurred by fraudulent transactions. However, because of the nature of the problem, such as a highly skewed, overlapping class distribution and non-uniform misclassification costs, it is practically difficult to build a classifier that is near-optimal in terms of classification costs over a desired operating range of rejection rates. This paper defines a performance measure that reflects a classifier's costs at a specific operating range and offers a cost-sensitive learning approach that trains classifiers suitable for real-world credit card fraud detection by directly optimizing the performance measure with evolutionary programming. The experimental results demonstrate that, compared with other training methods, the proposed approach provides an effective way of training cost-sensitive classifiers for successful fraud detection.
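
As a reading aid, a hedged sketch of a cost measure evaluated at a fixed operating range of rejection rates, in the spirit of the abstract; the cost constants, averaging scheme, and names are assumptions:

```python
# Sketch: total cost when the top `reject_rate` fraction of transactions
# (by suspicion score) is rejected for review.
import numpy as np

def cost_at_rejection_rate(scores, is_fraud, reject_rate,
                           review_cost=1.0, fraud_loss=100.0):
    scores, is_fraud = np.asarray(scores), np.asarray(is_fraud)
    n = len(scores)
    k = int(np.ceil(reject_rate * n))
    order = np.argsort(scores)[::-1]        # most suspicious first
    rejected = np.zeros(n, dtype=bool)
    rejected[order[:k]] = True
    missed = (~rejected) & (is_fraud == 1)  # fraud that slipped through
    return k * review_cost + missed.sum() * fraud_loss

def mean_cost_over_range(scores, is_fraud, rates=(0.01, 0.02, 0.05)):
    # a simple average over the operating range of rejection rates
    return np.mean([cost_at_rejection_rate(scores, is_fraud, r) for r in rates])
```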

Empirical Bayesian Misclassification Analysis on Categorical Data (범주형 자료에서 경험적 베이지안 오분류 분석)

  • 임한승; 홍종선; 서문섭
    • The Korean Journal of Applied Statistics / v.14 no.1 / pp.39-57 / 2001
  • Categorical data sometimes contains misclassification errors. If such data is analyzed as-is, the estimated cell probabilities can be biased and the standard Pearson X2 test may have an inflated true type I error rate. On the other hand, if we treat well-classified data as misclassified, we may spend much cost and time adjusting for misclassification. It is therefore a necessary and important step to ask whether categorical data is misclassified before analyzing it. In this paper, we consider two-dimensional contingency tables in which one of the two variables is misclassified and the marginal sums of the well-classified variable are fixed, and we explore partitioning the marginal sums into cells via the Bound and Collapse concepts of Sebastiani and Ramoni (1997). The double sampling scheme (Tenenbein, 1970) is used to obtain information about the misclassification. We propose test statistics for detecting misclassification problems and examine the behavior of the statistics through simulation studies.
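
A rough sketch of the double-sampling idea (in the spirit of Tenenbein, 1970): a cheap, fallible classification of all units plus an error-free classification of a subsample is used to correct the fallible category proportions. This is a simple moment estimator for illustration, not the paper's estimator:

```python
# Sketch: correct fallible category proportions using a double sample.
import numpy as np

def corrected_proportions(fallible_all, fallible_sub, true_sub, n_cat):
    """fallible_all: fallible labels for all N units;
    fallible_sub/true_sub: both labelings on the subsample."""
    fallible_all = np.asarray(fallible_all)
    fallible_sub, true_sub = np.asarray(fallible_sub), np.asarray(true_sub)
    p_true = np.zeros(n_cat)
    for j in range(n_cat):
        w_j = np.mean(fallible_all == j)   # fallible share of category j
        in_j = fallible_sub == j
        if in_j.any():
            # distribution of the true category given fallible category j
            for k in range(n_cat):
                p_true[k] += w_j * np.mean(true_sub[in_j] == k)
    return p_true
```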


A Cost Effective Reference Data Sampling Algorithm Using Fractal Analysis (프랙탈 분석을 통한 비용효과적인 기준자료추출 알고리즘에 관한 연구)

  • 김창재; 이병길; 김용일
    • Proceedings of the KSRS Conference / 2000.04a / pp.149-154 / 2000
  • Remote sensing data obtained through classification must have its accuracy verified before it is used. Classification accuracy is assessed with a confusion matrix, and constructing the confusion matrix requires sampling of reference data. Many studies have compared sampling techniques for reference data and sought to reduce sample size, but little work has tried to reduce the cost of accuracy assessment by reducing the distance between sampled points. In this study, therefore, reference data samples were extracted via fractal analysis, and the resulting accuracy and cost-effectiveness were compared with those of existing sampling techniques. The results show that sampling via fractal analysis differs little from existing sampling techniques in accuracy estimation, while the extracted pixels cluster at short distances from one another, making the method more advantageous in terms of cost.
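
For reference, a minimal sketch of the overall-accuracy computation from a confusion matrix built on sampled reference data, the quantity compared across sampling schemes; names are illustrative:

```python
# Sketch: overall accuracy from a confusion matrix on sampled pixels.
import numpy as np

def overall_accuracy(predicted, reference, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for p, r in zip(predicted, reference):
        cm[r, p] += 1          # rows: reference class, cols: predicted class
    return np.trace(cm) / cm.sum()
```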


Weighted L1-Norm Support Vector Machine for the Classification of Highly Imbalanced Data (불균형 자료의 분류분석을 위한 가중 L1-norm SVM)

  • Kim, Eunkyung; Jhun, Myoungshic; Bang, Sungwan
    • The Korean Journal of Applied Statistics / v.28 no.1 / pp.9-21 / 2015
  • The support vector machine has been successfully applied to various classification areas due to its flexibility and high level of classification accuracy. However, when analyzing imbalanced data with uneven class sizes, the classification accuracy of SVM may drop significantly in predicting the minority class because SVM classifiers are undesirably biased toward the majority class. The weighted $L_2$-norm SVM was developed for the analysis of imbalanced data; however, it cannot identify irrelevant input variables due to the characteristics of the ridge penalty. Therefore, we propose the weighted $L_1$-norm SVM, which uses the lasso penalty to select important input variables and weights to differentiate the misclassification of data points between classes. We demonstrate the satisfactory performance of the proposed method through simulation studies and a real data analysis.
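
A small sketch of the weighted $L_1$-norm SVM idea, assuming a convex-optimization formulation with a lasso penalty on the coefficients and class-dependent weights on the hinge losses; the weight choice and solver are assumptions, not the paper's exact algorithm:

```python
# Sketch: lasso-penalized SVM with class-dependent hinge-loss weights.
import cvxpy as cp
import numpy as np

def weighted_l1_svm(X, y, C=1.0, w_minority=5.0):
    """y in {-1, +1}; the minority class (+1 here) gets a larger weight."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n, p = X.shape
    beta, beta0 = cp.Variable(p), cp.Variable()
    weights = np.where(y == 1, w_minority, 1.0)
    margins = cp.multiply(y, X @ beta + beta0)
    hinge = cp.sum(cp.multiply(weights, cp.pos(1 - margins)))
    # L1 penalty shrinks irrelevant coefficients exactly to zero
    cp.Problem(cp.Minimize(cp.norm1(beta) + C * hinge)).solve()
    return beta.value, beta0.value
```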

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram; Shim, Jae-Seung; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors began in Korea in the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is also important to minimize the misclassification cost of prediction, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as likely to reoffend is lower than the cost of misclassifying a person who will reoffend as unlikely to: the former incurs only additional monitoring costs, while the latter incurs social and economic costs. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the threshold is optimized to minimize the total misclassification cost, the weighted average of FNE (false negative error) and FPE (false positive error). To verify its usefulness, the model was applied to a real recidivism prediction dataset. The results confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
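
A hedged sketch of the two-step idea: fit a gradient-boosted classifier, then pick the probability threshold that minimizes the asymmetric total cost; the cost values and hyperparameters are illustrative assumptions:

```python
# Sketch: XGBoost fit, then threshold tuned for c_fn*FN + c_fp*FP.
import numpy as np
from xgboost import XGBClassifier

def fit_and_tune_threshold(X_tr, y_tr, X_val, y_val, c_fn=5.0, c_fp=1.0):
    y_val = np.asarray(y_val)
    model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_val)[:, 1]
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.01, 0.99, 99):
        pred = (proba >= t).astype(int)
        fn = np.sum((pred == 0) & (y_val == 1))   # recidivists missed
        fp = np.sum((pred == 1) & (y_val == 0))   # non-recidivists flagged
        cost = c_fn * fn + c_fp * fp              # asymmetric: c_fn > c_fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return model, best_t
```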

Aggregating Prediction Outputs of Multiple Classification Techniques Using Mixed Integer Programming (다수의 분류 기법의 예측 결과를 결합하기 위한 혼합 정수 계획법의 사용)

  • Jo, Hongkyu; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.9 no.1 / pp.71-89 / 2003
  • Although many studies demonstrate that one technique outperforms the others for a given data set, there is often no way to tell a priori which technique will be most effective for a classification problem. It has therefore been suggested that a better approach might be to integrate several different forecasting techniques. This study proposes a methodology for linearly combining different classification techniques. The methodology finds the optimal combining weights and computes the weighted average of the techniques' outputs, and it is formulated as a mixed integer program. The objective function minimizes the total misclassification cost, the weighted sum of the two types of misclassification. To simplify the solution process, the cutoff value is fixed and the threshold function is removed. The mixed integer program is solved with branch and bound. The results showed that the proposed methodology classified more accurately than any of the individual techniques, confirming that it predicts significantly better than the individual techniques and other combining methods.
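
A sketch of one plausible mixed-integer formulation of this combining idea, assuming a fixed cutoff and big-M constraints linking binary misclassification indicators to the combined score; the cost values and the big-M linking are modeling assumptions, not the paper's exact model:

```python
# Sketch: choose combining weights minimizing cost-weighted misclassifications.
import pulp

def combine_classifiers(scores, y, cutoff=0.5, c_fn=5.0, c_fp=1.0, M=10.0):
    """scores: n x J matrix of classifier outputs in [0, 1]; y: 0/1 labels."""
    n, J = len(scores), len(scores[0])
    prob = pulp.LpProblem("combine", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}", lowBound=0) for j in range(J)]
    z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]  # 1 = misclassified
    prob += pulp.lpSum((c_fn if y[i] == 1 else c_fp) * z[i] for i in range(n))
    prob += pulp.lpSum(w) == 1                     # convex combination
    eps = 1e-4
    for i in range(n):
        combo = pulp.lpSum(w[j] * scores[i][j] for j in range(J))
        if y[i] == 1:                              # must score above the cutoff
            prob += combo >= cutoff - M * z[i]
        else:                                      # must score below the cutoff
            prob += combo <= cutoff - eps + M * z[i]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))       # branch and bound via CBC
    return [v.value() for v in w]
```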


Novelty Detection Methods for Response Modeling (반응 모델링을 위한 이상탐지 기법)

  • Lee Hyeong-Ju; Jo Seong-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference / 2006.05a / pp.1825-1831 / 2006
  • This paper proposes the use of novelty detection techniques to relieve the class imbalance in response modeling. For a catalog mailing task on the DMEF4 dataset, two novelty detection techniques, one-class support vector machine (1-SVM) and learning vector quantization for novelty detection (LVQ-ND), are applied and compared with binary classification techniques. When the response rate is low, the novelty detection techniques achieve higher accuracy, whereas when the response rate is relatively high, an SVM with adjusted misclassification costs performs best. The novelty detection techniques also achieve the highest profit when the mailing cost is low, while the SVM model achieves the highest profit when the mailing cost is high.
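
A minimal sketch of the 1-SVM side of this comparison, assuming scikit-learn's one-class SVM trained only on responders; parameter values are illustrative:

```python
# Sketch: one-class SVM trained on the rare responder class only.
from sklearn.svm import OneClassSVM

def one_class_response_model(X_responders, X_test, nu=0.1):
    model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
    model.fit(X_responders)            # learn the responder region only
    return model.predict(X_test) == 1  # +1 means "looks like a responder"
```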


A Cost Effective Reference Data Sampling Algorithm Using Fractal Analysis (프랙탈 분석을 통한 비용효과적인 기준 자료추출알고리즘에 관한 연구)

  • 김창재
    • Spatial Information Research / v.8 no.1 / pp.171-182 / 2000
  • Random sampling or systematic sampling is commonly used to assess the accuracy of classification results. In remote sensing, these sampling methods require much time and tedious work to acquire sufficient ground truth data, so a more effective sampling method that retains the characteristics of the population is needed. In this study, fractal analysis is adopted as an index for reference sampling. The fractal dimensions of the whole study area and its sub-regions are calculated in order to choose the sub-regions whose dimensionality is most similar to that of the whole area. The whole area's classification accuracy is then compared with those of the sub-regions, and it is verified that the accuracies of the selected sub-regions are similar to that of the full area. Using the above procedure, a new reference sampling method is proposed. The results show that it is possible to reduce the sampling area and sample size while obtaining the same accuracy-test results as existing methods. Thus, the proposed method is proved cost-effective for reference data sampling.
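
For illustration, a minimal box-counting sketch for estimating the fractal dimension of a binary classified image, the index used here to compare sub-regions with the whole area; implementation details are assumptions, not the paper's algorithm:

```python
# Sketch: box-counting estimate of the fractal dimension of a 2-D mask.
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """img: 2-D boolean array (foreground = True)."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s, img.shape[1] // s
        trimmed = img[:h * s, :w * s]
        # count boxes of side s containing at least one foreground pixel
        boxes = trimmed.reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```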


Consumer Credit Scoring Model with Two-Stage Mathematical Programming (통합 수리계획법을 이용한 개인신용평가모형)

  • Lee, Sung-Wook; Roh, Tae-Hyup
    • The Journal of Information Systems / v.16 no.1 / pp.1-21 / 2007
  • Statistical discriminant analysis and data mining methods such as artificial neural networks and genetic algorithms have commonly been considered for the default-prediction classification problem in credit scoring. This study proposes a two-stage mathematical programming approach that accounts for the classification gap, demonstrating the feasibility of building credit scoring models with mathematical programming. In the first stage, linear programming is used as a screening filter to decide whether a loan applicant should be approved; in the second stage, integer programming is used to find the discriminant score that minimizes the misclassification cost. The performance of the proposed model is evaluated on personal-loan applicant data (the German Credit Data) against Fisher's linear discriminant function, logistic regression, and existing mathematical programming techniques. The evaluation results show the applicability of the two-stage mathematical programming approach to credit scoring in comparison with conventional statistical approaches and mathematical programming approaches.
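
A hedged sketch of the two-stage idea: stage 1 fits a linear scoring function by an LP minimizing the sum of classification deviations (a classic MSD-type formulation), and stage 2 searches for the cutoff that minimizes the asymmetric misclassification cost; all details are assumptions, not the paper's exact model:

```python
# Sketch: LP-based scoring (stage 1), then cost-minimizing cutoff (stage 2).
import numpy as np
import pulp

def stage1_lp_scores(X, y, cutoff=1.0):
    """y in {0, 1}: 1 = good credit. Returns learned scores per applicant."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    prob = pulp.LpProblem("msd", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{j}") for j in range(p)]
    d = [pulp.LpVariable(f"d{i}", lowBound=0) for i in range(n)]
    prob += pulp.lpSum(d)                      # minimize sum of deviations
    for i in range(n):
        s = pulp.lpSum(w[j] * X[i, j] for j in range(p))
        if y[i] == 1:
            prob += s >= cutoff - d[i]         # good applicants above cutoff
        else:
            prob += s <= cutoff + d[i]         # bad applicants below cutoff
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    wv = np.array([v.value() for v in w])
    return X @ wv

def stage2_best_cutoff(scores, y, c_bad_as_good=5.0, c_good_as_bad=1.0):
    y = np.asarray(y)
    best_t, best_cost = None, np.inf
    for t in np.unique(scores):
        cost = (c_bad_as_good * np.sum((scores >= t) & (y == 0)) +
                c_good_as_bad * np.sum((scores < t) & (y == 1)))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t
```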
