• Title/Summary/Keyword: cost-sensitive learning


A Study on the Improvement of Image Classification Performance in the Defense Field through Cost-Sensitive Learning of Imbalanced Data (불균형데이터의 비용민감학습을 통한 국방분야 이미지 분류 성능 향상에 관한 연구)

  • Jeong, Miae;Ma, Jungmok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.24 no.3
    • /
    • pp.281-292
    • /
    • 2021
  • With the development of deep learning technology, researchers and engineers keep attempting to apply deep learning in various industrial and academic fields, including defense. Most of these attempts assume that the data are balanced. In reality, much of the data is imbalanced, so the classifier is not built properly and the model's performance can be low. Therefore, this study proposes cost-sensitive learning as a solution to the imbalanced-data problem of image classification in the defense field. In the proposed model, cost-sensitive learning places a higher weight on the cost function terms of the minority class. Across 160 experiments on submarine/non-submarine and warship/non-warship datasets, the test F1-score is higher when cost-sensitive learning is applied than with standard learning. Furthermore, statistical tests confirm that the improvement is significant.
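
The weighting idea described in the abstract can be illustrated with a short sketch. This is not the paper's implementation; it assumes PyTorch and hypothetical class counts for a binary submarine/non-submarine task, and simply scales the cross-entropy term of the minority class by an inverse-frequency weight.

```python
import torch
import torch.nn as nn

# Hypothetical class counts: minority (submarine) vs. majority (non-submarine).
n_minority, n_majority = 100, 1900
total = n_minority + n_majority

# Inverse-frequency weights: the rarer class gets the larger weight.
class_weights = torch.tensor([total / (2 * n_majority),   # class 0: non-submarine
                              total / (2 * n_minority)])  # class 1: submarine

# Cost-sensitive cross-entropy: errors on the minority class are penalized more.
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Toy usage with random logits and labels standing in for a CNN's output.
logits = torch.randn(8, 2)                  # batch of 8, 2 classes
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)
print(loss.item())
```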

Cost-sensitive Learning for Credit Card Fraud Detection (신용카드 사기 검출을 위한 비용 기반 학습에 관한 연구)

  • Park Lae-Jeong
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.5
    • /
    • pp.545-551
    • /
    • 2005
  • The main objective of fraud detection is to minimize the costs or losses incurred due to fraudulent transactions. However, because of the problem's nature, with a highly skewed, overlapping class distribution and non-uniform misclassification costs, it is practically difficult to generate a classifier that is near-optimal in terms of classification costs at a desired operating range of rejection rates. This paper defines a performance measure that reflects a classifier's costs at a specific operating range and offers a cost-sensitive learning approach that trains classifiers suitable for real-world credit card fraud detection by directly optimizing the performance measure with evolutionary programming. The experimental results demonstrate that, compared to other training methods, the proposed approach provides an effective way of training cost-sensitive classifiers for successful fraud detection.
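
As an illustration only (not the paper's actual measure, which is not reproduced here), the sketch below computes an average misclassification cost over a range of rejection rates. It assumes NumPy, hypothetical cost values c_fn and c_fp, and a simple score-thresholding rejection rule.

```python
import numpy as np

def cost_at_rejection_range(scores, labels, reject_rates, c_fn=100.0, c_fp=1.0):
    """Average misclassification cost over an operating range of rejection rates.

    Illustrative stand-in for the paper's measure: transactions with the highest
    fraud scores are rejected, missed frauds cost c_fn, and falsely rejected
    legitimate transactions cost c_fp.
    """
    scores, labels = np.asarray(scores), np.asarray(labels)
    costs = []
    for r in reject_rates:
        threshold = np.quantile(scores, 1.0 - r)   # reject the top r fraction
        rejected = scores >= threshold
        fn = np.sum(~rejected & (labels == 1))     # frauds that slip through
        fp = np.sum(rejected & (labels == 0))      # legitimate ones rejected
        costs.append(c_fn * fn + c_fp * fp)
    return float(np.mean(costs))

# Toy usage: 1 = fraud, 0 = legitimate, with imperfect fraud scores.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels * 0.6 + rng.random(1000) * 0.5
print(cost_at_rejection_range(scores, labels, reject_rates=[0.01, 0.02, 0.05]))
```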

Cost-Sensitive Learning for Cardio-Cerebrovascular Disease Risk Prediction (심혈관질환 위험 예측을 위한 비용민감 학습 모델)

  • Yu Na Lee;Kyung-Hee Lee;Wan-Sup Cho
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.161-168
    • /
    • 2021
  • In this study, we propose a cardiovascular disease prediction model using machine learning. First, a multidimensional analysis of the differences between the normal and patient groups is performed and the results are visualized. In particular, we propose a predictive model using cost-sensitive learning, which can improve sensitivity when there is a high class imbalance between the two groups, as is typical of disease data. Predictive models are developed using CART and XGBoost, two representative machine learning techniques, and their predictions and performance are compared on cardiovascular disease patient data. According to the results, CART showed higher accuracy and specificity than XGBoost, with an accuracy of about 70% to 74%.
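
A minimal sketch of how class weights make CART cost-sensitive, assuming scikit-learn and a hypothetical imbalanced dataset standing in for the cardiovascular data; the paper's exact settings are not reproduced. XGBoost exposes the analogous scale_pos_weight parameter.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Hypothetical imbalanced data: ~4% "patients" (positive class), driven by feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 2.0).astype(int)

# CART with cost-sensitive class weights: errors on patients cost ~19x more.
cart = DecisionTreeClassifier(class_weight={0: 1.0, 1: 19.0}, max_depth=5,
                              random_state=0)
cart.fit(X, y)
print("sensitivity on training data:", recall_score(y, cart.predict(X)))
```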

ROC and Cost Graphs for General Cost Matrix Where Correct Classifications Incur Non-zero Costs

  • Kim, Ji-Hyun
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.1
    • /
    • pp.21-30
    • /
    • 2004
  • Accuracy is often not an adequate performance measure of classifiers when different prediction errors carry different costs. ROC and cost graphs can be used in such cases to compare and identify cost-sensitive classifiers. We extend ROC and cost graphs so that they can be used when a more general cost matrix is given, in which not only misclassifications but also correct classifications incur costs.
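
The point about non-zero costs for correct classifications can be made concrete with a small example. The sketch below is an illustration, not the paper's graph construction: it uses a 2x2 cost matrix with a non-zero diagonal and picks the minimum-expected-cost decision for a few posterior probabilities.

```python
import numpy as np

# General 2x2 cost matrix: rows = true class, columns = predicted class.
# Unlike the usual convention, correct classifications (the diagonal) also
# carry non-zero costs, e.g. the cost of the follow-up action itself.
C = np.array([[1.0, 10.0],    # true negative costs 1, false positive costs 10
              [25.0, 5.0]])   # false negative costs 25, true positive costs 5

def expected_cost(p_pos, C):
    """Expected cost of each decision given P(class=1|x); pick the cheaper one."""
    p = np.array([1.0 - p_pos, p_pos])
    cost_per_decision = p @ C          # cost of predicting 0 vs. predicting 1
    return cost_per_decision, int(np.argmin(cost_per_decision))

for p_pos in (0.1, 0.3, 0.6):
    costs, decision = expected_cost(p_pos, C)
    print(f"P(pos)={p_pos:.1f}  costs={costs}  predict={decision}")
```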

A Cost Sensitive Part-of-Speech Tagging: Differentiating Serious Errors from Minor Errors

  • Son, Jeong-Woo;Noh, Tae-Gil;Park, Seong-Bae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.12 no.1
    • /
    • pp.6-14
    • /
    • 2012
  • All types of part-of-speech (POS) tagging errors have been treated equally by existing taggers. However, the errors are not equally important, since some seriously affect the performance of subsequent natural language processing while others do not. This paper aims to minimize these serious errors while retaining the overall performance of POS tagging. Two gradient loss functions are proposed to reflect the different types of errors: they are designed to assign a larger cost to serious errors and a smaller cost to minor errors. Through a series of experiments, it is shown that a classifier trained with the proposed loss functions not only reduces serious errors but also achieves slightly higher accuracy than ordinary classifiers.
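
A hedged sketch of the general idea, assuming PyTorch, a toy tag set, and a hypothetical confusion-cost matrix: the loss is the expected confusion cost under the softmax output, so serious confusions (e.g. NOUN vs. VERB) are penalized more than minor ones. The paper's two gradient loss functions are not reproduced here.

```python
import torch
import torch.nn.functional as F

# Toy tag set; confusing NOUN with VERB is treated as a "serious" error for
# downstream processing, while NOUN vs. PROPN is treated as a "minor" error.
TAGS = ["NOUN", "PROPN", "VERB", "ADJ"]
cost = torch.tensor([[0.0, 1.0, 5.0, 3.0],   # row = gold tag, col = predicted tag
                     [1.0, 0.0, 5.0, 3.0],
                     [5.0, 5.0, 0.0, 3.0],
                     [3.0, 3.0, 3.0, 0.0]])

def expected_cost_loss(logits, gold, cost):
    """Differentiable cost-sensitive loss: expected confusion cost under softmax."""
    probs = F.softmax(logits, dim=-1)        # (batch, n_tags)
    return (probs * cost[gold]).sum(dim=-1).mean()

# Toy usage with random tagger logits.
logits = torch.randn(16, len(TAGS), requires_grad=True)
gold = torch.randint(0, len(TAGS), (16,))
loss = expected_cost_loss(logits, gold, cost)
loss.backward()
print(loss.item())
```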

Prediction of Diabetic Nephropathy from Diabetes Dataset Using Feature Selection Methods and SVM Learning (특징점 선택방법과 SVM 학습법을 이용한 당뇨병 데이터에서의 당뇨병성 신장합병증의 예측)

  • Cho, Baek-Hwan;Lee, Jong-Shill;Chee, Young-Joan;Kim, Kwang-Won;Kim, In-Young;Kim, Sun-I.
    • Journal of Biomedical Engineering Research
    • /
    • v.28 no.3
    • /
    • pp.355-362
    • /
    • 2007
  • Diabetes mellitus can cause devastating complications, which often result in disability and death, and diabetic nephropathy is a leading cause of death in people with diabetes. In this study, we tried to predict the onset of diabetic nephropathy from an irregular and unbalanced diabetic dataset. We collected clinical data from 292 patients with type 2 diabetes and performed preprocessing to extract 184 features and resolve the irregularity of the dataset. We compared several feature selection methods, such as ReliefF and sensitivity analysis, to remove redundant features and improve classification performance. We also compared learning methods based on the support vector machine, namely equal-cost learning and cost-sensitive learning, to tackle the class imbalance in the dataset. The best classifier, built on the 39 selected features, gave an area under the curve of 0.969 in receiver operating characteristic (ROC) analysis, which shows that our method can predict diabetic nephropathy with high generalization performance from an irregular and unbalanced dataset and that physicians can benefit from it for predicting diabetic nephropathy.
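
A minimal sketch of the equal-cost versus cost-sensitive SVM comparison, assuming scikit-learn and synthetic data standing in for the 292-patient, 39-feature dataset; the class_weight value is hypothetical and only illustrates the mechanism.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in for the preprocessed, imbalanced diabetes features.
rng = np.random.default_rng(1)
X = rng.normal(size=(292, 39))                                  # 292 patients, 39 features
y = (X[:, 0] + 0.7 * rng.normal(size=292) > 1.5).astype(int)    # rare nephropathy cases

# Equal-cost learning vs. cost-sensitive learning (heavier penalty on the minority).
equal_cost = SVC(kernel="rbf", probability=True).fit(X, y)
cost_sensitive = SVC(kernel="rbf", probability=True,
                     class_weight={0: 1.0, 1: 10.0}).fit(X, y)

for name, model in [("equal cost", equal_cost), ("cost-sensitive", cost_sensitive)]:
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(name, "training AUC:", round(auc, 3))
```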

Application of cost-sensitive LSTM in water level prediction for nuclear reactor pressurizer

  • Zhang, Jin;Wang, Xiaolong;Zhao, Cheng;Bai, Wei;Shen, Jun;Li, Yang;Pan, Zhisong;Duan, Yexin
    • Nuclear Engineering and Technology
    • /
    • v.52 no.7
    • /
    • pp.1429-1435
    • /
    • 2020
  • Applying an accurate parametric prediction model to identify abnormal or false pressurizer water levels (PWLs) is critical to the safe operation of marine pressurized water reactors (PWRs). Recently, deep-learning-based models have proved to be powerful feature extractors for high-accuracy prediction. However, their effectiveness in PWL prediction still suffers from two issues: the correlations between PWL and other feature parameters shift over time, and there is an example imbalance between fluctuation examples (minority) and stable examples (majority). To address these problems, we propose a cost-sensitive mechanism that helps the model learn the feature representation of later examples and fluctuation examples. By weighting the standard mean square error loss with a cost-sensitive factor, we develop a Cost-Sensitive Long Short-Term Memory (CSLSTM) model to predict the PWL of PWRs. The overall performance of the CSLSTM is assessed with a variety of evaluation metrics on experimental data collected from a marine PWR simulator. Comparisons with the Long Short-Term Memory (LSTM) model and the Support Vector Regression (SVR) model demonstrate the effectiveness of the CSLSTM.
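
The cost-sensitive weighting of the mean square error loss can be sketched as follows, assuming PyTorch; the per-example cost factors here are arbitrary placeholders, and the paper's exact weighting scheme is not reproduced.

```python
import torch
import torch.nn as nn

class CostSensitiveMSE(nn.Module):
    """MSE weighted by a per-example cost factor (illustrative sketch only).

    Fluctuation windows get a larger factor so the model does not ignore them.
    """
    def forward(self, pred, target, cost_factor):
        return (cost_factor * (pred - target) ** 2).mean()

# Toy usage: an LSTM head predicting the next water level from a window.
lstm = nn.LSTM(input_size=4, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
x = torch.randn(8, 20, 4)                   # 8 windows, 20 steps, 4 features
target = torch.randn(8, 1)

out, _ = lstm(x)
pred = head(out[:, -1])                     # predict from the last time step

# Larger cost for "fluctuation" windows (marked arbitrarily here).
cost_factor = torch.tensor([1., 1., 5., 1., 5., 1., 1., 5.]).unsqueeze(1)
loss = CostSensitiveMSE()(pred, target, cost_factor)
loss.backward()
print(loss.item())
```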

Classification of Human Papillomavirus (HPV) Risk Type via Text Mining

  • Park, Seong-Bae;Hwang, Sohyun;Zhang, Byoung-Tak
    • Genomics & Informatics
    • /
    • v.1 no.2
    • /
    • pp.80-86
    • /
    • 2003
  • Human Papillomavirus (HPV) infection is known as the main factor for cervical cancer, which is a leading cause of cancer deaths in women worldwide. Because there are more than 100 HPV types, it is critical to discriminate the HPVs related to cervical cancer from those that are not. In this paper, the risk type of HPVs is classified using their textual descriptions. The important issue in this problem is to distinguish false negatives from false positives: we must find as many high-risk HPVs as possible, even though we may miss some low-risk HPVs. For this purpose, AdaCost, a cost-sensitive learner, is adopted to assign different costs to different training examples. The experimental results on the HPV sequence database show that considering costs gives higher performance. The improvement in F-score is larger than that in accuracy, which implies that the number of high-risk HPVs found is increased.
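
An AdaCost-style weight update can be sketched as below, assuming NumPy; the cost-adjustment function beta shown is an illustrative choice and may differ from the original AdaCost formulation, so treat it as a sketch of the mechanism rather than the paper's method.

```python
import numpy as np

def adacost_style_update(weights, margins, costs, alpha):
    """One round of a cost-sensitive boosting weight update (AdaCost-style sketch).

    margins: y_i * h_t(x_i) in {-1, +1}; costs: per-example cost in [0, 1].
    Beta boosts the weight increase for costly misclassified examples and
    dampens the weight decrease for costly correct ones.
    """
    beta = np.where(margins < 0, 0.5 * costs + 0.5, -0.5 * costs + 0.5)
    new_w = weights * np.exp(-alpha * margins * beta)
    return new_w / new_w.sum()               # renormalize to a distribution

# Toy usage: high-risk HPV examples (cost 1.0) vs. low-risk ones (cost 0.2).
weights = np.full(6, 1 / 6)
margins = np.array([+1, -1, +1, -1, +1, -1])   # correct / wrong predictions
costs = np.array([1.0, 1.0, 0.2, 0.2, 1.0, 0.2])
print(adacost_style_update(weights, margins, costs, alpha=0.4))
```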

Secure Training Support Vector Machine with Partial Sensitive Part

  • Park, Saerom
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.1-9
    • /
    • 2021
  • In this paper, we propose a training algorithm for a support vector machine (SVM) with a sensitive variable. Although machine learning models enable automatic decision making in real-world applications, regulations prohibit the use of sensitive information in order to protect privacy. In particular, privacy protection of legally protected attributes such as race, gender, and disability is compulsory. We present an efficient least squares SVM (LSSVM) training algorithm that uses fully homomorphic encryption (FHE) to protect a partial sensitive attribute. Our framework posits that the data owner has both non-sensitive attributes and a sensitive attribute, while the machine learning service provider (MLSP) receives the non-sensitive attributes and an encrypted sensitive attribute. As a result, the data owner can obtain the encrypted model parameters without exposing the sensitive information to the MLSP. In the inference phase, both the non-sensitive attributes and the sensitive attribute are encrypted, and all computations are conducted in the encrypted domain. Through experiments on real data, we show that our proposed method implements a privacy-preserving sensitive LSSVM with FHE whose performance is comparable to the original LSSVM algorithm. In addition, we demonstrate that the efficient sensitive LSSVM with FHE significantly reduces the computational cost with only a small degradation in performance.
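
For orientation, LSSVM training reduces to solving one linear system, which the plaintext sketch below illustrates (assuming NumPy and a linear kernel); the paper's actual contribution, carrying this out on a homomorphically encrypted sensitive attribute, is omitted here.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Plaintext least-squares SVM training via one linear system (linear kernel).

    Labels in {-1, +1}; solves [0 1^T; 1 K+I/gamma][b; alpha] = [0; y].
    """
    n = len(y)
    K = X @ X.T                              # linear kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                   # bias b, coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new):
    return np.sign(X_new @ X_train.T @ alpha + b)

# Toy usage with two Gaussian blobs.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (30, 3)), rng.normal(+1, 1, (30, 3))])
y = np.concatenate([-np.ones(30), np.ones(30)])
b, alpha = lssvm_train(X, y, gamma=10.0)
print("training accuracy:", np.mean(lssvm_predict(X, alpha, b, X) == y))
```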

Learning a Classifier for Weight Grouping of Export Containers (기계학습을 이용한 수출 컨테이너의 무게그룹 분류)

  • Kang, Jae-Ho;Kang, Byoung-Ho;Ryu, Kwang-Ryel;Kim, Kap-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.2
    • /
    • pp.59-79
    • /
    • 2005
  • Export containers in a container terminal are usually classified into a few weight groups, and those belonging to the same group are placed together on the same stack. The reason for stacking by weight group is that it makes it easy to load the heavier containers onto a ship before the lighter ones, which is important for the balancing of the ship. However, since the weight information available at the time of container arrival is only an estimate, containers belonging to different weight groups are often stored together on the same stack. This causes extra moves, or rehandlings, of containers at loading time to fetch out the heavier containers placed under the lighter ones. In this paper, we use machine learning techniques to derive a classifier that can classify the containers into the weight groups with improved accuracy. We also show that a more useful classifier can be derived by applying a cost-sensitive learning technique, for which we introduce a scheme for searching for a good cost matrix. Simulation experiments show that our proposed method can reduce rehandlings by about 5~7% compared to the traditional weight grouping method.
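
One simple way to use a candidate cost matrix is at decision time, by choosing the weight group with minimum expected cost. The sketch below (assuming scikit-learn and a hypothetical estimated-weight feature) loops over a few candidate matrices; this only loosely mirrors the paper's scheme of searching for a good cost matrix.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: noisy estimated weight -> true weight group (0/1/2).
rng = np.random.default_rng(3)
true_weight = rng.uniform(5, 30, size=3000)
X = np.column_stack([true_weight + rng.normal(0, 3, 3000),   # shipper's estimate
                     rng.integers(0, 5, 3000)])              # e.g. shipper id
y = np.digitize(true_weight, bins=[13, 21])                  # light / medium / heavy
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_va)

def rehandling_cost_matrix(p):
    """Candidate cost matrix: confusing adjacent groups costs 1, distant groups p."""
    return np.array([[0, 1, p],
                     [1, 0, 1],
                     [p, 1, 0]], dtype=float)

# Simple search over candidate matrices; predict the minimum-expected-cost group.
for p in (1.0, 2.0, 4.0):
    C = rehandling_cost_matrix(p)
    pred = np.argmin(proba @ C, axis=1)        # minimum expected cost decision
    print(f"p={p}: total cost on validation = {C[y_va, pred].sum():.0f}")
```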
