• Title/Summary/Keyword: Classification Variables


Categorical Variable Selection in Naïve Bayes Classification (단순 베이즈 분류에서의 범주형 변수의 선택)

  • Kim, Min-Sun;Choi, Hosik;Park, Changyi
    • The Korean Journal of Applied Statistics / v.28 no.3 / pp.407-415 / 2015
  • Naïve Bayes classification assumes that the input variables are conditionally independent given the output variable. This assumption is unrealistic, but it simplifies the problem of high-dimensional joint probability estimation into a series of univariate probability estimations. Thus the naïve Bayes classifier is often adopted in the analysis of massive data sets, such as in spam e-mail filtering and recommendation systems. In this paper, we propose a variable selection method based on the χ² statistic between input and output variables. The proposed method retains the simplicity of the naïve Bayes classifier in terms of data processing and computation, while selecting relevant variables. We expect our method to be useful in classification problems for ultra-high-dimensional or big data, such as the classification of diseases based on single nucleotide polymorphisms (SNPs).
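The selection idea in this abstract — score each categorical input by its χ² statistic against the output and keep only high-scoring variables — can be sketched as follows. The toy data and the cutoff value are illustrative, not from the paper:

```python
from collections import Counter

def chi2_stat(x, y):
    """Pearson chi-square statistic between two categorical sequences."""
    n = len(x)
    obs = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    stat = 0.0
    for xv in px:
        for yv in py:
            expected = px[xv] * py[yv] / n
            observed = obs.get((xv, yv), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

# Toy data: X1 is associated with y, X2 is pure noise.
y  = [0, 0, 0, 0, 1, 1, 1, 1]
X1 = ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'b']
X2 = ['u', 'v', 'u', 'v', 'u', 'v', 'u', 'v']

scores = {'X1': chi2_stat(X1, y), 'X2': chi2_stat(X2, y)}
# Keep only variables whose statistic exceeds an (illustrative) cutoff;
# the retained variables then feed an ordinary naive Bayes classifier.
selected = [name for name, s in scores.items() if s > 2.0]
```

Each variable is scored independently of the others, so the screening pass preserves the one-dimensional counting that makes naive Bayes cheap on massive data.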

A credit classification method based on generalized additive models using factor scores of mixtures of common factor analyzers (공통요인분석자혼합모형의 요인점수를 이용한 일반화가법모형 기반 신용평가)

  • Lim, Su-Yeol;Baek, Jang-Sun
    • Journal of the Korean Data and Information Science Society / v.23 no.2 / pp.235-245 / 2012
  • Logistic discrimination is a useful statistical technique for quantitative analysis in the financial service industry. It is not only easy to implement but also achieves a good classification rate. The generalized additive model is useful for credit scoring since it shares the advantages of logistic discrimination while also accounting for nonlinear effects of the explanatory variables. It may, however, require too many additive terms when the number of explanatory variables is very large, and dependencies may exist among the variables. Mixtures of factor analyzers can be used for dimension reduction of high-dimensional features. This study proposes using the low-dimensional factor scores of mixtures of factor analyzers as the new features in the generalized additive model. The approach is demonstrated on real credit scoring data. A comparison of the correct classification rates of competing techniques shows the superiority of the generalized additive model using factor scores.
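As a rough sketch of the pipeline shape only — not the paper's model — the snippet below replaces the mixture of common factor analyzers with a single leading SVD direction, and the generalized additive model with a plain logistic fit on the resulting one-dimensional factor score; all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10 correlated features all driven by one latent factor.
n, p = 200, 10
f = rng.normal(size=n)                       # latent common factor
load = rng.normal(size=p)                    # factor loadings
X = np.outer(f, load) + 0.3 * rng.normal(size=(n, p))
y = (f + 0.2 * rng.normal(size=n) > 0).astype(float)

# Factor score via the leading SVD direction (a crude stand-in for the
# factor scores of mixtures of common factor analyzers).
Xc = X - X.mean(axis=0)
score = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]
score = (score - score.mean()) / score.std()

# Logistic fit on the 1-D score by gradient ascent (a linear stand-in
# for a smooth additive term).
w0, w1 = 0.0, 0.0
for _ in range(1000):
    pr = 1 / (1 + np.exp(-(w0 + w1 * score)))
    w0 += 0.1 * np.mean(y - pr)
    w1 += 0.1 * np.mean((y - pr) * score)

acc = np.mean(((1 / (1 + np.exp(-(w0 + w1 * score)))) > 0.5) == y)
```

The point of the construction is that the classifier sees one score per mixture component instead of many collinear raw features, which is what keeps the number of additive terms small.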

ACCOUNTING FOR IMPORTANCE OF VARIABLES IN MULTI-SENSOR DATA FUSION USING RANDOM FORESTS

  • Park No-Wook;Chi Kwang-Hoon
    • Proceedings of the KSRS Conference / 2005.10a / pp.283-285 / 2005
  • To account for the importance of variables in multi-sensor data fusion, random forests are applied to supervised land-cover classification. The random forests approach is a non-parametric ensemble classifier based on CART-like trees. Its distinguishing feature is that variable importance can be estimated by randomly permuting the variable of interest in all the out-of-bag samples for each tree. Supervised classification with a multi-sensor remote sensing data set including optical and polarimetric SAR data was carried out to illustrate the applicability of random forests. The experimental results show that the random forests approach could extract the variables or bands important for land-cover discrimination, and that it performed well compared with other non-parametric data fusion algorithms.

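The permutation idea described in this abstract works with any fitted classifier; in the sketch below a nearest-centroid rule stands in for the tree ensemble, and a variable's importance is the mean accuracy drop when its column is shuffled (toy data, not the paper's imagery):

```python
import random

random.seed(1)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[i * 0.1, random.random()] for i in range(40)]
y = [0] * 20 + [1] * 20          # class 1 has the larger feature-0 values

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

c0 = centroid([x for x, t in zip(X, y) if t == 0])
c1 = centroid([x for x, t in zip(X, y) if t == 1])

def predict(x):
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

def accuracy(rows):
    return sum(predict(x) == t for x, t in zip(rows, y)) / len(y)

base = accuracy(X)

def importance(j, trials=20):
    """Mean accuracy drop when column j is randomly permuted."""
    drop = 0.0
    for _ in range(trials):
        col = [x[j] for x in X]
        random.shuffle(col)
        Xp = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        drop += base - accuracy(Xp)
    return drop / trials
```

Shuffling the informative column destroys the accuracy while shuffling the noise column barely moves it, which is exactly the signal the out-of-bag permutation measure exploits.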

Performance Comparison of Mahalanobis-Taguchi System and Logistic Regression : A Case Study (마할라노비스-다구치 시스템과 로지스틱 회귀의 성능비교 : 사례연구)

  • Lee, Seung-Hoon;Lim, Geun
    • Journal of Korean Institute of Industrial Engineers / v.39 no.5 / pp.393-402 / 2013
  • The Mahalanobis-Taguchi System (MTS) is a diagnostic and predictive method for multivariate data. In the MTS, the Mahalanobis space (MS) of a reference group is obtained using the standardized variables of normal data. The Mahalanobis space can be used for multi-class classification. Once the MS is established, a useful set of variables is identified to assist in model analysis or diagnosis using orthogonal arrays and signal-to-noise ratios. Several other techniques have also been used for classification, such as linear discriminant analysis, logistic regression, decision trees, and neural networks. The goal of this case study is to compare the abilities of the Mahalanobis-Taguchi System and logistic regression on a common data set.
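The core MTS construction — estimate the mean and covariance of the normal (reference) group, then measure how far each new observation sits from that Mahalanobis space — can be sketched as follows. The data are simulated, the cutoff is illustrative, and the real MTS additionally screens variables with orthogonal arrays and S/N ratios:

```python
import numpy as np

rng = np.random.default_rng(42)

# "Normal" reference group: two positively correlated variables.
normal = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)

# Mahalanobis space: mean vector and inverse covariance of the reference group.
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def md2(x):
    """Squared Mahalanobis distance from the reference group."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Observations far from the Mahalanobis space are flagged as abnormal.
threshold = 9.0                         # ad-hoc cutoff for this sketch
inlier  = md2(np.array([0.1, 0.2]))
outlier = md2(np.array([3.0, -3.0]))    # violates the positive correlation
```

Note that the outlier's raw coordinates are individually unremarkable; it is flagged because it breaks the correlation structure, which is what Mahalanobis distance captures and Euclidean distance does not.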

Accidents Model of Arterial Link Sections by Logistic Model (로지스틱모형을 이용한 가로구간 사고모형)

  • Park, Byung-Ho;Lim, Jin-Kang;Han, Su-San
    • Journal of the Korean Society of Safety / v.25 no.4 / pp.90-95 / 2010
  • This study deals with accident models for arterial link sections in Cheongju. The objective is to develop an accident model for arterial link sections using logistic regression. The study uses 258 accident records from 322 arterial link sections. The main results are as follows. First, the Nagelkerke R² of the developed model is 0.309, and the t-values of the explanatory variables indicate a significant goodness of fit. Second, the variables adopted in the model are AADT and the numbers of exits and entries, all of which are statistically significant. Finally, the correct classification rate for total accidents at the arterial link sections is 72.7%.
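The correct classification rate quoted above is simply the share of sections whose predicted accident status (fitted probability above 0.5) matches the observed one. With hypothetical model outputs:

```python
# Hypothetical fitted probabilities and observed outcomes for 10 link sections.
probs    = [0.9, 0.8, 0.7, 0.6, 0.55, 0.45, 0.4, 0.3, 0.2, 0.1]
observed = [1,   1,   0,   1,   1,    0,    1,   0,   0,   0]

predicted = [int(p > 0.5) for p in probs]
rate = sum(p == o for p, o in zip(predicted, observed)) / len(observed)
# rate = 0.8 here: 8 of 10 sections classified correctly
```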

Discriminant Analysis of Binary Data with Multinomial Distribution by Using the Iterative Cross Entropy Minimization Estimation

  • Lee Jung Jin
    • Communications for Statistical Applications and Methods / v.12 no.1 / pp.125-137 / 2005
  • Many discriminant analysis models for binary data have been used in real applications, but no classification model dominates in all circumstances (Asparoukhov & Krzanowski (2001)). Lee and Hwang (2003) proposed a new classification model using the multinomial distribution with the maximum entropy estimation method. The model showed promising results for small numbers of variables, but its performance was not satisfactory for large numbers of variables. This paper explores the use of the iterative cross entropy minimization estimation method in place of maximum entropy estimation. Simulation experiments show that this method can compete with other well-known existing classification models.

Analysis on the Structure of Plant Community in Mt. Yongmun by Classification and Ordination Techniques (Classification 및 Ordination 방법에 의한 융문산 삼림의 식물군집 구조분석)

  • 이경재
    • Journal of Plant Biology / v.33 no.3 / pp.173-182 / 1990
  • To investigate the structure of the plant community of Mt. Yongmun in Kyonggi-do, fifty-four plots were set up by the clumped sampling method. Classification by TWINSPAN and DCA ordination were applied to the study area in order to classify the plots into several groups based on woody plant and environmental variables. By both techniques, the plant community was divided into two groups by aspect. The dominant species of the south aspect were Pinus densiflora, Quercus aliena, Q. mongolica, and Carpinus laxiflora, and those of the north aspect were Q. mongolica and Fraxinus rhynchophylla. The successional trend of tree species on the south aspect appears to run from P. densiflora through Q. serrata, Q. aliena, and Q. mongolica to C. laxiflora. Analysis of the relationship between the DCA stand scores and environmental variables showed that soil moisture, soil humus content, and soil pH tended to increase significantly from the P. densiflora and Q. mongolica communities to the C. laxiflora and F. rhynchophylla communities.


Prediction of extreme PM2.5 concentrations via extreme quantile regression

  • Lee, SangHyuk;Park, Seoncheol;Lim, Yaeji
    • Communications for Statistical Applications and Methods / v.29 no.3 / pp.319-331 / 2022
  • In this paper, we develop a new statistical model to forecast the PM2.5 level in Seoul, South Korea. The proposed model is based on extreme quantile regression with a lasso penalty. Various meteorological and air pollution variables are considered as predictors, and the lasso quantile regression performs variable selection and addresses the multicollinearity problem. The final prediction model is obtained by combining various extreme lasso quantile regression estimators, and we construct a binary classifier based on the model. Prediction performance is evaluated through standard measures for binary classification tests. We observe that the proposed method outperforms the other classification methods considered and predicts 'very bad' PM2.5 cases well.
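A minimal sketch of one building block — an L1-penalized (lasso) quantile regression fit by subgradient descent on the pinball loss — is shown below. The data are simulated, and this is a far simpler estimator than the paper's combined extreme-quantile model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: the upper tail of y depends on x1 only; x2 is irrelevant.
n = 300
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + rng.exponential(1.0, size=n)

tau, lam, lr = 0.9, 0.05, 0.05   # target quantile, lasso penalty, step size
w, b = np.zeros(2), 0.0
for _ in range(2000):
    u = y - (X @ w + b)
    # Subgradient of the pinball loss with respect to the prediction.
    g = np.where(u >= 0, -tau, 1.0 - tau)
    w -= lr * (X.T @ g / n + lam * np.sign(w))
    b -= lr * g.mean()

coverage = float(np.mean(y <= X @ w + b))   # should sit near tau = 0.9
```

The lasso term shrinks the coefficient of the irrelevant predictor toward zero, which is the variable-selection behavior the abstract relies on; a binary "very bad" alarm can then be raised whenever a new observation exceeds its predicted extreme quantile.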

A Study on Improving Classification Performance for Manufacturing Process Data with Multicollinearity and Imbalanced Distribution (다중공선성과 불균형분포를 가지는 공정데이터의 분류 성능 향상에 관한 연구)

  • Lee, Chae Jin;Park, Cheong-Sool;Kim, Jun Seok;Baek, Jun-Geol
    • Journal of Korean Institute of Industrial Engineers / v.41 no.1 / pp.25-33 / 2015
  • From the viewpoint of manufacturing applications, data mining is a useful method for finding meaningful knowledge about the states of processes. However, data from manufacturing processes usually have two characteristics: multicollinearity and an imbalanced distribution. These characteristics bias classification rules and cause irrelevant variables to be selected as important ones. In this paper, we propose a new data mining procedure to solve the problem. First, to determine candidate variables, we propose a multiple hypothesis test. Second, to build unbiased classification rules, we propose a decision tree learning method with different weights for each category of the quality variable. Experimental results with real PDP (plasma display panel) manufacturing data show that the proposed procedure yields better information than other data mining procedures.
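The effect of category weights on tree learning can be seen directly in a weighted impurity measure: a node that looks nearly pure under equal weights becomes maximally impure once the minority class is up-weighted. The weights here are illustrative, not from the paper:

```python
def weighted_gini(labels, weight):
    """Gini impurity where each class label carries a weight."""
    total = sum(weight[l] for l in labels)
    impurity = 1.0
    for c in set(labels):
        p = sum(weight[l] for l in labels if l == c) / total
        impurity -= p * p
    return impurity

node = [0] * 9 + [1]                 # 9 majority samples vs 1 minority sample

plain    = weighted_gini(node, {0: 1, 1: 1})   # node looks almost pure (0.18)
balanced = weighted_gini(node, {0: 1, 1: 9})   # minority up-weighted: 0.5
```

A split search that minimizes the weighted impurity therefore keeps pursuing splits that isolate minority (e.g. defective) samples instead of declaring the node pure and stopping early.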

Optimization of Decision Tree for Classification Using a Particle Swarm

  • Cho, Yun-Ju;Lee, Hye-Seon;Jun, Chi-Hyuck
    • Industrial Engineering and Management Systems / v.10 no.4 / pp.272-278 / 2011
  • The decision tree as a classification tool is used successfully in many areas, such as medical diagnosis, customer churn prediction, and signal detection. The main advantage of decision tree classifiers is their ability to break a complex structure down into a collection of simpler structures, providing a solution that is easy to interpret. Since the decision tree is a top-down algorithm using a divide-and-conquer induction process, there is a risk of reaching only a locally optimal solution. This paper proposes a procedure for optimally determining the thresholds of the chosen variables of a decision tree using adaptive particle swarm optimization (APSO). The proposed algorithm consists of two phases. First, we construct a decision tree and choose the relevant variables. Second, we find the optimal thresholds for those selected variables simultaneously using APSO. To validate the proposed algorithm, several artificial and real datasets are used. We compare our results with the original CART results and show that the proposed algorithm is promising for improving prediction accuracy.
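A bare-bones version of the idea — a particle swarm searching for the split threshold of a single chosen variable so as to maximize classification accuracy — might look like this. The data are toy, the swarm coefficients are fixed rather than adaptive, and the paper's APSO optimizes several variables' thresholds at once:

```python
import random

random.seed(3)

# One feature with overlapping classes; we search for the split threshold.
x = [0.5, 1.1, 1.8, 2.0, 2.2, 3.1, 3.3, 4.0]
y = [0,   0,   0,   1,   0,   1,   1,   1]

def accuracy(t):
    """Accuracy of the rule 'predict class 1 when x > t'."""
    return sum((xi > t) == bool(yi) for xi, yi in zip(x, y)) / len(x)

# A minimal particle swarm over the scalar threshold.
n_particles, iters = 10, 50
pos = [random.uniform(0, 5) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                       # each particle's best position so far
gbest = max(pos, key=accuracy)       # best position seen by the swarm
for _ in range(iters):
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if accuracy(pos[i]) > accuracy(pbest[i]):
            pbest[i] = pos[i]
    gbest = max(pbest, key=accuracy)
```

Because the classes overlap, a greedy single-pass split can land on a poor threshold; the swarm keeps exploring around the personal and global bests, which is the escape-from-local-optima behavior the paper is after.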