Title/Summary/Keyword: Categorical Variables

Variable Selection for Multi-Purpose Multivariate Data Analysis (다목적 다변량 자료분석을 위한 변수선택)

  • Huh, Myung-Hoe; Lim, Yong-Bin; Lee, Yong-Goo
    • The Korean Journal of Applied Statistics, v.21 no.1, pp.141-149, 2008
  • Recently we frequently analyze multivariate data with a quite large number of variables. In such data sets, virtually duplicated variables may exist simultaneously even though they are conceptually distinguishable. Duplicated variables may cause problems such as the distortion of principal axes in principal component analysis and factor analysis, and the distortion of the distances between observations, i.e. the input for cluster analysis. Also, in supervised learning or regression analysis, duplicated explanatory variables often cause instability of the fitted models. Since real data analyses are often aimed at multiple purposes, it is necessary to reduce the number of variables to a parsimonious level. The aim of this paper is to propose a practical algorithm for selecting a subset of variables from a given set of p input variables, by the criterion of minimum trace of the partial variances of the unselected variables left unexplained by the selected variables. The usefulness of the proposed method is demonstrated in visualizing the relationship between selected and unselected variables, in building a predictive model with a very large number of independent variables, and in reducing the number of variables and purging/merging categories in categorical data.
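
As a rough sketch of the selection criterion described in the abstract, the code below greedily picks variables so that the total residual variance of the unselected variables, each regressed on its best single selected predictor, is minimized. This is a simplified single-predictor stand-in for the paper's minimum-trace criterion (which conditions on the whole selected set), and the data and variable names are invented.

```python
# Greedy variable selection sketch: at each step, choose the candidate that
# minimizes the summed residual variance of the still-unselected variables,
# where each unselected variable is regressed on its best single selected
# predictor. (A single-predictor simplification, for illustration only.)

def var(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def residual_variance(y, x):
    """Variance of y left unexplained by a simple regression on x."""
    vx = var(x)
    return var(y) if vx == 0 else var(y) - cov(x, y) ** 2 / vx

def greedy_select(data, k):
    names, selected = list(data), []
    for _ in range(k):
        best, best_score = None, float("inf")
        for cand in names:
            if cand in selected:
                continue
            others = [o for o in names if o not in selected and o != cand]
            score = sum(
                min(residual_variance(data[o], data[s])
                    for s in selected + [cand])
                for o in others
            )
            if score < best_score:
                best, best_score = cand, score
        selected.append(best)
    return selected

# x1 and x2 are near-duplicates; x3 carries independent information.
data = {
    "x1": [1.0, 2.0, 3.0, 4.0, 5.0],
    "x2": [1.1, 2.0, 2.9, 4.2, 5.0],
    "x3": [3.0, 1.0, 4.0, 1.0, 5.0],
}
print(greedy_select(data, k=2))  # one of the duplicates, plus x3
```

On these toy data only one member of the duplicated pair enters the selection; its redundant twin is passed over in favor of the informative x3.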

Variable selection for latent class analysis using clustering efficiency (잠재변수 모형에서의 군집효율을 이용한 변수선택)

  • Kim, Seongkyung; Seo, Byungtae
    • The Korean Journal of Applied Statistics, v.31 no.6, pp.721-732, 2018
  • Latent class analysis (LCA) is an important tool for exploring unseen latent groups in multivariate categorical data. In practice, it is important to select a suitable set of variables, because including too many variables in the model makes it complicated and reduces the accuracy of the parameter estimates. Dean and Raftery (Annals of the Institute of Statistical Mathematics, 62, 11-35, 2010) proposed a headlong search algorithm based on Bayesian information criterion (BIC) values to choose meaningful variables for LCA. In this paper, we propose a new variable selection procedure for LCA that utilizes the posterior probabilities obtained from each fitted model. We propose a new statistic to measure the adequacy of LCA and develop a variable selection procedure around it. The effectiveness of the proposed method is demonstrated through numerical studies.

An educational tool for regression models with dummy variables using Excel VBA (엑셀 VBA을 이용한 가변수 회귀모형 교육도구 개발)

  • Choi, Hyun Seok; Park, Cheolyong
    • Journal of the Korean Data and Information Science Society, v.24 no.3, pp.593-601, 2013
  • We often need to include categorical variables as explanatory variables in regression models. Categorical variables in regression models can be quantified through dummy variables. In this study, we provide an educational tool using Excel VBA that displays regression lines along with test results for regression models with a continuous explanatory variable and one or two categorical explanatory variables. The regression lines with test results are provided step by step for the model(s) with interaction(s), the model(s) without interaction(s) but with dummy variables, and the model without dummy variables. With this tool, we can easily understand the meaning of dummy variables and interaction effects through graphics, and further decide which model is better suited to the data at hand.
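
Outside Excel, the richest of the models described above, one continuous predictor, a 0/1 dummy for a two-level categorical variable, and their interaction, can be fit directly by least squares. The sketch below uses synthetic data and plain normal equations; it illustrates dummy coding and interaction, not the VBA tool itself.

```python
# Dummy-variable regression sketch: continuous x, binary category encoded
# as a 0/1 dummy d, plus the interaction x*d, fit by ordinary least squares.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least squares via the normal equations X'X beta = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# Group d=0 follows y = 1 + 2x; group d=1 follows y = 3 + 1x
# (different intercept AND slope, so the interaction term matters).
xs = [0, 1, 2, 3, 0, 1, 2, 3]
ds = [0, 0, 0, 0, 1, 1, 1, 1]
ys = [1 + 2 * x if d == 0 else 3 + 1 * x for x, d in zip(xs, ds)]
X = [[1.0, x, d, x * d] for x, d in zip(xs, ds)]  # intercept, x, dummy, interaction
beta = ols(X, ys)
print([round(b, 6) for b in beta])  # close to [1, 2, 2, -1]
```

Because group d=1 was generated with a different intercept and slope than group d=0, both the dummy coefficient and the interaction coefficient come out nonzero.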

Segmentation of Cooperatives' Mutuality Bank for Effective Risk Management using Factor Analysis and Cluster Analysis

  • Cho, Yong-Jun; Ko, Seoung-Gon
    • Journal of the Korean Data and Information Science Society, v.19 no.3, pp.831-844, 2008
  • Since cooperatives differ widely in their management environments and characteristics, it is necessary to partition similar cooperatives into a few groups for the effective risk management of the cooperatives' mutuality bank. This paper is preliminary research toward guidance on effective risk management for cooperatives with different management strategies. For this purpose, we propose a way to group the members of the cooperatives' mutuality bank. Thirty continuous variables related to the cooperatives' management status are considered, and six factors are extracted from those variables through factor analysis, with empirical consideration to avoid incorrect grouping and to enhance practical interpretation. Based on the six extracted factors and three additional categorical variables, six representative groups are derived by two-step cluster analysis. These findings are useful for executing differentiated risk management and other management strategies for a mutuality bank and similar institutions.
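
The second step described above, clustering on the extracted factor scores, can be illustrated with a tiny k-means. Factor extraction and the handling of the categorical variables are omitted, and the "factor scores" below are invented, so this is only a sketch of the clustering step, not the paper's two-step procedure.

```python
# Minimal k-means on hypothetical 2-dimensional factor scores: assign each
# point to its nearest center, recompute centers, repeat.

def kmeans(points, k, iters=20):
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [
            [sum(col) / len(col) for col in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    labels = [min(range(k),
                  key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
              for p in points]
    return labels, centers

# two clearly separated clouds of "factor scores"
pts = [[0.1, 0.0], [0.2, 0.1], [0.0, 0.2],
       [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
labels, _ = kmeans(pts, 2)
print(labels)  # the two clouds receive two different labels
```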

Analysis of Large Tables (대규모 분할표 분석)

  • Choi, Hyun-Jip
    • The Korean Journal of Applied Statistics, v.18 no.2, pp.395-410, 2005
  • For the analysis of large contingency tables formed by many categorical variables, we suggest a method to group the variables into several disjoint groups within which the variables are completely associated. We use a simple function of the Kullback-Leibler divergence as a similarity measure to find the groups. Since the groups are complete hierarchical sets, we can identify the association structure of the large tables by marginal log-linear models. Examples are introduced to illustrate the suggested method.
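
The abstract does not spell out its similarity function, but a standard KL-divergence-based association measure for two categorical variables is the divergence of their joint distribution from the product of the marginals (i.e., mutual information), which is zero exactly when the variables are independent. The sketch below computes that generic stand-in on invented data.

```python
# KL-based association between two categorical variables: divergence of the
# observed joint distribution from the product of its marginals. A generic
# stand-in for the paper's similarity measure, not its exact formula.
from math import log
from collections import Counter

def kl_association(xs, ys):
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(
        (c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in joint.items()
    )

a = ["y", "y", "n", "n"]
b = ["y", "y", "n", "n"]   # identical to a: completely associated
c = ["y", "n", "y", "n"]   # balanced against a: independent
print(round(kl_association(a, b), 4))  # log(2) ≈ 0.6931
print(round(kl_association(a, c), 4))  # 0.0
```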

Input Variable Importance in Supervised Learning Models

  • Huh, Myung-Hoe; Lee, Yong Goo
    • Communications for Statistical Applications and Methods, v.10 no.1, pp.239-246, 2003
  • Statisticians, or data miners, are often asked to assess the importance of the input variables in a given supervised learning model. For this purpose, one may rely on separate ad hoc measures depending on the model type, such as linear regression, neural networks, or trees. Consequently, conceptual consistency in input variable importance measures is lacking, so the measures cannot be used directly to compare different types of models, which is often done in data mining processes. In this short communication, we propose a unified approach to measuring the importance of input variables. Our method uses sensitivity analysis, which begins by perturbing the values of the input variables and monitoring the change in output. The scope of this research is limited to models with continuous output, although it is not difficult to extend the method to supervised learning models for categorical outcomes.
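
The perturb-and-monitor idea above can be sketched in a few lines: shift each input variable by a small step and average the absolute change in the model output; a larger change marks a more important input. The model, data, and step size below are invented for illustration, and the paper's actual sensitivity measure may differ in detail.

```python
# Sensitivity-based importance sketch: perturb one input at a time and
# record the mean absolute change in the model's output.

def importance(model, data, delta=0.1):
    p = len(data[0])
    return [
        sum(
            abs(model(x[:j] + [x[j] + delta] + x[j + 1:]) - model(x))
            for x in data
        ) / len(data)
        for j in range(p)
    ]

# a model that depends strongly on x0, weakly on x1, and not at all on x2
model = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
data = [[0.1, 0.05, 0.2], [0.5, 0.25, 1.0], [0.9, 0.45, 1.8]]
scores = importance(model, data)
print([round(s, 3) for s in scores])  # roughly [1.0, 0.1, 0.0]
```

The same routine works for any black-box model with numeric inputs, which is the point of a unified, model-agnostic measure.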

Neural Networks and Logistic Models for Classification: A Case Study

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.7 no.1, pp.13-19, 1996
  • In this paper, we study and compare two types of methods for classification when both continuous and categorical variables are used to describe each individual. One is the neural network (NN) method using backpropagation learning (BPL); the other is the logistic model (LM) method. Both the NN and the LM are based on projections of the data in directions determined from interconnection weights.

Bias Reduction in Split Variable Selection in C4.5

  • Shin, Sung-Chul; Jeong, Yeon-Joo; Song, Moon Sup
    • Communications for Statistical Applications and Methods, v.10 no.3, pp.627-635, 2003
  • In this short communication we discuss the bias problem of C4.5 in split variable selection and suggest a method to reduce the variable selection bias among categorical predictor variables. A penalty proportional to the number of categories is applied to the splitting criterion gain of C4.5. The results of empirical comparisons show that the proposed modification of C4.5 reduces the size of classification trees.
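
The penalty idea above can be sketched in a few lines: compute the usual information gain for a categorical split, then subtract a term that grows with the number of categories. The penalty constant lam and the toy data are invented; the paper's exact penalty form is not reproduced here.

```python
# Penalized-gain sketch: C4.5's information gain tends to favor predictors
# with many categories, so subtract a penalty proportional to the
# category count of the splitting variable.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def penalized_gain(x, y, lam=0.05):
    """Information gain of splitting y on x, minus lam * (#categories of x)."""
    n = len(y)
    cond = 0.0
    for cat in set(x):
        sub = [yi for xi, yi in zip(x, y) if xi == cat]
        cond += (len(sub) / n) * entropy(sub)
    return entropy(y) - cond - lam * len(set(x))

y      = ["a", "a", "b", "b"]
x_two  = ["l", "l", "r", "r"]   # 2 categories, perfectly predictive
x_many = ["1", "2", "3", "4"]   # 4 categories, "perfect" only because
                                # every row is its own category
print(penalized_gain(x_two, y), penalized_gain(x_many, y))
```

Unpenalized, both predictors score the same gain of 1 bit; the penalty breaks the tie in favor of the predictor with fewer categories, which is exactly the bias correction the abstract describes.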

The Confidence Intervals for Logistic Model in Contingency Table

  • Cho, Tae-Kyoung
    • Communications for Statistical Applications and Methods, v.10 no.3, pp.997-1005, 2003
  • We can use the logistic model for categorical data when the response variable is binary. In this paper we consider the problem of constructing confidence intervals for the logistic model in an I×J×2 contingency table. The construction is simplified by applying the logit transformation, which turns the problem into one for a linear form called the logit model. After obtaining confidence intervals for the logit model, the reverse transformation is applied to obtain confidence intervals for the logistic model.
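
The logit-then-invert construction can be sketched for a single binomial cell: form a Wald interval for log(p/(1-p)) using the delta-method standard error, then map both endpoints back through the inverse logit, which keeps them strictly inside (0, 1). The counts and confidence level below are illustrative, not from the paper's tables.

```python
# Confidence interval on the logit scale, transformed back to the
# probability scale (a single-cell sketch of the abstract's construction).
from math import log, exp, sqrt

def logit_ci(successes, n, z=1.96):
    p = successes / n
    logit = log(p / (1 - p))
    se = sqrt(1 / successes + 1 / (n - successes))  # delta-method SE of the logit
    lo, hi = logit - z * se, logit + z * se
    inv = lambda t: exp(t) / (1 + exp(t))           # inverse logit
    return inv(lo), inv(hi)

lo, hi = logit_ci(40, 100)
print(round(lo, 3), round(hi, 3))  # both endpoints stay inside (0, 1)
```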

A Post Stratification and Calibration under the Unit Nonresponse (단위 무응답 하에서 사후층화와 보정에 관하여)

  • 손창균; 홍기학; 이기성
    • Proceedings of the Korean Association for Survey Research Conference, 2001.06a, pp.57-70, 2001
  • In this paper we consider various estimation methods under unit nonresponse, including post-stratification estimation, regression estimation, and calibration (or generalized raking) estimation. All of them share a common form of calibration estimation based on post-stratification for categorical auxiliary variables.
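
The common post-stratified form the abstract refers to can be sketched as follows: compute the respondent mean within each category of an auxiliary variable, then reweight by known population shares, so nonrespondents are implicitly represented by respondents in the same stratum. The strata, shares, and responses below are invented for illustration.

```python
# Post-stratified mean sketch: within-stratum respondent means, reweighted
# by known population proportions of each stratum.

def post_stratified_mean(responses, pop_share):
    """responses: {stratum: list of respondent values};
    pop_share: {stratum: known population proportion, summing to 1}."""
    return sum(
        pop_share[h] * (sum(v) / len(v))
        for h, v in responses.items()
    )

responses = {"urban": [10.0, 12.0], "rural": [4.0, 6.0]}  # respondents only
pop_share = {"urban": 0.3, "rural": 0.7}                  # known from the frame
print(post_stratified_mean(responses, pop_share))         # close to 6.8
```

Here the unweighted respondent mean would be 8.0, but rural units are underrepresented among respondents relative to the frame, and the post-stratified estimate corrects for that.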
