• Title/Abstract/Keyword: penalized variable selection

Search results: 42 items

Penalized quantile regression tree

  • 김재오;조형준;방성완
    • The Korean Journal of Applied Statistics / Vol. 29, No. 7 / pp. 1361-1371 / 2016
  • The quantile regression model provides much useful information by exploring how explanatory variables relate to the conditional quantile function of the response. However, when the explanatory variables and the response have a nonlinear relationship, the traditional quantile regression model, which assumes a linear form, is not appropriate. In addition, variable selection methods are needed for high-dimensional data or data with highly correlated explanatory variables. For these reasons, this study proposes a penalized quantile regression tree model. The splitting rule of the proposed method is based on a residual analysis that overcomes both excessive computation time and the split-variable selection bias problem. Simulation studies and an empirical example confirm the good performance and usefulness of the proposed method.
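The check (pinball) loss underlying quantile regression can be sketched in a few lines. This is a generic illustration of the loss minimized at each node of such a tree, not the authors' residual-based splitting rule:

```python
def check_loss(residuals, tau):
    # Pinball (check) loss of quantile regression:
    # rho_tau(r) = r * (tau - 1{r < 0})
    return sum(r * (tau - (1 if r < 0 else 0)) for r in residuals)

# The tau-th sample quantile minimizes the check loss over constants c;
# for tau = 0.5 this is the median, which is robust to the outlier 100.
data = [1, 2, 3, 4, 100]
tau = 0.5
best = min(data, key=lambda c: check_loss([y - c for y in data], tau))
```

Here `best` is the median 3, illustrating why quantile fits are insensitive to extreme responses.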

A convenient approach for penalty parameter selection in robust lasso regression

  • Kim, Jongyoung;Lee, Seokho
    • Communications for Statistical Applications and Methods / Vol. 24, No. 6 / pp. 651-662 / 2017
  • We propose an alternative procedure for selecting the penalty parameter in $L_1$-penalized robust regression. The procedure is based on marginalizing a prior distribution over the penalty parameter, so the resulting objective function no longer contains the penalty parameter. In addition, the estimating algorithm automatically chooses a penalty level from the previous estimate of the regression coefficients. The proposed approach bypasses cross-validation and thereby saves computing time. Variable-wise penalization also performs best from the prediction and variable selection perspectives. Numerical studies using simulated data demonstrate the performance of our proposals, and the methods are applied to the Boston housing data. Through the simulation study and real data application, we show that our proposals are competitive with or much better than cross-validation in terms of prediction, variable selection, and computing time.
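The L1 machinery behind such procedures is the soft-thresholding operator. The sketch below illustrates generic variable-wise penalization (each coefficient receives its own penalty level), not the authors' marginalization scheme:

```python
def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty: shrink z toward zero and
    # set it exactly to zero when |z| <= lam -- this is what selects variables.
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Variable-wise penalization: a separate penalty level per coefficient.
z = [2.5, -0.3, 1.0]
lams = [1.0, 1.0, 2.0]
beta = [soft_threshold(zj, lj) for zj, lj in zip(z, lams)]
# the 2nd and 3rd coefficients are shrunk exactly to zero
```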

Efficient estimation and variable selection for partially linear single-index-coefficient regression models

  • Kim, Young-Ju
    • Communications for Statistical Applications and Methods / Vol. 26, No. 1 / pp. 69-78 / 2019
  • A structured model with both a single index and varying coefficients is a powerful tool for modeling high-dimensional data. It is widely used because the single index can overcome the curse of dimensionality while the varying coefficients allow nonlinear interaction effects in the model. For high-dimensional index vectors, variable selection becomes an important question in the model-building process. In this paper, we propose an efficient estimation and variable selection method based on a smoothing spline approach for partially linear single-index-coefficient regression models. We also propose an efficient algorithm that simultaneously estimates the coefficient functions in a data-adaptive lower-dimensional approximation space and selects significant variables in the index with the adaptive LASSO penalty. The empirical performance of the proposed method is illustrated with simulated and real data examples.
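As a generic illustration of the adaptive LASSO penalty used for the index (not the authors' smoothing-spline algorithm), the data-driven weights can be sketched as follows; the pilot estimates and `gamma` here are illustrative values:

```python
def adaptive_weights(beta_init, gamma=1.0, eps=1e-8):
    # Adaptive LASSO weight w_j = 1 / |beta_init_j|^gamma:
    # coefficients that look small in a pilot fit receive heavier
    # shrinkage, while large ones are shrunk less (eps avoids 1/0).
    return [1.0 / (abs(b) ** gamma + eps) for b in beta_init]

w = adaptive_weights([2.0, 0.1, 0.5])
# the 0.1 pilot coefficient gets the heaviest penalty weight
```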

Variable selection in censored kernel regression

  • Choi, Kook-Lyeol;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / Vol. 24, No. 1 / pp. 201-209 / 2013
  • In censored regression it is often the case that some input variables are unimportant, while some are more important than others. We propose a novel algorithm for selecting important input variables in censored kernel regression, based on penalized regression with a weighted quadratic loss function for censored data, where the weights are computed from the empirical survival function of the censoring variable. We employ a weighted version of ANOVA decomposition kernels to choose an optimal subset of important input variables. Experimental results are presented that indicate the performance of the proposed variable selection method.
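The empirical survival function used to form such weights is typically the Kaplan-Meier product-limit estimator; a minimal stdlib sketch (illustrative, not the authors' implementation):

```python
def kaplan_meier(times, events):
    # Product-limit (Kaplan-Meier) estimate of the survival function.
    # events[i] = 1 if the event was observed, 0 if the time is censored.
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    surv = {}  # maps each event time t to S(t)
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = removed = 0
        # group all subjects tied at time t
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            surv[t] = s
        at_risk -= removed
    return surv

surv = kaplan_meier([2, 3, 1], [1, 0, 1])
# events at t=1 and t=2, censoring at t=3: S(1)=2/3, S(2)=1/3
```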

Pruning the Boosting Ensemble of Decision Trees

  • Yoon, Young-Joo;Song, Moon-Sup
    • Communications for Statistical Applications and Methods / Vol. 13, No. 2 / pp. 449-466 / 2006
  • We propose to use variable selection methods based on penalized regression for pruning ensembles of decision trees. Pruning methods based on the LASSO and SCAD penalties are compared with the cluster pruning method in studies on artificial and real datasets. According to the results, the proposed penalized-regression methods reduce the size of boosting ensembles without significantly decreasing accuracy and outperform the cluster pruning method. In the presence of classification noise, the proposed pruning methods also mitigate the weakness of AdaBoost to some degree.
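The pruning idea can be sketched by treating each tree's predictions as one column of a design matrix and running a plain coordinate-descent LASSO: trees whose coefficients are shrunk exactly to zero are dropped from the ensemble. This is a generic illustration with toy data, not the paper's experimental setup:

```python
def lasso_cd(X, y, lam, n_iter=100):
    # Plain coordinate-descent LASSO; here each column of X plays the
    # role of one tree's predictions on the training sample.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            nj = sum(X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update: coefficients can hit exactly zero
            sign = 1.0 if zj >= 0 else -1.0
            beta[j] = sign * max(abs(zj) - lam, 0.0) / nj
    return beta

# "Tree" 1 predicts y well; "tree" 2 adds nothing, so it is pruned.
X = [[1, 1], [2, -1], [3, 1], [4, -1]]
y = [1, 2, 3, 4]
beta = lasso_cd(X, y, lam=0.1)
```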

Prediction of Quantitative Traits Using Common Genetic Variants: Application to Body Mass Index

  • Bae, Sunghwan;Choi, Sungkyoung;Kim, Sung Min;Park, Taesung
    • Genomics & Informatics / Vol. 14, No. 4 / pp. 149-159 / 2016
  • With the success of the genome-wide association studies (GWASs), many candidate loci for complex human diseases have been reported in the GWAS catalog. Recently, many disease prediction models based on penalized regression or statistical learning methods were proposed using candidate causal variants from significant single-nucleotide polymorphisms of GWASs. However, there have been only a few systematic studies comparing existing methods. In this study, we first constructed risk prediction models, such as stepwise linear regression (SLR), least absolute shrinkage and selection operator (LASSO), and Elastic-Net (EN), using a GWAS chip and GWAS catalog. We then compared the prediction accuracy by calculating the mean square error (MSE) value on data from the Korea Association Resource (KARE) with body mass index. Our results show that SLR provides a smaller MSE value than the other methods, while the numbers of selected variables in each model were similar.
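The comparison criterion itself is simple; a minimal sketch of choosing between two prediction models by MSE, with hypothetical fitted values standing in for the SLR and penalized fits:

```python
def mse(y_true, y_pred):
    # Mean squared prediction error, the comparison criterion above.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y = [1.0, 2.0, 3.0]
model_a = [1.1, 2.1, 2.9]  # hypothetical fit, small errors
model_b = [1.5, 1.5, 3.5]  # hypothetical fit, larger errors
better = "a" if mse(y, model_a) < mse(y, model_b) else "b"
```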

L1-penalized AUC-optimization with a surrogate loss

  • Hyungwoo Kim;Seung Jun Shin
    • Communications for Statistical Applications and Methods / Vol. 31, No. 2 / pp. 203-212 / 2024
  • The area under the ROC curve (AUC) is one of the most common criteria used to measure the overall performance of binary classifiers across a wide range of machine learning problems. In this article, we propose an L1-penalized AUC-optimization classifier that directly maximizes the AUC for high-dimensional data. To this end, we employ an AUC-consistent surrogate loss function combined with the L1-norm penalty, which enables us to estimate coefficients and select informative variables simultaneously. In addition, we develop an efficient optimization algorithm that adopts k-means clustering and proximal gradient descent, which offers computational advantages in obtaining solutions for the proposed method. Numerical simulation studies demonstrate that the proposed method shows promising performance in terms of prediction accuracy, variable selectivity, and computational cost.
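A minimal sketch of the empirical AUC and a pairwise hinge-type surrogate for 1 - AUC; this illustrates why a convex surrogate over score pairs is used, and does not reproduce the paper's specific surrogate or its k-means/proximal algorithm:

```python
def empirical_auc(scores_pos, scores_neg):
    # Fraction of (positive, negative) pairs ranked correctly; ties count 1/2.
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    return sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in pairs) / len(pairs)

def pairwise_hinge(scores_pos, scores_neg):
    # Convex surrogate for 1 - AUC: penalize pairs where the positive
    # does not outscore the negative by a margin of 1.
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    return sum(max(0.0, 1.0 - (p - n)) for p, n in pairs) / len(pairs)

auc = empirical_auc([0.9, 0.7], [0.8, 0.2])  # 3 of 4 pairs correctly ordered
```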

Model selection via Bayesian information criterion for divide-and-conquer penalized quantile regression

  • 강종경;한석원;방성완
    • The Korean Journal of Applied Statistics / Vol. 35, No. 2 / pp. 217-227 / 2022
  • The quantile regression model is widely used in many fields because it provides an efficient tool for examining complex information hidden in variables. However, modern large-scale, high-dimensional data make estimating quantile regression models very difficult due to limits on computation time and storage space. Divide-and-conquer is a technique that partitions the full data into several computationally manageable subsets and then reconstructs an estimator for the full data using only summary statistics from each partition. In this study, we apply the divide-and-conquer technique to penalized quantile regression and investigate a variable selection method based on the Bayesian information criterion. When the number of partitions is chosen appropriately, the proposed method is as consistent in variable selection as the usual penalized quantile regression estimator computed on the full data, while being efficient in computation speed. These advantages of the proposed method are confirmed through simulated and real data analyses.
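A minimal sketch of the two ingredients: averaging per-subset coefficient estimates, and scoring candidate models by BIC. The averaging rule and the BIC form shown are the textbook versions, not the paper's exact formulas for the quantile loss:

```python
import math

def combine_subset_estimates(estimates):
    # Divide-and-conquer: average the per-subset coefficient vectors to
    # reconstruct a full-data estimate from subset summaries only.
    m, p = len(estimates), len(estimates[0])
    return [sum(e[j] for e in estimates) / m for j in range(p)]

def bic(loglik, n_params, n_obs):
    # Bayesian information criterion; smaller is better, and the
    # log(n) factor increasingly favors sparse models as n grows.
    return -2.0 * loglik + n_params * math.log(n_obs)

beta = combine_subset_estimates([[1.0, 0.2], [3.0, -0.2]])
# with equal fit, BIC prefers the model with fewer parameters
```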

Hierarchically penalized support vector machine for the classification of imbalanced data with grouped variables

  • 김은경;전명식;방성완
    • The Korean Journal of Applied Statistics / Vol. 29, No. 5 / pp. 961-975 / 2016
  • The hierarchically penalized support vector machine (H-SVM) is a methodology that, when input variables are grouped, can simultaneously select groups and variables within groups while estimating the classification function. However, because H-SVM shrinks all variables equally regardless of their importance, estimation efficiency may be reduced. Moreover, in the classification of imbalanced data with unequal class sizes, the classification function is estimated with bias, so predictive power for the minority class may deteriorate. To remedy these problems, this paper proposes WAH-SVM, which improves variable selection performance by using adaptive tuning parameters and assigns different misclassification costs to the classes. Through simulation studies and real data analysis, we compared the performance of the proposed model with existing methods and confirmed its usefulness and applicability.
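Assigning class-specific misclassification costs amounts to weighting the hinge loss by class; a minimal sketch of a generic cost-sensitive hinge loss (illustrative, not the WAH-SVM objective):

```python
def weighted_hinge(y, f, cost_minority, cost_majority):
    # y[i] in {+1, -1} with +1 the minority class; f[i] is the classifier
    # score. A larger minority cost keeps the decision boundary from
    # being pulled toward the majority class.
    total = 0.0
    for yi, fi in zip(y, f):
        cost = cost_minority if yi == 1 else cost_majority
        total += cost * max(0.0, 1.0 - yi * fi)
    return total

# Both points sit on the boundary (f = 0): the minority error costs 3x.
loss = weighted_hinge([1, -1], [0.0, 0.0], cost_minority=3.0, cost_majority=1.0)
```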

A two-step approach for variable selection in linear regression with measurement error

  • Song, Jiyeon;Shin, Seung Jun
    • Communications for Statistical Applications and Methods / Vol. 26, No. 1 / pp. 47-55 / 2019
  • It is important to identify informative variables in high-dimensional data analysis; however, the task becomes challenging when covariates are contaminated by measurement error, because of the bias the error induces. In this article, we present a two-step approach for variable selection in the presence of measurement error. In the first step, we select important variables directly from the contaminated covariates, as if there were no measurement error. In the second step, we apply orthogonal regression to obtain unbiased estimates of the regression coefficients of the variables identified in the first step. We also propose a modification of the two-step approach that further enhances variable selection performance. Various simulation studies demonstrate the promising performance of the proposed method.
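For a single covariate with equal error variances, the second-step orthogonal (total least squares) regression has a closed-form slope. This textbook special case illustrates why the estimate, unlike OLS, is not attenuated toward zero by error in x; the paper's multivariate setting is not reproduced here:

```python
import math

def orthogonal_slope(x, y):
    # Orthogonal regression slope for one covariate, assuming equal
    # error variances in x and y: minimizes perpendicular distances
    # instead of vertical ones, correcting OLS attenuation bias.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

slope = orthogonal_slope([0.0, 1.0, 2.0], [0.0, 2.0, 4.0])  # exact line y = 2x
```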