Title/Summary/Keyword: moderately clipped LASSO

Moderately clipped LASSO for the high-dimensional generalized linear model

  • Lee, Sangin; Ku, Boncho; Kwon, Sunghoon
    • Communications for Statistical Applications and Methods / v.27 no.4 / pp.445-458 / 2020
  • The least absolute shrinkage and selection operator (LASSO) is a popular method for high-dimensional regression models. The LASSO has high prediction accuracy; however, it also selects many irrelevant variables. In this paper, we consider the moderately clipped LASSO (MCL) for the high-dimensional generalized linear model, a hybrid of the LASSO and the minimax concave penalty (MCP). The MCL preserves the advantages of both the LASSO and the MCP, since it attains high prediction accuracy while successfully selecting relevant variables. We prove that the MCL achieves the oracle property under some regularity conditions, even when the number of parameters is larger than the sample size. An efficient algorithm is also provided. Various numerical studies confirm that the MCL can be a better alternative to other competitors.
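
A minimal sketch of the penalty may help fix ideas. The abstract does not state the MCL formula, so the function below assumes a commonly cited form in which the penalty derivative is max{λ2, (λ1 − |t|/a)+}: an MCP at level λ1 near zero that is clipped to a LASSO at level λ2 for large coefficients. The names `mcl_penalty`, `lam1`, `lam2`, and `a` are illustrative, not the authors' code.

```python
import numpy as np

def mcl_penalty(t, lam1, lam2, a=3.0):
    """Hypothetical moderately clipped LASSO (MCL) penalty.

    Assumes the penalty derivative is max(lam2, (lam1 - |t|/a)_+),
    i.e. an MCP at level lam1 near the origin that switches to a
    LASSO at level lam2 beyond |t| = a*(lam1 - lam2), with lam1 >= lam2 >= 0.
    """
    t = np.abs(np.asarray(t, dtype=float))
    knot = a * (lam1 - lam2)                             # MCP part meets LASSO part here
    mcp_part = lam1 * t - t ** 2 / (2 * a)               # concave MCP region, |t| <= knot
    lasso_part = lam2 * t + a * (lam1 - lam2) ** 2 / 2   # linear LASSO region, |t| > knot
    return np.where(t <= knot, mcp_part, lasso_part)
```

Under this assumed form, setting lam2 = 0 recovers the MCP and lam2 = lam1 recovers the LASSO, which matches the interpolation described in both abstracts.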

High-dimensional linear discriminant analysis with moderately clipped LASSO

  • Chang, Jaeho; Moon, Haeseong; Kwon, Sunghoon
    • Communications for Statistical Applications and Methods / v.28 no.1 / pp.21-37 / 2021
  • There is a direct connection between linear discriminant analysis (LDA) and linear regression, since the direction vector of the LDA can be obtained by least squares estimation. This connection motivates penalized LDA when the model is high-dimensional, that is, when the number of predictor variables is larger than the sample size. In this paper, we study penalized LDA for a class of penalties, called the moderately clipped LASSO (MCL), which interpolates between the least absolute shrinkage and selection operator (LASSO) and the minimax concave penalty (MCP). We prove that the MCL-penalized LDA correctly identifies the sparsity of the Bayes direction vector with probability tending to one, a result supported by numerical studies showing better finite-sample performance than the LASSO.
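
The least squares connection cited in this abstract can also be sketched briefly. Assuming a two-class problem with a numerically coded response, ordinary least squares on centered data yields a coefficient vector proportional to the pooled-covariance LDA direction; the example below adds an L1 penalty via scikit-learn's `Lasso` purely as a stand-in for the MCL penalty, which is not part of standard libraries. The data, penalty level, and variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Two-class toy data: only the first 3 of 50 predictors separate the classes.
n, p = 100, 50
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, p))
X[y == 1, :3] += 1.5

# Least squares on a centered class coding gives a coefficient vector
# proportional to the pooled-covariance LDA direction.
y_coded = np.where(y == 1, 1.0, -1.0)
ols_dir, *_ = np.linalg.lstsq(X - X.mean(axis=0), y_coded - y_coded.mean(), rcond=None)

# Penalized version: an L1 (LASSO) penalty stands in for the MCL penalty here,
# shrinking irrelevant coordinates of the direction vector to exactly zero.
sparse_dir = Lasso(alpha=0.05).fit(X, y_coded).coef_
print("nonzero coordinates of the sparse direction:", np.flatnonzero(sparse_dir))
```

The L1 penalty here is only an illustration of the sparse-direction idea; the paper replaces it with the hybrid of the LASSO and the MCP described above.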