References
- Breiman L (1996). Heuristics of instability and stabilization in model selection, The Annals of Statistics, 24, 2350-2383. https://doi.org/10.1214/aos/1032181158
- Efron B, Hastie T, Johnstone I, and Tibshirani R (2004). Least angle regression, The Annals of Statistics, 32, 407-499. https://doi.org/10.1214/009053604000000067
- Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association, 96, 1348-1360. https://doi.org/10.1198/016214501753382273
- Fan J and Peng H (2004). Nonconcave penalized likelihood with a diverging number of parameters, The Annals of Statistics, 32, 928-961. https://doi.org/10.1214/009053604000000256
- Friedman J, Hastie T, Höfling H, and Tibshirani R (2007). Pathwise coordinate optimization, The Annals of Applied Statistics, 1, 302-332. https://doi.org/10.1214/07-AOAS131
- Fu WJ (1998). Penalized regressions: the bridge versus the lasso, Journal of Computational and Graphical Statistics, 7, 397-416. https://doi.org/10.2307/1390712
- Huang J, Breheny P, Lee S, Ma S, and Zhang C (2016). The Mnet method for variable selection, Statistica Sinica, 26, 903-923.
- Kim Y, Choi H, and Oh HS (2008). Smoothly clipped absolute deviation on high dimensions, Journal of the American Statistical Association, 103, 1665-1673. https://doi.org/10.1198/016214508000001066
- Kim Y and Kwon S (2012). Global optimality of nonconvex penalized estimators, Biometrika, 99, 315-325. https://doi.org/10.1093/biomet/asr084
- Kwon S and Kim Y (2012). Large sample properties of the SCAD-penalized maximum likelihood estimation on high dimensions, Statistica Sinica, 22, 629-653.
- Kwon S, Kim Y, and Choi H (2013). Sparse bridge estimation with a diverging number of parameters, Statistics and Its Interface, 6, 231-242. https://doi.org/10.4310/SII.2013.v6.n2.a7
- Kwon S, Lee S, and Kim Y (2015). Moderately clipped lasso, Computational Statistics & Data Analysis, 92, 53-67. https://doi.org/10.1016/j.csda.2015.07.001
- Lee S, Kwon S, and Kim Y (2016). A modified local quadratic approximation algorithm for penalized optimization problems, Computational Statistics & Data Analysis, 94, 275-286. https://doi.org/10.1016/j.csda.2015.08.019
- Tibshirani R (1996). Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B (Methodological), 58, 267-288. https://doi.org/10.1111/j.2517-6161.1996.tb02080.x
- Yuille AL and Rangarajan A (2003). The concave-convex procedure (CCCP), Neural Computation, 15, 915-936. https://doi.org/10.1162/08997660360581958
- Zhang CH (2010). Nearly unbiased variable selection under minimax concave penalty, The Annals of Statistics, 38, 894-942. https://doi.org/10.1214/09-AOS729
- Zhang CH and Huang J (2008). The sparsity and bias of the lasso selection in high-dimensional linear regression, The Annals of Statistics, 36, 1567-1594. https://doi.org/10.1214/07-AOS520
- Zhang CH and Zhang T (2012). A general theory of concave regularization for high-dimensional sparse estimation problems, Statistical Science, 27, 576-593. https://doi.org/10.1214/12-STS399
- Zhao P and Yu B (2006). On model selection consistency of lasso, The Journal of Machine Learning Research, 7, 2541-2563.
- Zou H and Hastie T (2005). Regularization and variable selection via the elastic net, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67, 301-320. https://doi.org/10.1111/j.1467-9868.2005.00503.x