References
- Akaike H (1969). Fitting autoregressive models for prediction, Annals of the Institute of Statistical Mathematics, 21, 243-247. https://doi.org/10.1007/BF02532251
- Akaike H (1973). Information theory and an extension of the maximum likelihood principle. In Proceedings of the 2nd International Symposium on Information Theory, 267-281.
- Akaike H (1979). A Bayesian extension of the minimum AIC procedure of autoregressive model fitting, Biometrika, 66, 237-242. https://doi.org/10.1093/biomet/66.2.237
- Bollerslev T (1986). Generalized autoregressive conditional heteroskedasticity, Journal of Econometrics, 31, 307-327. https://doi.org/10.1016/0304-4076(86)90063-1
- Brockwell PJ and Davis RA (2006). Time Series: Theory and Methods (2nd ed), Springer, New York.
- Chen C (1999). Subset selection of autoregressive time series models, Journal of Forecasting, 18, 505-516. https://doi.org/10.1002/(SICI)1099-131X(199912)18:7<505::AID-FOR728>3.0.CO;2-U
- Claeskens G, Croux C, and Van Kerckhoven J (2007). Prediction focused model selection for autoregressive models, Australian & New Zealand Journal of Statistics, 49, 359-379. https://doi.org/10.1111/j.1467-842X.2007.00487.x
- Claeskens G and Hjort NL (2003). The focused information criterion, Journal of the American Statistical Association, 98, 900-916. https://doi.org/10.1198/016214503000000819
- Fan J and Li R (2001). Variable selection via nonconcave penalized likelihood and its oracle properties, Journal of the American Statistical Association, 96, 1348-1360. https://doi.org/10.1198/016214501753382273
- Fan J and Peng H (2004). Nonconcave penalized likelihood with a diverging number of parameters, The Annals of Statistics, 32, 928-961. https://doi.org/10.1214/009053604000000256
- Friedman J, Hastie T, Höfling H, and Tibshirani R (2007). Pathwise coordinate optimization, The Annals of Applied Statistics, 1, 302-332. https://doi.org/10.1214/07-AOAS131
- Hannan EJ (1980). The estimation of the order of an ARMA process, The Annals of Statistics, 8, 1071-1081. https://doi.org/10.1214/aos/1176345144
- Hannan EJ and Quinn BG (1979). The determination of the order of an autoregression, Journal of the Royal Statistical Society, Series B, 41, 190-195.
- Huang J, Horowitz JL, and Ma S (2008). Asymptotic properties of bridge estimators in sparse high-dimensional regression models, The Annals of Statistics, 36, 587-613. https://doi.org/10.1214/009053607000000875
- Kim Y, Choi H, and Oh H (2008). Smoothly clipped absolute deviation on high dimensions, Journal of the American Statistical Association, 103, 1656-1673.
- Kim Y, Jeon JJ, and Han S (2016). A necessary condition for the strong oracle property, Scandinavian Journal of Statistics, 43, 610-624. https://doi.org/10.1111/sjos.12195
- Kim Y and Kwon S (2012). Global optimality of nonconvex penalized estimators, Biometrika, 99, 315-325. https://doi.org/10.1093/biomet/asr084
- Kim Y, Kwon S, and Choi H (2012). Consistent model selection criteria on high dimensions, Journal of Machine Learning Research, 13, 1037-1057.
- Kwon S and Kim Y (2012). Large sample properties of the SCAD-penalized maximum likelihood estimation on high dimensions, Statistica Sinica, 22, 629-653.
- Kwon S, Lee S, and Na O (2017). Tuning parameter selection for the adaptive lasso in the autoregressive model, Journal of the Korean Statistical Society, 46, 285-297. https://doi.org/10.1016/j.jkss.2016.10.005
- Kwon S, Oh S, and Lee Y (2016). The use of random-effect models for high-dimensional variable selection problems, Computational Statistics & Data Analysis, 103, 401-412. https://doi.org/10.1016/j.csda.2016.05.016
- Lee S, Kwon S, and Kim Y (2016). A modified local quadratic approximation algorithm for penalized optimization problems, Computational Statistics & Data Analysis, 94, 275-286. https://doi.org/10.1016/j.csda.2015.08.019
- McClave J (1975). Subset autoregression, Technometrics, 17, 213-220. https://doi.org/10.2307/1268353
- McLeod AI and Zhang Y (2006). Partial autocorrelation parametrization for subset autoregression, Journal of Time Series Analysis, 27, 599-612. https://doi.org/10.1111/j.1467-9892.2006.00481.x
- Na O (2017). Generalized information criterion for the AR model, Journal of the Korean Statistical Society, 46, 146-160. https://doi.org/10.1016/j.jkss.2016.12.002
- Nardi Y and Rinaldo A (2011). Autoregressive process modeling via the lasso procedure, Journal of Multivariate Analysis, 102, 528-549. https://doi.org/10.1016/j.jmva.2010.10.012
- Sang H and Sun Y (2015). Simultaneous sparse model selection and coefficient estimation for heavy-tailed autoregressive processes, Statistics, 49, 187-208. https://doi.org/10.1080/02331888.2013.848865
- Sarkar A and Kanjilal PP (1995). On a method of identification of best subset model from full AR model, Communications in Statistics - Theory and Methods, 24, 1551-1567. https://doi.org/10.1080/03610929508831571
- Schmidt DF and Makalic E (2013). Estimation of stationary autoregressive models with the Bayesian LASSO, Journal of Time Series Analysis, 34, 517-531. https://doi.org/10.1111/jtsa.12027
- Schwarz G (1978). Estimating the dimension of a model, The Annals of Statistics, 6, 461-464. https://doi.org/10.1214/aos/1176344136
- Shen X, Pan W, Zhu Y, and Zhou H (2013). On constrained and regularized high-dimensional regression, Annals of the Institute of Statistical Mathematics, 65, 807-832.
- Shibata R (1976). Selection of the order of an autoregressive model by Akaike's information criterion, Biometrika, 63, 117-126. https://doi.org/10.1093/biomet/63.1.117
- Tibshirani R (1996). Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society, Series B, 58, 267-288.
- Tsay RS (1984). Order selection in nonstationary autoregressive models, The Annals of Statistics, 12, 1425-1433. https://doi.org/10.1214/aos/1176346801
- Wang H, Li B, and Leng C (2009). Shrinkage tuning parameter selection with a diverging number of parameters, Journal of the Royal Statistical Society, Series B, 71, 671-683. https://doi.org/10.1111/j.1467-9868.2008.00693.x
- Wang H, Li R, and Tsai C (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method, Biometrika, 94, 553-568. https://doi.org/10.1093/biomet/asm053
- Wu WB (2005). Nonlinear system theory: another look at dependence, Proceedings of the National Academy of Sciences of the United States of America, 102, 14150-14154. https://doi.org/10.1073/pnas.0506715102
- Wu WB (2011). Asymptotic theory for stationary processes, Statistics and Its Interface, 4, 207-226. https://doi.org/10.4310/SII.2011.v4.n2.a15
- Ye F and Zhang CH (2010). Rate minimaxity of the Lasso and Dantzig selector for the lq loss in lr balls, Journal of Machine Learning Research, 11, 3519-3540.
- Zhang CH (2010a). Nearly unbiased variable selection under minimax concave penalty, The Annals of Statistics, 38, 894-942. https://doi.org/10.1214/09-AOS729
- Zhang CH and Zhang T (2012). A general theory of concave regularization for high-dimensional sparse estimation problems, Statistical Science, 27, 576-593. https://doi.org/10.1214/12-STS399
- Zhang T (2010b). Analysis of multi-stage convex relaxation for sparse regularization, Journal of Machine Learning Research, 11, 1081-1107.
- Zou H (2006). The adaptive lasso and its oracle properties, Journal of the American Statistical Association, 101, 1418-1429. https://doi.org/10.1198/016214506000000735
- Zou H and Li R (2008). One-step sparse estimates in nonconcave penalized likelihood models, The Annals of Statistics, 36, 1509-1533. https://doi.org/10.1214/009053607000000802