• Title/Summary/Keyword: penalized

169 search results

Detection of multiple change points using penalized least square methods: a comparative study between ℓ0 and ℓ1 penalty (벌점-최소제곱법을 이용한 다중 변화점 탐색)

  • Son, Won;Lim, Johan;Yu, Donghyeon
    • The Korean Journal of Applied Statistics / v.29 no.6 / pp.1147-1154 / 2016
  • In this paper, we numerically compare two penalized least square methods for finding multiple change points of a signal: the ℓ0-penalized method and the fused lasso regression (FLR, ℓ1 penalization). We find that the ℓ0-penalized method performs better than the FLR, which, as theory predicts, produces many false detections in some cases. In addition, the ℓ0-penalized method is computed by dynamic programming and is as efficient as the FLR.
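
The dynamic program mentioned in the abstract is, in its generic form, an optimal-partitioning recursion. The sketch below is an illustrative implementation of that idea, not the paper's own code: it fits each segment by its mean and charges a penalty `beta` per additional segment, which is an ℓ0-type criterion on the number of change points.

```python
def l0_changepoints(y, beta):
    """Exact l0-penalized segmentation by optimal-partitioning dynamic
    programming: minimize within-segment sum of squared errors plus a
    penalty beta for each additional segment."""
    n = len(y)
    # Prefix sums so each segment cost is O(1).
    s1, s2 = [0.0], [0.0]
    for v in y:
        s1.append(s1[-1] + v)
        s2.append(s2[-1] + v * v)

    def seg_cost(i, j):
        # Squared-error cost of fitting y[i:j] by its mean.
        m = j - i
        d = s1[j] - s1[i]
        return s2[j] - s2[i] - d * d / m

    F = [float("inf")] * (n + 1)   # F[t] = best cost of y[:t]
    F[0] = -beta                   # cancels the first segment's penalty
    last = [0] * (n + 1)           # start index of the last segment
    for t in range(1, n + 1):
        for i in range(t):
            c = F[i] + seg_cost(i, t) + beta
            if c < F[t]:
                F[t], last[t] = c, i
    # Backtrack to recover the change point locations.
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t:
            cps.append(t)
    return sorted(cps)
```

As written the recursion costs O(n²) segment evaluations; pruning schemes in the change point literature reduce this in practice.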

Computation and Smoothing Parameter Selection In Penalized Likelihood Regression

  • Kim, Young-Ju
    • Communications for Statistical Applications and Methods / v.12 no.3 / pp.743-758 / 2005
  • This paper considers penalized likelihood regression with data from an exponential family. The fast computation method for Gaussian data (Kim and Gu, 2004) is extended to non-Gaussian data through asymptotically efficient low-dimensional approximations, and a corresponding algorithm is proposed. Smoothing parameter selection is also explored for various exponential families, extending the existing cross-validation method of Xiang and Wahba, which was evaluated only with Bernoulli data.

Estimating Parameters in Multivariate Normal Mixtures

  • Ahn, Sung-Mahn;Baik, Sung-Wook
    • Communications for Statistical Applications and Methods / v.18 no.3 / pp.357-365 / 2011
  • This paper investigates a penalized likelihood method for estimating the parameters of normal mixtures in multivariate settings with full covariance matrices. The proposed model estimates the number of components by adding a penalty term to the usual likelihood function, yielding a penalized likelihood function. We prove the consistency of the estimator and present simulation results on multivariate normal mixtures of up to eight dimensions.

A correction of SE from penalized partial likelihood in frailty models

  • Ha, Il-Do
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.895-903 / 2009
  • The penalized partial likelihood based on the restricted maximum likelihood method has been widely used for inference in frailty models. However, the standard-error estimate for the frailty parameter estimator can be biased downward. In this paper we show that such underestimation can be corrected by using hierarchical likelihood. In particular, the hierarchical likelihood gives a statistically efficient procedure for various random-effect models, including frailty models. The proposed method is illustrated via a numerical example and a simulation study. The simulation results demonstrate that the corrected standard-error estimate substantially reduces this bias.

A Penalized Principal Components using Probabilistic PCA

  • Park, Chong-Sun;Wang, Morgan
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.151-156 / 2003
  • A variable selection algorithm for principal component analysis using a penalized likelihood method is proposed. We adopt the probabilistic principal component idea to obtain a likelihood function for the problem, and use the HARD penalty function to force the coefficients of irrelevant variables in each component to zero. Consistency and sparsity of the coefficient estimates are demonstrated with small simulated examples and an illustrative real example.

A Penalized Likelihood Method for Model Complexity

  • Ahn, Sung M.
    • Communications for Statistical Applications and Methods / v.8 no.1 / pp.173-184 / 2001
  • We present an algorithm for reducing the complexity of a general Gaussian mixture model by using a penalized likelihood method. An important assumption is that we begin with a model that is overfitted in terms of the number of components, so our main goal is to eliminate redundant components from the overfitted model. As shown in the simulation results, the algorithm works well with the selected densities.

Penalized Likelihood Regression: Fast Computation and Direct Cross-Validation

  • Kim, Young-Ju;Gu, Chong
    • Proceedings of the Korean Statistical Society Conference / 2005.05a / pp.215-219 / 2005
  • We consider penalized likelihood regression with exponential family responses. Parallel to recent developments in Gaussian regression, fast computation through asymptotically efficient low-dimensional approximations is explored, yielding an algorithm that scales much better than the O(n^3) algorithm for the exact solution. Customizations of the direct cross-validation strategy for smoothing parameter selection in various distribution families are also explored and evaluated.

Tests of Hypotheses in Multiple Samples based on Penalized Disparities

  • Park, Chanseok;Basu, Ayanendranath;Harris, Ian R.
    • Journal of the Korean Statistical Society / v.30 no.3 / pp.347-366 / 2001
  • Robust analogues of the likelihood ratio test are considered for testing hypotheses involving multiple discrete distributions. The test statistics generalize the Hellinger deviance test of Simpson (1989) and the disparity tests of Lindsay (1994), obtained by looking at a 'penalized' version of the distances; Harris and Basu (1994) suggest that the penalty be based on reweighting the empty cells. The results show that the tests based on the ordinary and penalized distances often enjoy better robustness properties than the likelihood ratio test. Moreover, the tests based on the penalized distances improve on those based on the ordinary distances: they are much closer to the likelihood ratio tests at the null, and their convergence to the χ² distribution appears to be dramatically faster. Extensive simulation results show that the improvement in performance due to the penalty is often substantial in small samples.
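
As a small illustration of the empty-cell reweighting idea described above, the sketch below computes a penalized Hellinger deviance for multinomial counts. It is an assumption-laden sketch rather than the paper's exact statistic: `h` is taken to be the reweighting constant applied to empty cells, with h = 1 recovering the ordinary Hellinger deviance.

```python
from math import sqrt

def penalized_hellinger(counts, probs, h=1.0):
    """Penalized Hellinger deviance for discrete data (illustrative).
    The ordinary statistic 2n * sum (sqrt(d) - sqrt(p))^2 gives each
    empty cell (d = 0) the contribution p; here that empty-cell
    contribution is reweighted by h, so h = 1 gives the ordinary
    deviance and h < 1 downweights empty cells."""
    n = sum(counts)
    total = 0.0
    for c, p in zip(counts, probs):
        d = c / n  # observed proportion in this cell
        if c > 0:
            total += (sqrt(d) - sqrt(p)) ** 2
        else:
            total += h * p
    return 2 * n * total
```

Downweighting empty cells (h < 1) is what pulls the statistic toward the likelihood ratio test at the null while keeping the robustness of the distance-based tests.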

A small review and further studies on the LASSO

  • Kwon, Sunghoon;Han, Sangmi;Lee, Sangin
    • Journal of the Korean Data and Information Science Society / v.24 no.5 / pp.1077-1088 / 2013
  • High-dimensional data analysis arises in almost all scientific areas and has evolved with the development of computing power, encouraging penalized estimation methods that play important roles in statistical learning. Over the past years, various penalized estimators have been developed; the least absolute shrinkage and selection operator (LASSO) proposed by Tibshirani (1996) has shown outstanding ability and has been central to the development of penalized estimation. In this paper, we first introduce a number of recent advances in high-dimensional data analysis using the LASSO. The topics include various statistical problems such as variable selection and grouped or structured variable selection under sparse high-dimensional linear regression models. Several unsupervised learning methods, including inverse covariance matrix estimation, are also presented. In addition, we address further studies on new applications that may establish a guideline on how to use the LASSO for the statistical challenges of high-dimensional data analysis.
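
To make the LASSO concrete, here is a minimal cyclic coordinate descent sketch for the objective (1/2n)·||y − Xb||² + λ·||b||₁. It is a generic textbook-style illustration, not code from the paper; it assumes no column of X is identically zero.

```python
def soft_threshold(z, g):
    """Soft-thresholding operator: the closed-form solution of the
    one-dimensional lasso problem."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1.
    X is a list of rows; returns the coefficient vector b."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    r = list(y)  # residual y - Xb, with b = 0 initially
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of column j with the partial residual
            # (residual with coordinate j's contribution added back).
            rho = sum(X[i][j] * (r[i] + X[i][j] * b[j]) for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            b_new = soft_threshold(rho, lam) / denom
            if b_new != b[j]:
                for i in range(n):
                    r[i] += X[i][j] * (b[j] - b_new)
                b[j] = b_new
    return b
```

Each coordinate update solves a one-dimensional lasso exactly, which is why the soft-thresholding operator appears; this is the same basic scheme used by fast LASSO solvers.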

Variable Selection Via Penalized Regression

  • Yoon, Young-Joo;Song, Moon-Sup
    • Communications for Statistical Applications and Methods / v.12 no.3 / pp.615-624 / 2005
  • In this paper, we review the variable-selection properties of the LASSO and SCAD in penalized regression. To remedy the weakness of SCAD under high noise levels, we propose a new penalty function called MSCAD, which relaxes the unbiasedness condition of SCAD. To compare MSCAD with the LASSO and SCAD, comparative studies are performed on simulated datasets and on a real dataset. The performances of the penalized regression methods are compared in terms of relative model error and the coefficient estimates. The experimental results show that, as expected, the performance of MSCAD lies between those of the LASSO and SCAD.
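
For reference, the SCAD penalty that MSCAD modifies is the piecewise function of Fan and Li (2001). The small sketch below (with the conventional a = 3.7) shows how it agrees with the ℓ1 penalty λ|t| near zero but flattens to a constant for large coefficients, which is the source of its near-unbiasedness for large effects.

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan and Li (2001): equal to the l1 penalty
    lam*|t| for |t| <= lam, quadratic on (lam, a*lam], and constant
    lam^2*(a+1)/2 beyond a*lam (requires a > 2)."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

The three pieces join continuously at |t| = λ and |t| = aλ, so the penalty is continuous with a continuous derivative, unlike the HARD penalty.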