• Title/Summary/Keyword: penalized


EM Algorithm-based Segmentation of Magnetic Resonance Image Corrupted by Bias Field (바이어스필드에 의해 왜곡된 MRI 영상자료분할을 위한 EM 알고리즘 기반 접근법)

  • 김승구
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.2
    • /
    • pp.305-319
    • /
    • 2003
  • This paper provides a non-Bayesian method based on the expanded EM algorithm for segmenting magnetic resonance images degraded by a bias field. For images whose pixel values are intensities, many segmentation methods fail because of the bias field (low frequency) as well as noise (high frequency). Our contextual approach uses a normal mixture model incorporating a Markov random field for noise-corrected segmentation, and a penalized likelihood to estimate the bias field for efficient bias-field correction.
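The building block of such mixture-model segmentation is the EM algorithm for a normal mixture. As a minimal illustration (not the paper's bias-field-corrected, Markov-random-field method), a two-component unit-variance mixture can be fit as below; the data and initialization are hypothetical:

```python
import numpy as np

def em_two_normals(x, n_iter=100):
    """EM for a two-component normal mixture with unit variances --
    the basic ingredient behind mixture-model image segmentation."""
    mu = np.array([x.min(), x.max()])       # crude but deterministic initialization
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each observation
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component means
        pi = resp.mean(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return pi, mu

rng = np.random.default_rng(4)
x = np.r_[rng.normal(0, 1, 300), rng.normal(5, 1, 300)]
pi, mu = em_two_normals(x)
```

In image segmentation, `x` would be the vector of pixel intensities and the responsibilities would give the soft segmentation labels.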

An Additive Sparse Penalty for Variable Selection in High-Dimensional Linear Regression Model

  • Lee, Sangin
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.2
    • /
    • pp.147-157
    • /
    • 2015
  • We consider a sparse high-dimensional linear regression model. Penalized methods using LASSO or non-convex penalties have been widely used for variable selection and estimation in high-dimensional regression models. In penalized regression, the selection and prediction performance depends on which penalty function is used. For example, LASSO is known to have good prediction performance but tends to select more variables than necessary. In this paper, we propose an additive sparse penalty for variable selection that combines the LASSO and minimax concave (MCP) penalties. The proposed penalty is designed to retain the good properties of both LASSO and MCP. We develop an efficient algorithm to compute the proposed estimator by combining a concave-convex procedure with a coordinate descent algorithm. Numerical studies show that the proposed method has better selection and prediction performance than other penalized methods.
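As a hedged sketch of the coordinate descent ingredient mentioned above (applied to a plain LASSO penalty, not the proposed additive penalty), each coefficient is updated by soft-thresholding its partial-residual correlation; the design, noise level, and λ below are illustrative:

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual excluding j
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / n)
    return beta

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [3.0, -2.0, 1.5]
y = X @ true_beta + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.5)
```

The soft-threshold step is what produces exact zeros, so irrelevant coefficients are dropped rather than merely shrunk; a non-convex penalty like MCP replaces the thresholding rule but keeps the same coordinate-wise structure.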

Variable selection in Poisson HGLMs using h-likelihood

  • Ha, Il Do;Cho, Geon-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.6
    • /
    • pp.1513-1521
    • /
    • 2015
  • Selecting relevant variables for a statistical model is very important in regression analysis. Recently, variable selection methods using a penalized likelihood have been widely studied in various regression models. The main advantage of these methods is that they simultaneously select important variables and estimate the regression coefficients of the covariates. In this paper, we propose a simple procedure based on a penalized h-likelihood (HL) for variable selection in Poisson hierarchical generalized linear models (HGLMs) for correlated count data. To this end, we consider three penalty functions (LASSO, SCAD and HL) and derive the corresponding variable-selection procedures. The proposed method is illustrated using a practical example.
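For reference, the SCAD penalty of Fan and Li (2001), one of the three penalty functions considered, can be written elementwise as below (a = 3.7 is the conventional default; this is a generic sketch, not the authors' code):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty, elementwise: linear near zero like LASSO,
    then quadratic blending, then constant (no bias for large effects)."""
    b = np.abs(beta)
    small = b <= lam
    mid = (b > lam) & (b <= a * lam)
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1)),
                    lam ** 2 * (a + 1) / 2))

pen = scad_penalty(np.array([0.0, 1.0, 2.0, 100.0]), lam=1.0)
```

Because the penalty is constant beyond a·λ, large coefficients are not shrunk, which is what gives SCAD its oracle property in contrast to LASSO.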

Pruning the Boosting Ensemble of Decision Trees

  • Yoon, Young-Joo;Song, Moon-Sup
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.2
    • /
    • pp.449-466
    • /
    • 2006
  • We propose to use variable selection methods based on penalized regression for pruning decision tree ensembles. Pruning methods based on LASSO and SCAD are compared with the cluster pruning method in studies on artificial and real datasets. According to the results, the proposed methods based on penalized regression reduce the size of boosting ensembles without significantly decreasing accuracy and outperform the cluster pruning method. In the presence of classification noise, the proposed pruning methods can mitigate the weakness of AdaBoost to some degree.

Penalized variable selection for accelerated failure time models

  • Park, Eunyoung;Ha, Il Do
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.6
    • /
    • pp.591-604
    • /
    • 2018
  • The accelerated failure time (AFT) model is a linear model for the log-transformed survival time that has been introduced as a useful alternative to the proportional hazards (PH) model. In this paper we propose variable-selection procedures for the fixed effects in a parametric AFT model using penalized likelihood approaches. We use three popular penalty functions: the least absolute shrinkage and selection operator (LASSO), adaptive LASSO and smoothly clipped absolute deviation (SCAD). With these procedures we can select important variables and estimate the fixed effects at the same time. The performance of the proposed method is evaluated in simulation studies, including an investigation of the impact of misspecifying the assumed distribution. The proposed method is illustrated with a primary biliary cirrhosis (PBC) data set.

High-dimensional linear discriminant analysis with moderately clipped LASSO

  • Chang, Jaeho;Moon, Haeseong;Kwon, Sunghoon
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.1
    • /
    • pp.21-37
    • /
    • 2021
  • There is a direct connection between linear discriminant analysis (LDA) and linear regression, since the LDA direction vector can be obtained by least squares estimation. This connection motivates penalized LDA when the model is high-dimensional, i.e., the number of predictive variables is larger than the sample size. In this paper, we study penalized LDA for a class of penalties, called the moderately clipped LASSO (MCL), which interpolates between the least absolute shrinkage and selection operator (LASSO) and the minimax concave penalty. We prove that the MCL-penalized LDA correctly identifies the sparsity of the Bayes direction vector with probability tending to one, and concrete numerical studies show better finite-sample performance than LASSO.
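The LDA-regression connection can be checked numerically: regressing ±1 class labels on the features yields a coefficient vector proportional to the pooled-covariance LDA direction. A small simulated check (all values illustrative, two well-separated Gaussian classes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
mu0, mu1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.5, 0.0])
X0 = rng.standard_normal((n, 3)) + mu0
X1 = rng.standard_normal((n, 3)) + mu1
X = np.vstack([X0, X1])
y = np.r_[-np.ones(n), np.ones(n)]          # +/-1 class coding

# LDA direction: pooled-covariance inverse times mean difference
S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
w_lda = np.linalg.solve(S, X1.mean(axis=0) - X0.mean(axis=0))

# Least-squares direction (with intercept), dropping the intercept term
Xc = np.column_stack([np.ones(2 * n), X])
beta = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]

# Cosine similarity between the two directions (should be near 1)
cos = beta @ w_lda / (np.linalg.norm(beta) * np.linalg.norm(w_lda))
```

This equivalence is what lets sparsity-inducing penalties designed for regression (LASSO, MCP, MCL) be transplanted to the high-dimensional LDA problem.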

Penalized maximum likelihood estimation with symmetric log-concave errors and LASSO penalty

  • Park, Seo-Young;Kim, Sunyul;Seo, Byungtae
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.6
    • /
    • pp.641-653
    • /
    • 2022
  • Penalized least squares methods are important tools for simultaneously selecting variables and estimating parameters in linear regression. The penalized maximum likelihood can be used for the same purpose, assuming that the error distribution falls in a certain parametric family of distributions. However, the use of a particular parametric family can suffer from a misspecification problem that undermines estimation accuracy. To give the error distribution sufficient flexibility, we propose to use a symmetric log-concave error distribution with the LASSO penalty. A feasible algorithm to estimate both the nonparametric and parametric components of the proposed model is provided. Numerical studies show that the proposed method produces more efficient estimators than some existing methods, with similar variable selection performance.

Automatic Selection of Optimal Parameter for Baseline Correction using Asymmetrically Reweighted Penalized Least Squares (Asymmetrically Reweighted Penalized Least Squares을 이용한 기준선 보정에서 최적 매개변수 자동 선택 방법)

  • Park, Aaron;Baek, Sung-June;Park, Jun-Qyu;Seo, Yu-Gyung;Won, Yonggwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.124-131
    • /
    • 2016
  • Baseline correction is very important because it influences the performance of spectral analysis in spectroscopy applications. The baseline is often estimated by selecting a parameter through visual inspection of the analyte spectrum. This is a highly subjective procedure and can be tedious, especially with large amounts of data. For these reasons, an objective and automatic procedure for selecting the optimal parameter value for baseline correction is necessary. Asymmetrically reweighted penalized least squares (arPLS), based on penalized least squares, was proposed for baseline correction in our previous study; the method uses a new weighting scheme based on the generalized logistic function. In this study, we present an automatic method for selecting the optimal parameter for baseline correction using arPLS. The method computes fitness and smoothness values of the fitted baseline over the available range of parameters and selects the parameter at which the sum of the normalized fitness and smoothness is minimized. Experimental results using simulated data with varying baselines (sloping, curved, and doubly curved) and real Raman spectra confirm that the proposed method can be effectively applied to optimal parameter selection for baseline correction using arPLS.
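The penalized-least-squares machinery underlying arPLS solves (W + λDᵀD)z = Wy for a smooth baseline z, with iteratively updated weights w. The sketch below implements the simpler fixed-asymmetry variant (AsLS-style), not arPLS's logistic reweighting or the paper's parameter selection; the signal and all parameter values are illustrative:

```python
import numpy as np

def asymmetric_pls_baseline(y, lam=1e4, p=0.01, n_iter=10):
    """Asymmetric penalized least squares baseline: smoothness via a
    second-difference penalty, peaks down-weighted so the fit hugs the floor.
    arPLS replaces the fixed asymmetry p with a logistic reweighting."""
    m = len(y)
    D = np.diff(np.eye(m), 2, axis=0)       # second-difference operator, (m-2) x m
    P = lam * D.T @ D
    w = np.ones(m)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + P, w * y)
        w = np.where(y > z, p, 1.0 - p)     # down-weight points above the baseline
    return z

x = np.linspace(0, 1, 200)
baseline = 2 + x                            # sloping baseline
peak = 5 * np.exp(-((x - 0.5) ** 2) / 0.001)
y = baseline + peak
z = asymmetric_pls_baseline(y)
```

Subtracting `z` from `y` then leaves the peak on a flat background; in practice sparse matrices replace the dense solve, and λ (and p) are exactly the parameters the paper's procedure selects automatically.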

Choosing the Tuning Constant by Laplace Approximation

  • Ahn, Sung-Mahn;Kwon, Suhn-Beom
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.4
    • /
    • pp.597-605
    • /
    • 2012
  • The evidence framework enables us to determine the tuning constant in a penalized likelihood formula. We apply the framework to estimating the parameters of normal mixtures. The evidence, a solely data-dependent measure, can be evaluated by Laplace approximation. In a simulation with synthetic data, we found that proper values of the tuning constant can be obtained systematically.
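The idea can be illustrated on a toy model where the Laplace approximation happens to be exact: a normal mean θ with a Gaussian "penalty" (prior) of precision α. Maximizing the approximated evidence over a grid of α picks the tuning constant; this is a hypothetical sketch, not the paper's normal-mixture setup:

```python
import numpy as np

def log_evidence_laplace(y, alpha):
    """Laplace approximation to log p(y | alpha) for y_i ~ N(theta, 1) with
    prior theta ~ N(0, 1/alpha): evaluate the penalized log-likelihood at its
    mode and add the Gaussian volume term 0.5*log(2*pi/H)."""
    n = len(y)
    theta_hat = y.sum() / (n + alpha)            # penalized MLE (posterior mode)
    log_lik = -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - theta_hat) ** 2)
    log_prior = 0.5 * np.log(alpha / (2 * np.pi)) - 0.5 * alpha * theta_hat ** 2
    hessian = n + alpha                          # negative second derivative at mode
    return log_lik + log_prior + 0.5 * np.log(2 * np.pi / hessian)

rng = np.random.default_rng(2)
y = rng.standard_normal(50) + 1.0               # true mean 1.0
alphas = np.logspace(-3, 3, 61)
evid = np.array([log_evidence_laplace(y, a) for a in alphas])
best_alpha = alphas[np.argmax(evid)]
```

For a non-Gaussian model such as a normal mixture, the same formula is used with the Hessian of the penalized log-likelihood at the fitted parameters, and the approximation is no longer exact.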

Semiparametric Bayesian Estimation under Structural Measurement Error Model

  • Hwang, Jin-Seub;Kim, Dal-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.4
    • /
    • pp.551-560
    • /
    • 2010
  • This paper considers a Bayesian approach to modeling a flexible regression function under a structural measurement error model. The regression function is modeled by semiparametric regression with penalized splines. Model fitting and parameter estimation are carried out in a hierarchical Bayesian framework using Markov chain Monte Carlo methodology. The performance is compared with that of estimators under the structural measurement error model without a semiparametric component.
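A penalized-spline regression of the kind referred to can be sketched with a truncated-line basis and a ridge penalty on the knot coefficients (one common P-spline formulation, fit here by penalized least squares rather than the paper's hierarchical MCMC; basis, knots, and λ are illustrative):

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalized-spline fit: intercept + slope + truncated-line terms (x-k)+,
    with a ridge penalty applied only to the knot coefficients."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    P = np.zeros((B.shape[1], B.shape[1]))
    P[2:, 2:] = lam * np.eye(len(knots))    # do not penalize intercept/slope
    coef = np.linalg.solve(B.T @ B + P, B.T @ y)
    return B @ coef

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
knots = np.linspace(0.05, 0.95, 20)
fit = pspline_fit(x, y, knots, lam=0.5)
```

In the Bayesian version, the ridge penalty corresponds to a normal prior on the knot coefficients, with λ re-expressed as a ratio of variance components that the hierarchical model estimates.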