• Title/Summary/Keyword: Statistics Adaptive Linear Regression

Object Size Prediction based on Statistics Adaptive Linear Regression for Object Detection (객체 검출을 위한 통계치 적응적인 선형 회귀 기반 객체 크기 예측)

  • Kwon, Yonghye; Lee, Jongseok; Sim, Donggyu
    • Journal of Broadcast Engineering, v.26 no.2, pp.184-196, 2021
  • This paper proposes a statistics adaptive linear regression-based object size prediction method for object detection. YOLOv2 and YOLOv3, which are typical deep learning-based object detection algorithms, design the last layer of the network as a statistics adaptive exponential regression model to predict object sizes. However, because of the properties of the exponential function, an exponential regression model can propagate a large derivative of the loss function into all parameters of the network. We propose a statistics adaptive linear regression layer to ease the exploding-gradient problem of the exponential regression model. The proposed statistics adaptive linear regression model is used in the last layer of the network to predict object sizes with statistics estimated from the training dataset. We redesigned the network based on YOLOv3-tiny, and it shows higher performance than YOLOv3-tiny on the UFPR-ALPR dataset.
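
To make the contrast concrete, the minimal sketch below compares YOLO-style exponential box-size decoding with a statistics-adaptive linear alternative of the kind described above. This is an illustration only, not the authors' implementation; the anchor width, raw outputs, and the training-set mean and standard deviation are made-up values.

```python
# Illustrative comparison (assumed values, not the paper's network code).
import numpy as np

anchor_w = 32.0                         # hypothetical anchor (prior) width
t_w = np.array([-1.0, 0.0, 1.0, 3.0])   # hypothetical raw network outputs for box width

# YOLO-style exponential decoding: b_w = anchor_w * exp(t_w).
# Large t_w values inflate both the prediction and the gradient d(b_w)/d(t_w) = b_w.
b_w_exp = anchor_w * np.exp(t_w)

# Statistics-adaptive linear decoding: map the output linearly using the mean and
# standard deviation of object widths estimated from the training set (assumed
# numbers below), so the gradient with respect to t_w stays constant.
train_mean_w, train_std_w = 48.0, 12.0
b_w_lin = train_mean_w + train_std_w * t_w

print("exponential decoding:", b_w_exp)
print("linear decoding     :", b_w_lin)
```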

Comparison of Lasso Type Estimators for High-Dimensional Data

  • Kim, Jaehee
    • Communications for Statistical Applications and Methods, v.21 no.4, pp.349-361, 2014
  • This paper compares lasso-type estimators in various high-dimensional data situations with sparse parameters. The lasso, adaptive lasso, fused lasso, and elastic net, together with the ridge estimator, are compared via simulation in linear models with correlated and uncorrelated covariates and in binary regression models with correlated and discrete covariates. Each method is shown to have advantages under different penalty conditions according to the sparsity patterns of the regression parameters. We apply the lasso-type methods to Arabidopsis microarray gene expression data to find strongly significant genes that distinguish two groups.
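
A comparison in the same spirit can be run with off-the-shelf estimators. The sketch below fits ridge, lasso, elastic net, and an adaptive lasso (implemented as a reweighted lasso) to simulated sparse data; the simulation design and tuning parameters are illustrative choices, not the paper's.

```python
# Illustrative lasso-type comparison on simulated sparse data (assumed settings).
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 50
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 1.0, 2.5]          # sparse true coefficient vector
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

# Adaptive lasso via a common reparameterization trick: scale each column by
# 1/w_j with w_j = 1/|beta_ols_j|, run a plain lasso, then rescale back.
beta_ols = LinearRegression().fit(X, y).coef_
w = 1.0 / (np.abs(beta_ols) + 1e-8)
lasso_ad = Lasso(alpha=0.1).fit(X / w, y)
beta_adaptive = lasso_ad.coef_ / w

fits = {
    "ridge": Ridge(alpha=1.0).fit(X, y).coef_,
    "lasso": Lasso(alpha=0.1).fit(X, y).coef_,
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_,
    "adaptive lasso": beta_adaptive,
}
for name, coef in fits.items():
    print(f"{name:>14}: nonzero coefficients = {np.sum(np.abs(coef) > 1e-6)}")
```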

Bootstrap Bandwidth Selection Methods for Local Linear Jump Detector

  • Park, Dong-Ryeon
    • Communications for Statistical Applications and Methods, v.19 no.4, pp.579-590, 2012
  • Local linear jump detection in a discontinuous regression function requires a choice of bandwidth, and the performance of a local linear jump detector depends heavily on that choice. However, little attention has been paid to this important issue. In this paper, we propose two fully data-adaptive bandwidth selection methods for a local linear jump detector. The performance of the proposed methods is investigated through a simulation study.
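
For readers unfamiliar with the detector itself, the toy sketch below implements a basic local linear jump detector: the jump statistic at each grid point is the difference between right- and left-sided local linear fits, and the bandwidth h passed in is exactly the quantity the paper's bootstrap procedures aim to select. The kernel, simulated data, and fixed h are illustrative assumptions.

```python
# Toy local linear jump detector (illustrative; not the paper's bandwidth selectors).
import numpy as np

def one_sided_local_linear(x0, x, y, h, side):
    """Local linear fitted value at x0 using only points on one side of x0."""
    mask = (x >= x0) if side == "right" else (x <= x0)
    d = x[mask] - x0
    w = np.maximum(1 - np.abs(d) / h, 0)          # triangular kernel weights
    if np.sum(w > 0) < 2:
        return np.nan
    W = np.diag(w)
    D = np.column_stack([np.ones_like(d), d])
    beta = np.linalg.pinv(D.T @ W @ D) @ D.T @ W @ y[mask]
    return beta[0]                                 # intercept = fit at x0

def jump_statistic(x, y, h, grid):
    return np.array([one_sided_local_linear(t, x, y, h, "right")
                     - one_sided_local_linear(t, x, y, h, "left") for t in grid])

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + (x > 0.6) * 1.5 + rng.normal(scale=0.2, size=x.size)

grid = np.linspace(0.1, 0.9, 81)
stat = jump_statistic(x, y, h=0.1, grid=grid)      # h would be chosen data-adaptively
print("estimated jump location:", grid[np.nanargmax(np.abs(stat))])
```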

Efficient estimation and variable selection for partially linear single-index-coefficient regression models

  • Kim, Young-Ju
    • Communications for Statistical Applications and Methods, v.26 no.1, pp.69-78, 2019
  • A structured model with both a single index and varying coefficients is a powerful tool for modeling high-dimensional data. It has been widely used because the single index can overcome the curse of dimensionality and the varying coefficients allow nonlinear interaction effects in the model. For high-dimensional index vectors, variable selection becomes an important question in the model-building process. In this paper, we propose an efficient estimation and variable selection method based on a smoothing spline approach in a partially linear single-index-coefficient regression model. We also propose an efficient algorithm for simultaneously estimating the coefficient functions in a data-adaptive lower-dimensional approximation space and selecting significant variables in the index with the adaptive LASSO penalty. The empirical performance of the proposed method is illustrated with simulated and real data examples.
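
For orientation, one common formulation of such a model in the literature (stated here only as an assumed reference point, not quoted from the paper) combines a linear part with coefficient functions that depend on a single index:

```latex
% Assumed generic form: W carries the partially linear part, the coefficients of Z
% vary with the single index \alpha^\top X, and variable selection targets \alpha.
Y = W^{\top}\beta + Z^{\top} g(\alpha^{\top} X) + \varepsilon,
\qquad \lVert \alpha \rVert = 1 \ \text{(for identifiability)}
```

In this notation, the abstract's approach estimates the coefficient functions g by smoothing splines in a lower-dimensional approximation space and applies the adaptive LASSO penalty to the index vector α.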

Efficient Score Estimation and Adaptive Rank and M-estimators from Left-Truncated and Right-Censored Data

  • Kim, Chul-Ki
    • Communications for Statistical Applications and Methods, v.3 no.3, pp.113-123, 1996
  • A data-dependent (adaptive) choice of asymptotically efficient score functions for rank estimators and M-estimators of regression parameters in a linear regression model with left-truncated and right-censored data is developed herein. The locally adaptive smoothing techniques of Muller and Wang (1990) and Uzunogullari and Wang (1992) provide good estimates of the hazard function h and its derivative h' from left-truncated and right-censored data. However, since we need to estimate h'/h for the asymptotically optimal choice of score functions, the naive estimator, which is simply the ratio of the estimated h' and h, turns out to have a few drawbacks. An alternative method that overcomes these shortcomings and also speeds up the algorithm is developed. In particular, we use a subroutine of the PPR (Projection Pursuit Regression) method coded by Friedman and Stuetzle (1981) to find the nonparametric derivative of log(h) for the problem of estimating h'/h.
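
The identity behind the alternative described above is worth spelling out: the quantity needed for the optimal score functions is the log-derivative of the hazard,

```latex
% Identity motivating direct estimation of the log-hazard derivative:
\frac{h'(t)}{h(t)} \;=\; \frac{d}{dt}\,\log h(t)
```

so instead of forming the ratio of two separately smoothed estimates of h' and h, one can estimate the derivative of log h directly, which is what the PPR subroutine is used for in the abstract.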

Maximum likelihood estimation of Logistic random effects model (로지스틱 임의선형 혼합모형의 최대우도 추정법)

  • Kim, Minah; Kyung, Minjung
    • The Korean Journal of Applied Statistics, v.30 no.6, pp.957-981, 2017
  • A generalized linear mixed model is an extension of a generalized linear model that allows random effects and provides flexibility in developing a suitable model when observations are correlated or when there are other underlying phenomena that contribute to the resulting variability. We describe maximum likelihood estimation methods for logistic regression models that include random effects: the Laplace approximation, Gauss-Hermite quadrature, adaptive Gauss-Hermite quadrature, and pseudo-likelihood. Applications are provided for social science problems by analyzing the effect of mental health and life satisfaction on volunteer activities using Korean welfare panel data; in addition, we observe that including random effects in the model leads to improved analyses with more reasonable inferences.
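
As a concrete example of one of the listed methods, the sketch below approximates the marginal log-likelihood of a single cluster in a logistic random-intercept model with (non-adaptive) Gauss-Hermite quadrature. The data, parameter values, and number of quadrature nodes are made up for illustration.

```python
# Gauss-Hermite approximation of a cluster's marginal log-likelihood in a
# logistic random-intercept model (illustrative sketch, assumed data).
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def cluster_marginal_loglik(y, x, beta, sigma, n_nodes=15):
    """log of integral over b ~ N(0, sigma^2) of prod_j Bernoulli(y_j | expit(beta0 + beta1*x_j + b))."""
    z, w = hermgauss(n_nodes)               # nodes/weights for integral of exp(-z^2) f(z) dz
    b = np.sqrt(2.0) * sigma * z             # change of variables b = sqrt(2) * sigma * z
    eta = beta[0] + beta[1] * x[:, None] + b[None, :]      # shape (n_obs, n_nodes)
    p = expit(eta)
    lik_given_b = np.prod(p ** y[:, None] * (1 - p) ** (1 - y[:, None]), axis=0)
    return np.log(np.sum(w / np.sqrt(np.pi) * lik_given_b))

rng = np.random.default_rng(3)
x = rng.normal(size=8)
y = (rng.uniform(size=8) < expit(0.5 + x)).astype(float)
print(cluster_marginal_loglik(y, x, beta=np.array([0.5, 1.0]), sigma=0.8))
```

Adaptive Gauss-Hermite quadrature refines this by recentering and rescaling the nodes around the mode of each cluster's integrand; the Laplace approximation corresponds to the one-node adaptive case.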

Two-Stage Penalized Composite Quantile Regression with Grouped Variables

  • Bang, Sungwan; Jhun, Myoungshic
    • Communications for Statistical Applications and Methods, v.20 no.4, pp.259-270, 2013
  • This paper considers a penalized composite quantile regression (CQR) that performs variable selection in a linear model with grouped variables. An adaptive sup-norm penalized CQR (ASCQR) is proposed to select variables in a grouped manner; in addition, the consistency and oracle property of the resulting estimator are derived under some regularity conditions. To improve the efficiency of estimation and variable selection, this paper suggests a two-stage penalized CQR (TSCQR), which uses the ASCQR to select relevant groups in the first stage and the adaptive lasso penalized CQR to select important variables in the second stage. Simulation studies are conducted to illustrate the finite-sample performance of the proposed methods.
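
The unpenalized building block of both stages is the composite quantile regression fit itself: one slope vector shared across several quantile levels, each level with its own intercept. The toy sketch below minimizes that objective numerically; the quantile grid, simulated data, and optimizer are illustrative choices, and no group or sup-norm penalty is included.

```python
# Toy composite quantile regression (CQR) fit, without any penalty term.
import numpy as np
from scipy.optimize import minimize

def check_loss(r, tau):
    # Quantile check loss: rho_tau(r) = r * (tau - I(r < 0)).
    return np.sum(r * (tau - (r < 0)))

def cqr_objective(theta, X, y, taus):
    K = len(taus)
    intercepts, beta = theta[:K], theta[K:]   # K intercepts, one shared slope vector
    return sum(check_loss(y - b - X @ beta, tau) for b, tau in zip(intercepts, taus))

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_t(df=3, size=n)      # heavy-tailed errors

taus = np.arange(1, 6) / 6.0                          # tau_k = k/(K+1) with K = 5
theta0 = np.zeros(len(taus) + p)
res = minimize(cqr_objective, theta0, args=(X, y, taus), method="Powell")
print("CQR slope estimate:", res.x[len(taus):])
```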

Exploring modern machine learning methods to improve causal-effect estimation

  • Kim, Yeji; Choi, Taehwa; Choi, Sangbum
    • Communications for Statistical Applications and Methods, v.29 no.2, pp.177-191, 2022
  • This paper addresses the use of machine learning methods for causal estimation of treatment effects from observational data. Even though randomized experimental trials are the gold standard for revealing potential causal relationships, observational studies are another rich source for investigating exposure effects, for example in research on the comparative effectiveness and safety of treatments, where the causal effect can be identified if the covariates contain all confounding variables. In this context, statistical regression models for the expected outcome and the probability of treatment are often imposed, and they can be combined in a clever way to yield more efficient and robust causal estimators. Recently, targeted maximum likelihood estimation and causal random forests have been proposed and extensively studied for the use of data-adaptive regression in estimating causal inference parameters. Machine learning methods are a natural choice in these settings to improve the quality of the final estimate of the treatment effect. We explore how the design and training of several machine learning algorithms can be adapted for causal inference and study their finite-sample performance through simulation experiments under various scenarios. Application to percutaneous coronary intervention (PCI) data shows that these adaptations can improve simple linear regression-based methods.
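
One representative way to combine an outcome regression with a treatment-probability model is the augmented inverse-probability-weighted (AIPW, doubly robust) estimator sketched below, with random forests plugged in for the nuisance fits. The simulated data, learner choices, and omission of cross-fitting are simplifications for illustration, not the paper's experimental setup.

```python
# Doubly robust (AIPW) estimate of an average treatment effect with ML nuisance fits.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 4))
propensity = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, propensity)                               # treatment indicator
Y = 1.0 * A + X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)    # true effect = 1

# Nuisance fits: propensity score and treatment-specific outcome regressions.
ps = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, A).predict_proba(X)[:, 1]
ps = np.clip(ps, 0.01, 0.99)                                  # avoid extreme weights
mu1 = RandomForestRegressor(random_state=0).fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = RandomForestRegressor(random_state=0).fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW estimator: outcome-regression contrast plus inverse-probability-weighted residuals.
ate = np.mean(mu1 - mu0 + A * (Y - mu1) / ps - (1 - A) * (Y - mu0) / (1 - ps))
print("AIPW ATE estimate:", round(ate, 3))
```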

Penalized least distance estimator in the multivariate regression model (다변량 선형회귀모형의 벌점화 최소거리추정에 관한 연구)

  • Shin, Jungmin; Kang, Jongkyeong; Bang, Sungwan
    • The Korean Journal of Applied Statistics, v.37 no.1, pp.1-12, 2024
  • In many real-world datasets, multiple response variables depend on the same set of explanatory variables. In particular, if several response variables are correlated with each other, simultaneous estimation that accounts for the correlation between them can be more effective than analyzing each response variable individually. In this multivariate regression setting, the least distance estimator (LDE) estimates the regression coefficients simultaneously by minimizing the distance between each training observation and its estimate in a multidimensional Euclidean space; it also provides robustness against outliers. In this paper, we examine the least distance estimation method in multivariate linear regression analysis and present a penalized least distance estimator (PLDE) for efficient variable selection. We propose the LDE technique combined with the adaptive group LASSO penalty (AGLDE), which reflects the correlation between response variables and efficiently selects variables according to the importance of the explanatory variables. The validity of the proposed method is confirmed through simulations and real data analysis.
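
For reference, the sketch below implements only the unpenalized least distance objective, i.e., the sum of Euclidean distances between each response vector and its fit, on simulated data with a few outlying rows. The adaptive group LASSO penalty that defines the proposed AGLDE is omitted, and all data and settings are made up.

```python
# Toy (unpenalized) least distance estimator for multivariate regression.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, p, q = 150, 4, 3                              # n observations, p predictors, q responses
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
E = rng.normal(size=(n, q)) + (rng.uniform(size=(n, 1)) < 0.05) * 10   # a few outlying rows
Y = X @ B_true + E

def lde_objective(b_flat, X, Y):
    B = b_flat.reshape(X.shape[1], Y.shape[1])
    # Sum over observations of the Euclidean distance between y_i and its fit.
    return np.sum(np.linalg.norm(Y - X @ B, axis=1))

res = minimize(lde_objective, np.zeros(p * q), args=(X, Y), method="BFGS")
B_lde = res.x.reshape(p, q)
print("max abs error vs true B:", np.abs(B_lde - B_true).max())
```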

Penalized variable selection in mean-variance accelerated failure time models (평균-분산 가속화 실패시간 모형에서 벌점화 변수선택)

  • Kwon, Ji Hoon; Ha, Il Do
    • The Korean Journal of Applied Statistics, v.34 no.3, pp.411-425, 2021
  • The accelerated failure time (AFT) model represents a linear relationship between the log survival time and covariates. We are interested in inference on the covariate effects that influence the variation of survival times in the AFT model; thus, we need to model the variance as well as the mean of survival times. We call the resulting model the mean and variance AFT (MV-AFT) model. In this paper, we propose a variable selection procedure for the regression parameters of the mean and variance in the MV-AFT model using a penalized likelihood function. For variable selection, we study four penalty functions: the least absolute shrinkage and selection operator (LASSO), adaptive lasso (ALASSO), smoothly clipped absolute deviation (SCAD), and hierarchical likelihood (HL). With this procedure, we can select important covariates and estimate the regression parameters at the same time. The performance of the proposed method is evaluated using simulation studies, and the method is illustrated with a clinical example dataset.
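
As a minimal illustration of the penalized-likelihood idea only (mean model, log-normal errors, LASSO penalty; the paper additionally models the variance and considers ALASSO, SCAD, and HL penalties), the following sketch fits a right-censored AFT model with an L1 penalty on made-up data. The penalty level and data-generating settings are arbitrary choices.

```python
# LASSO-penalized log-normal AFT fit with right censoring (illustrative sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n, p = 300, 8
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.8, 0.5, 0, 0, 0, 0, 0])
logT = X @ beta_true + 0.5 * rng.normal(size=n)         # log survival times
logC = rng.normal(loc=1.0, scale=1.0, size=n)           # log censoring times
y = np.minimum(logT, logC)
delta = (logT <= logC).astype(float)                    # 1 = event observed, 0 = censored

def penalized_neg_loglik(theta, lam):
    beta, log_sigma = theta[:p], theta[p]
    sigma = np.exp(log_sigma)
    z = (y - X @ beta) / sigma
    # Log-normal AFT log-likelihood: density term for events, survival term for censored.
    loglik = np.sum(delta * (norm.logpdf(z) - np.log(sigma))
                    + (1 - delta) * norm.logsf(z))
    return -loglik + lam * np.sum(np.abs(beta))          # L1 penalty on the mean model

res = minimize(penalized_neg_loglik, np.zeros(p + 1), args=(5.0,), method="Powell")
print("estimated beta:", np.round(res.x[:p], 2))
```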