• Title/Summary/Keyword: $L_1$-estimator

Adaptive L-estimation for regression slope under asymmetric error distributions

  • Han, Sang Moon
    • The Korean Journal of Applied Statistics
    • /
    • v.6 no.1
    • /
    • pp.79-93
    • /
    • 1993
  • We consider adaptive L-estimation of the slope parameter in a regression model. The proposed estimator is a simple extension of the trimmed least squares estimator of Ruppert and Carroll. Under asymmetric error distributions, the proposed estimator compares especially well in efficiency with the usual least squares estimator, the least absolute value estimator, and M-estimators designed for asymmetric distributions.
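The trimming idea behind such estimators can be sketched in a few lines. The following is an illustrative simplification that trims on preliminary least-squares residuals rather than the regression quantiles of Ruppert and Carroll, so it is a stand-in for the general idea, not the paper's actual procedure:

```python
import numpy as np

def trimmed_ls(x, y, alpha=0.1):
    """Illustrative trimmed least squares: fit OLS, drop observations
    whose residuals fall outside the central (1 - 2*alpha) band,
    then refit on the remaining points."""
    X = np.column_stack([np.ones_like(x), x])
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta0                      # preliminary residuals
    lo, hi = np.quantile(r, [alpha, 1.0 - alpha])
    keep = (r >= lo) & (r <= hi)           # trim both tails
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta                            # [intercept, slope]
```

With clean points lying on an exact line plus one large vertical outlier, the outlier has the largest residual, is trimmed, and the refit recovers the slope.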

Nonparametric Estimation in Regression Model

  • Han, Sang Moon
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.1
    • /
    • pp.15-27
    • /
    • 2001
  • A proposal is made for constructing a nonparametric estimator of slope parameters in a regression model under symmetric error distributions. The estimator is based on Johns' idea for estimating the center of a symmetric distribution, together with the ideas of regression quantiles and the regression trimmed mean. This nonparametric estimator and some other L-estimators are studied by Monte Carlo simulation.

On the Robustness of $L_1$-estimator in Linear Regression Models

  • Bu-Yong Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.2 no.2
    • /
    • pp.277-287
    • /
    • 1995
  • It is well known that the $L_1$-estimator is robust with respect to vertical outliers in regression data, even though it is susceptible to bad leverage points. This article is concerned with the robustness of the $L_1$-estimator. To investigate its robustness against vertical outliers, we find intervals for the value of the response variable within which the $L_1$-estimates do not change. A procedure for constructing those intervals in multiple linear regression is illustrated in the sensitivity-analysis context. A vertical breakdown point of the $L_1$-estimator is then defined on the basis of properties of those intervals.
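The vertical-robustness property is easiest to see in the location model, where the $L_1$-estimate reduces to the sample median. The toy check below (not the paper's interval construction for multiple regression) shows a response value moving freely within its interval without changing the estimate:

```python
import numpy as np

# In the location model the L1-estimate is the sample median, so any
# observation above the median may move arbitrarily far upward without
# changing the estimate -- the simplest instance of the response-value
# intervals studied in the paper.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
assert np.median(y) == 3.0
y[4] = 1000.0                # push one observation far away (vertical outlier)
assert np.median(y) == 3.0   # the L1-estimate does not change
```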

L-Estimation for the Parameter of the AR(1) Model

  • Han Sang Moon;Jung Byoung Cheal
    • The Korean Journal of Applied Statistics
    • /
    • v.18 no.1
    • /
    • pp.43-56
    • /
    • 2005
  • In this study, a robust estimation method for the first-order autocorrelation coefficient of a time series following an AR(1) process with additive outliers (AO) is investigated. We propose an L-type trimmed least squares estimation method using the preliminary estimator (PE) suggested by Ruppert and Carroll (1980) for the multiple regression model. In addition, using Mallows' weight function to down-weight outliers on the X-axis, the bounded-influence PE (BIPE) estimator is obtained, and the mean squared error (MSE) performance of various estimators of the autocorrelation coefficient is compared in Monte Carlo experiments. The results show that the BIPE(LAD) estimator, which uses the generalized LAD as the preliminary estimator, performs well relative to the other estimators.

A Robust Estimation Procedure for the Linear Regression Model

  • Kim, Bu-Yong
    • Journal of the Korean Statistical Society
    • /
    • v.16 no.2
    • /
    • pp.80-91
    • /
    • 1987
  • Minimum $L_1$ norm estimation is a robust procedure in the sense that it leads to an estimator with greater statistical efficiency than the least squares estimator in the presence of outliers. The $L_1$ norm estimator also has some desirable statistical properties. In this paper a new computational procedure for $L_1$ norm estimation is proposed which combines the idea of the reweighted least squares method with the linear programming approach. A modification of the projective transformation method is employed to solve the linear programming problem instead of the simplex method. It is proved that the proposed algorithm terminates in a finite number of iterations.
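The reweighted least squares half of that combination can be sketched as follows. This is a generic IRLS approximation to the $L_1$ fit, not the paper's projective-transformation algorithm:

```python
import numpy as np

def l1_irls(X, y, iters=200, eps=1e-6):
    """Generic iteratively reweighted least squares for L1 regression:
    weighting each observation by 1/|residual| turns weighted least
    squares into an approximation of the L1 criterion."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS start
    for _ in range(iters):
        # sqrt of the weights 1/max(|r|, eps); eps guards division by zero
        w = np.sqrt(1.0 / np.maximum(np.abs(y - X @ beta), eps))
        beta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return beta
```

On data where most points lie exactly on a line and one carries a gross vertical error, the iteration settles near the line, which is the $L_1$ optimum.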

A Note on $L_1$ Strongly Consistent Wavelet Density Estimator for the Deconvolution Problems

  • Lee, Sungho
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.3
    • /
    • pp.859-866
    • /
    • 2001
  • The problem of wavelet density estimation is studied when the sample observations are contaminated with random noise. In this paper a linear wavelet estimator based on Meyer-type wavelets is shown to be $L_1$ strongly consistent for f(x) with bounded support when the Fourier transform of the random noise has polynomial or exponential decay.

The Doubly Regularized Quantile Regression

  • Choi, Ho-Sik;Kim, Yong-Dai
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.5
    • /
    • pp.753-764
    • /
    • 2008
  • The $L_1$ regularized estimator in quantile regression performs parameter estimation and model selection simultaneously and has been shown to enjoy good performance. However, the $L_1$ regularized estimator has a drawback: when there are several highly correlated variables, it tends to pick only a few of them. To make up for this, the proposed method adopts a doubly regularized framework with a mixture of $L_1$ and $L_2$ norms. As a result, the proposed method can select significant variables and encourages highly correlated variables to be selected together. One of the most appealing features of the new algorithm is that it constructs the entire solution path of the doubly regularized quantile estimator. We investigate its performance through simulations and real data analysis.
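The criterion being minimized can be written down directly. The sketch below spells out the check loss with the mixed $L_1$/$L_2$ penalty; the solution-path algorithm itself is beyond a few lines, and the function names here are illustrative, not from the paper:

```python
import numpy as np

def check_loss(r, tau):
    """Quantile (check) loss: tau*r for r >= 0, (tau-1)*r otherwise."""
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def doubly_regularized_objective(beta, X, y, tau, lam1, lam2):
    """Quantile loss plus a mixture of L1 and L2 penalties --
    the doubly regularized criterion described in the abstract."""
    r = y - X @ beta
    return (check_loss(r, tau).mean()
            + lam1 * np.abs(beta).sum()       # L1 part: sparsity
            + lam2 * np.square(beta).sum())   # L2 part: groups correlated variables
```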

On the Bivariate Dichotomous Choice Model

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society
    • /
    • v.14 no.1
    • /
    • pp.7-17
    • /
    • 1985
  • Data sets generated by the bivariate dichotomous choices made by individuals often occur in practice. This paper presents a general model of how such data sets are generated, as well as methods of estimation. The M.L.E. is examined and found to be computationally burdensome. A simpler estimator, the bivariate dichotomous two-stage estimator, is suggested as an alternative. The two-stage estimator is found to be as efficient as the M.L.E.

Convergence Properties of a Spectral Density Estimator

  • Gyeong Hye Shin;Hae Kyung Kim
    • Communications for Statistical Applications and Methods
    • /
    • v.3 no.3
    • /
    • pp.271-282
    • /
    • 1996
  • This paper deals with the estimation of the power spectral density function of a time series. A kernel estimator based on a local average is defined, and the rates of convergence of the pointwise, $L_2$-norm, and $L_\infty$-norm errors of the estimator are investigated, restricting attention to kernels satisfying suitable assumptions. Under appropriate regularity conditions, it is shown that the optimal rate of convergence is $N^{-r}$ in both the pointwise and $L_2$-norm senses, while $N^{r-1}(\log N)^{-r}$ is the optimal rate in the $L_\infty$-norm. Some examples are given to illustrate the application of the main results.
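A local-average spectral estimator of this kind can be sketched as a smoothed periodogram. The Daniell (flat) kernel and FFT details below are assumptions for illustration, not the paper's kernel class:

```python
import numpy as np

def smoothed_periodogram(x, m=3):
    """Daniell-type kernel estimate of the spectral density: average
    the raw periodogram over the 2m+1 nearest Fourier frequencies
    (a local average, wrapping around circularly)."""
    n = len(x)
    per = np.abs(np.fft.fft(x - x.mean())) ** 2 / (2 * np.pi * n)
    kernel = np.full(2 * m + 1, 1.0 / (2 * m + 1))
    padded = np.concatenate([per[-m:], per, per[:m]])  # circular padding
    return np.convolve(padded, kernel, mode="valid")
```

Because the kernel weights sum to one and the average wraps circularly, the estimate preserves the mean of the periodogram, which by Parseval's identity equals the (biased) sample variance divided by $2\pi$.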

The Minimum Squared Distance Estimator and the Minimum Density Power Divergence Estimator

  • Pak, Ro-Jin
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.6
    • /
    • pp.989-995
    • /
    • 2009
  • Basu et al. (1998) proposed a minimum divergence estimation method that is free from the cumbersome kernel density estimator. Their proposed class of density power divergences is indexed by a single parameter $\alpha$ which controls the trade-off between robustness and efficiency. In this article, (1) we introduce a new large class, the minimum squared distance, which ranges from the minimum Hellinger distance to the minimum $L_2$ distance, and we show that under certain conditions the minimum density power divergence estimator (MDPDE) and the minimum squared distance estimator (MSDE) are asymptotically equivalent; and (2) in finite samples the MDPDE performs better than the MSDE in general, but there are some cases where the MSDE performs better when estimating a location parameter or a proportion of mixed distributions.
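For intuition about the role of $\alpha$: for a normal location with known scale, the MDPDE reduces to a weighted-mean fixed point. The sketch below is that standard reduction, not the article's comparison code:

```python
import numpy as np

def mdpde_location(x, alpha=0.5, sigma=1.0, iters=100):
    """MDPDE of a normal location with known scale via fixed-point
    iteration: observations are weighted by exp(-alpha * z**2 / 2),
    so alpha -> 0 recovers the sample mean (the MLE), while larger
    alpha trades efficiency for robustness by down-weighting outliers."""
    mu = np.median(x)                      # robust starting value
    for _ in range(iters):
        z = (x - mu) / sigma
        w = np.exp(-alpha * z ** 2 / 2)    # DPD weights
        mu = np.sum(w * x) / np.sum(w)     # weighted-mean update
    return mu
```

With a symmetric core of observations and one gross outlier, the outlier's weight is effectively zero and the estimate sits at the center of the core, whereas the sample mean is dragged far away.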