• Title/Summary/Keyword: local linear estimator

Search results: 26

Sparse Design Problem in Local Linear Quasi-likelihood Estimator (국소선형 준가능도 추정량의 자료 희박성 문제 해결방안)

  • Park, Dong-Ryeon
    • The Korean Journal of Applied Statistics / v.20 no.1 / pp.133-145 / 2007
  • The local linear estimator has a number of advantages over traditional kernel estimators, one of them being better performance near boundaries. However, the local linear estimator can produce erratic results in regions where the realized design points are sparse, and much research has been done to solve this problem. The local linear quasi-likelihood estimator shares many properties with the local linear estimator, and it turns out that a sparse design can also lead the local linear quasi-likelihood estimator to erratic behavior in practice. Several methods to solve this problem are proposed, and their finite-sample properties are compared in a simulation study.
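The local linear estimator discussed above is, concretely, a kernel-weighted least squares line fitted at each evaluation point, with the intercept of that line serving as the fitted value. A minimal sketch, where the Gaussian kernel and all names are illustrative assumptions rather than anything from the paper:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0]: fit a weighted straight line
    centered at x0 and return its intercept (the fitted value at x0)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design: intercept, slope
    A = X.T @ (w[:, None] * X)
    # In a sparse region all weights are near zero and A is nearly singular,
    # which is exactly the erratic behavior the abstract refers to.
    return np.linalg.solve(A, X.T @ (w * y))[0]
```

Because the local fit is an exact line, the estimator reproduces a linear trend without bias even at the boundary of the design.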

The local influence of LIU type estimator in linear mixed model

  • Zhang, Lili;Baek, Jangsun
    • Journal of the Korean Data and Information Science Society / v.26 no.2 / pp.465-474 / 2015
  • In this paper, we study the local influence analysis of the LIU-type estimator in linear mixed models. Using the method proposed by Shi (1997), the local influence of the LIU-type estimator is investigated under three disturbance models. Furthermore, we give the generalized Cook's distance to assess the influence and illustrate the efficiency of the proposed method with an example.

Shifted Nadaraya Watson Estimator

  • Chung, Sung-S.
    • Communications for Statistical Applications and Methods / v.4 no.3 / pp.881-890 / 1997
  • The local linear estimator usually has more attractive properties than the Nadaraya-Watson estimator, but it performs badly where data are sparse. Muller and Song proposed the shifted Nadaraya-Watson estimator, which handles data sparsity well. We show through a simulation study that the shifted Nadaraya-Watson estimator performs well not only in sparse regions but also in dense regions. We also suggest a boundary treatment for the shifted Nadaraya-Watson estimator.
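For contrast with the local linear fit, the Nadaraya-Watson estimator is simply a kernel-weighted average of the responses, which stays well defined under sparse designs but is biased where the kernel window is one-sided. A minimal sketch (the Gaussian kernel is an assumption; the shifted variant of Muller and Song is not reproduced here):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson estimate of E[y | x = x0]: a kernel-weighted average.
    Well defined whenever any weight is positive, but biased near boundaries
    where the effective kernel window becomes one-sided."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)
```

On a linear trend this estimator is exact at interior points with a symmetric design but overshoots at the left boundary, which is the boundary problem the shifted version is designed to treat.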


Modified Local Density Estimation for the Log-Linear Density

  • Pak, Ro-Jin
    • Communications for Statistical Applications and Methods / v.7 no.1 / pp.13-22 / 2000
  • We consider the local likelihood method with a smoothed version of the model density instead of the original model density. For simplicity, the model is assumed to be a log-linear density; we then show that the proposed local density estimator is less affected by changes among observations, although its bias increases slightly more than that of the currently used local density estimator. Hence, by using the existing method and the proposed method together in a proper way, we can derive a local density estimator that fits the data better.


Local Influence of the Quasi-likelihood Estimators in Generalized Linear Models

  • Jung, Kang-Mo
    • Communications for Statistical Applications and Methods / v.14 no.1 / pp.229-239 / 2007
  • We present a diagnostic method for the quasi-likelihood estimators in generalized linear models. Since these estimators are usually obtained by iteratively reweighted least squares, which is well known to be very sensitive to unusual data, a diagnostic step is indispensable in data analysis. We extend the local influence approach based on the maximum likelihood function to the quasi-likelihood function. Local influence diagnostics are derived under several perturbation schemes. An illustrative example is given, and we compare the results provided by local influence and case deletion.
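The iteratively reweighted least squares (IRLS) scheme mentioned above can be sketched for a Poisson-type quasi-likelihood with a log link; the model choice, starting value, and iteration count here are illustrative assumptions, not the paper's:

```python
import numpy as np

def irls_poisson(X, y, n_iter=25):
    """Fisher scoring / IRLS for a log-link Poisson-type model.
    Each step solves a weighted least squares problem, so a single unusual
    observation can move every iterate -- the sensitivity that motivates
    the diagnostic method in the abstract."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)               # mean under the log link
        w = mu                         # working weights (variance function = mu)
        z = eta + (y - mu) / mu        # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta
```

Each iteration is exactly a weighted least squares fit of the working response z on X, which is why least-squares diagnostics carry over to the quasi-likelihood setting.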

A Local Linear Kernel Estimator for Sparse Multinomial Data

  • Baek, Jangsun
    • Journal of the Korean Statistical Society / v.27 no.4 / pp.515-529 / 1998
  • Burman (1987) and Hall and Titterington (1987) studied kernel smoothing for sparse multinomial data in detail. Both of their estimators of the cell probabilities are sparse-asymptotically consistent under some restrictive conditions on the true cell probabilities. Dong and Simonoff (1994) adopted boundary kernels to relax these restrictive conditions. We propose a local linear kernel estimator, popular in nonparametric regression, to estimate the cell probabilities. No boundary adjustment is necessary for this estimator, since it adapts automatically to estimation at the boundaries. It is shown that our estimator attains the optimal rate of convergence in mean sum of squared error under sparseness. Some simulation results and a real data application are presented to assess the performance of the estimator.


On Marginal Integration Method in Nonparametric Regression

  • Lee, Young-Kyung
    • Journal of the Korean Statistical Society / v.33 no.4 / pp.435-447 / 2004
  • In additive nonparametric regression, Linton and Nielsen (1995) showed that marginal integration applied to the local linear smoother produces a rate-optimal estimator of each univariate component function when the dimension of the predictor is two. In this paper, we give new formulas for the bias and variance of marginal integration regression estimators that are valid in boundary areas as well as at fixed interior points, and we show that the local linear marginal integration estimator is in fact rate-optimal when the dimension of the predictor is at most four. We extend the results to the local polynomial smoother as well.
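The marginal integration idea is to fit a full bivariate local linear smoother and then average it over the empirical distribution of the nuisance coordinate, which recovers one additive component up to a constant. A sketch under assumed names, with a product Gaussian kernel as an illustrative choice:

```python
import numpy as np

def bivariate_local_linear(x0, X, y, h):
    # Local linear fit at x0 = (u, v) with a product Gaussian kernel;
    # the intercept of the fitted local plane is the estimate at x0.
    d = (X - x0) / h
    w = np.exp(-0.5 * (d ** 2).sum(axis=1))
    Z = np.column_stack([np.ones(len(X)), X[:, 0] - x0[0], X[:, 1] - x0[1]])
    return np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * y))[0]

def marginal_integration(u, X, y, h):
    # Average the bivariate fit over the observed second coordinate:
    # this isolates the first additive component up to an additive constant.
    return np.mean([bivariate_local_linear(np.array([u, v]), X, y, h)
                    for v in X[:, 1]])
```

Since the component is only identified up to a constant, differences of the estimate at two points are the natural quantity to examine.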

Penalized rank regression estimator with the smoothly clipped absolute deviation function

  • Park, Jong-Tae;Jung, Kang-Mo
    • Communications for Statistical Applications and Methods / v.24 no.6 / pp.673-683 / 2017
  • The least absolute shrinkage and selection operator (LASSO) has been a popular regression estimator with simultaneous variable selection. However, LASSO does not have the oracle property, and a robust version is needed in the case of heavy-tailed errors or serious outliers. We propose a robust penalized regression estimator that provides simultaneous variable selection and estimation. It is based on rank regression and a non-convex penalty function, the smoothly clipped absolute deviation (SCAD) penalty, which has the oracle property. The proposed method combines the robustness of rank regression with the oracle property of the SCAD penalty. We develop an efficient algorithm to compute the proposed estimator that includes a SCAD estimate based on the local linear approximation and the tuning parameter of the penalty function. Our estimate can be obtained by the least absolute deviation method. We use an optimal tuning parameter based on the Bayesian information criterion and cross-validation. Numerical simulations show that the proposed estimator is robust and effective for analyzing contaminated data.
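The SCAD penalty named above has a standard closed form (Fan and Li, 2001), and its derivative supplies the weights in the local linear approximation (LLA) step. A generic sketch of both, not code from the paper; the default a = 3.7 is the conventional choice:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty p_lambda(|t|): linear near zero (like LASSO),
    quadratic transition, then constant so large coefficients are unpenalized."""
    t = np.abs(t)
    quad = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t,
           np.where(t <= a * lam, quad, (a + 1) * lam ** 2 / 2))

def scad_derivative(t, lam, a=3.7):
    """p'_lambda(|t|): the per-coefficient weight used by the LLA algorithm,
    which replaces the non-convex penalty with a weighted L1 penalty."""
    t = np.abs(t)
    return np.where(t <= lam, lam,
           np.where(t <= a * lam, (a * lam - t) / (a - 1), 0.0))
```

The vanishing derivative for |t| > aλ is what removes the shrinkage bias on large coefficients and yields the oracle property the abstract refers to.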

Multiple Structural Change-Point Estimation in Linear Regression Models

  • Kim, Jae-Hee
    • Communications for Statistical Applications and Methods / v.19 no.3 / pp.423-432 / 2012
  • This paper is concerned with the detection of multiple change-points in linear regression models. The proposed procedure relies on local estimation for global change-point estimation. We propose a multiple change-point estimator based on local least squares estimators of the regression coefficients and a split measure, for the case where the number of change-points is unknown. Its statistical properties are established, and its performance is assessed by simulations and real data applications.
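The split-measure idea can be illustrated with a single change-point version: fit separate least-squares lines on the two sides of each candidate split and keep the split minimizing the total residual sum of squares. This is a generic sketch rather than the paper's estimator; the multiple change-point case would apply such a search repeatedly:

```python
import numpy as np

def best_split(x, y, min_seg=3):
    """Return the index k that best splits (x, y) into two linear segments,
    measured by the combined residual sum of squares of the two local fits."""
    def sse(xi, yi):
        # Local least squares line for one segment; return its residual SS.
        Z = np.column_stack([np.ones_like(xi), xi])
        b, *_ = np.linalg.lstsq(Z, yi, rcond=None)
        r = yi - Z @ b
        return r @ r
    best, best_k = np.inf, None
    for k in range(min_seg, len(x) - min_seg):
        total = sse(x[:k], y[:k]) + sse(x[k:], y[k:])
        if total < best:
            best, best_k = total, k
    return best_k
```

For data with a clean level shift, the minimizing split lands exactly at the jump, since both local fits then have zero residuals.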

On Convex Combination of Local Constant Regression

  • Mun, Jung-Won;Kim, Choong-Rak
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.379-387 / 2006
  • Local polynomial regression is widely used because of good properties such as adaptation to various types of designs, the absence of boundary effects, and minimax efficiency. Choi and Hall (1998) proposed an estimator of the regression function using a convex combination idea. They showed that a convex combination of three local linear estimators produces an estimator with the same order of bias as a local cubic smoother. In this paper, we suggest another estimator of the regression function based on a convex combination of five local constant estimates. It turns out that this estimator also has the same order of bias as a local cubic smoother.