• Title/Summary/Keyword: heteroscedastic regression

Search Results: 19

A study on robust regression estimators in heteroscedastic error models

  • Son, Nayeong; Kim, Mijeong
    • Journal of the Korean Data and Information Science Society / v.28 no.5 / pp.1191-1204 / 2017
  • Weighted least squares (WLS) estimation is widely used for data with heteroscedastic errors because it is intuitive and computationally inexpensive. However, the WLS estimator is not robust to outliers and can be inefficient. To overcome these robustness problems, the Box-Cox transformation, Huber's M estimation, bisquare estimation, and Yohai's MM estimation have been proposed. Estimators more efficient than WLS have also been suggested for heteroscedastic error models, such as Bayesian methods (Cepeda and Achcar, 2009) and semiparametric methods (Kim and Ma, 2012). Recently, Çelik (2015) proposed weighting methods applicable to heteroscedasticity patterns including butterfly-distributed and megaphone-shaped residuals. In this paper, we review heteroscedastic regression estimators related to robust or efficient estimation and describe their properties. We also analyze the 1955 cost data of U.S. electricity producers using the methods discussed in the paper.
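
The closed-form WLS estimator this abstract starts from is easy to sketch. The NumPy example below contrasts WLS with OLS on heteroscedastic data; the simulated data and the assumption that the weights (inverse error variances) are known are my own illustrative choices.

```python
# Minimal weighted least squares (WLS) sketch -- illustrative only; in practice
# the weights w_i = 1/sigma_i^2 would have to be estimated, not assumed known.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
sigma = 0.5 + 0.3 * x                       # error s.d. grows with x (heteroscedastic)
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
w = 1.0 / sigma**2                          # WLS weights (inverse variances)

# Closed-form WLS estimator: beta = (X' W X)^{-1} X' W y
XtW = X.T * w
beta_wls = np.linalg.solve(XtW @ X, XtW @ y)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # OLS for comparison
print("WLS:", beta_wls, "OLS:", beta_ols)
```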

Weighted Support Vector Machines for Heteroscedastic Regression

  • Park, Hye-Jung; Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.17 no.2 / pp.467-474 / 2006
  • In this paper we present a weighted support vector machine (SVM) and a weighted least squares support vector machine (LS-SVM) for prediction in the heteroscedastic regression model. By adding weights to the standard SVM and LS-SVM, better fitting can be achieved when the errors are heteroscedastic. In numerical studies, we illustrate the prediction performance of the proposed procedure by comparing it with a procedure that combines the standard SVM and LS-SVM with the wild bootstrap.
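
A hedged sketch of what a weighted LS-SVM regressor looks like: the standard LS-SVM dual system with a per-observation weight v_i on the regularization term. The RBF kernel, the weight choice v_i = 1/sigma_i^2, and all parameter values below are illustrative assumptions, not the authors' exact formulation.

```python
# Weighted LS-SVM regression sketch (RBF kernel), solving the usual dual system
# [[0, 1'], [1, K + diag(1/(gamma*v))]] [b; alpha] = [0; y].
import numpy as np

def rbf_kernel(A, B, bw=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

def weighted_lssvm_fit(X, y, v, gamma=10.0, bw=1.0):
    n = len(y)
    K = rbf_kernel(X, X, bw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                  # bias b, dual coefficients alpha

def lssvm_predict(Xnew, X, alpha, b, bw=1.0):
    return rbf_kernel(Xnew, X, bw) @ alpha + b

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (100, 1))
sigma = 0.1 + 0.3 * np.abs(X[:, 0])         # heteroscedastic noise level
y = np.sin(X[:, 0]) + sigma * rng.standard_normal(100)
v = 1.0 / sigma**2                          # down-weight the noisier observations
b, alpha = weighted_lssvm_fit(X, y, v)
print(lssvm_predict(X[:5], X, alpha, b))
```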

Preliminary Detection for ARCH-type Heteroscedasticity in a Nonparametric Time Series Regression Model

  • Hwang, S. Y.; Park, Cheolyong; Kim, Tae Yoon; Park, Byeong U.; Lee, Y. K.
    • Journal of the Korean Statistical Society / v.34 no.2 / pp.161-172 / 2005
  • In this paper a nonparametric method is proposed for detecting conditionally heteroscedastic errors in a nonparametric time series regression model where the observation points are equally spaced on [0, 1]. It turns out that the first-order sample autocorrelation of the squared residuals from the kernel regression estimates provides the essential information. An illustrative simulation study is presented for diverse error processes such as ARCH(1), GARCH(1,1), and threshold-ARCH(1) models.
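
The detection idea described above (lag-1 autocorrelation of squared residuals from a kernel fit) can be sketched as follows; the Nadaraya-Watson fit, bandwidth, and ARCH(1) data-generating choices are assumptions for illustration only.

```python
# Fit a kernel regression on an equally spaced design, then inspect the lag-1
# sample autocorrelation of the squared residuals as a heteroscedasticity signal.
import numpy as np

rng = np.random.default_rng(2)
n = 400
t = np.arange(1, n + 1) / n                         # equally spaced design on [0, 1]

# ARCH(1)-type errors: e_i = sqrt(0.2 + 0.7*e_{i-1}^2) * z_i
e = np.zeros(n)
z = rng.standard_normal(n)
for i in range(1, n):
    e[i] = np.sqrt(0.2 + 0.7 * e[i - 1] ** 2) * z[i]
y = np.sin(2 * np.pi * t) + e

def nw_fit(t, y, h=0.05):
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)   # Gaussian kernel weights
    return (w * y[None, :]).sum(1) / w.sum(1)

resid2 = (y - nw_fit(t, y)) ** 2
u = resid2 - resid2.mean()
rho1 = (u[1:] * u[:-1]).sum() / (u * u).sum()       # lag-1 autocorrelation of squared residuals
print("lag-1 autocorrelation of squared residuals:", rho1)
```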

Characteristics of Iλ-optimality Criterion compared to the D- and Heteroscedastic G-optimality with respect to Simple Linear and Quadratic Regression

  • Kim, Yeong-Il
    • Journal of Korean Society for Quality Management / v.21 no.2 / pp.140-155 / 1993
  • The characteristics of $I_{\lambda}$-optimality, one of the linear criteria suggested by Fedorov (1972), are investigated with respect to D- and heteroscedastic G-optimality in the case of a non-constant variance function. Although the results are limited to simple models, we may conclude that $I_{\lambda}$-optimality is sometimes preferred to the heteroscedastic G-optimality newly suggested by Wong and Cook (1992), in the sense that the experimenter's belief about the weighting function enters the $I_{\lambda}$-optimality criterion, not to mention its computational simplicity.
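
A toy numerical sketch of the D- and $I_{\lambda}$-criteria for simple linear regression with a non-constant efficiency function (the heteroscedastic G-criterion is omitted). The efficiency function w(x), the experimenter weight λ(x), and the candidate designs are my own illustrative choices, not those of the paper.

```python
# D-criterion: det M(xi).  I_lambda-criterion: integral of the predicted
# variance f(x)' M^{-1} f(x) against the experimenter's weight lambda(x).
import numpy as np

def info_matrix(points, masses, w):
    """M(xi) = sum_i masses_i * w(x_i) * f(x_i) f(x_i)',  with f(x) = (1, x)."""
    M = np.zeros((2, 2))
    for x, m in zip(points, masses):
        f = np.array([1.0, x])
        M += m * w(x) * np.outer(f, f)
    return M

w = lambda x: 1.0 / (1.0 + x**2)            # assumed efficiency (inverse variance) function
lam = lambda x: 1.0                         # uniform experimenter weight on [-1, 1]
grid = np.linspace(-1, 1, 201)

def criteria(points, masses):
    M = info_matrix(points, masses, w)
    Minv = np.linalg.inv(M)
    var = np.array([np.array([1, x]) @ Minv @ np.array([1, x]) for x in grid])
    I_lam = np.mean(var * lam(grid)) * (grid[-1] - grid[0])   # smaller is better
    return np.linalg.det(M), I_lam                            # larger det is better

print(criteria([-1.0, 1.0], [0.5, 0.5]))                      # two-point design
print(criteria([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3]))            # three-point design
```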

A kernel machine for estimation of mean and volatility functions

  • Shim, Joo-Yong; Park, Hye-Jung; Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.20 no.5 / pp.905-912 / 2009
  • We propose a doubly penalized kernel machine (DPKM) which uses a heteroscedastic location-scale model as its basic model and estimates both the mean and volatility functions simultaneously by kernel machines. We also present a model selection method which employs generalized approximate cross validation to choose the hyperparameters that affect the performance of the DPKM. Artificial examples are provided to indicate the usefulness of the DPKM for estimating the mean and volatility functions.
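
This is not the authors' DPKM, but a minimal two-stage sketch of the same general idea: estimate the mean function with one kernel machine and the volatility (log-variance) function with another, fitted to the log squared residuals. The kernels, penalties, and simulated data are assumptions.

```python
# Two-stage kernel ridge sketch of a heteroscedastic location-scale model.
import numpy as np

def rbf(A, B, bw=0.5):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * bw**2))

def kernel_ridge(x, y, lam=1e-2, bw=0.5):
    K = rbf(x, x, bw)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return lambda xnew: rbf(xnew, x, bw) @ alpha

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 4, 300))
true_sd = 0.1 + 0.2 * x
y = np.sin(2 * x) + true_sd * rng.standard_normal(300)

mean_hat = kernel_ridge(x, y)                          # stage 1: mean function
log_r2 = np.log((y - mean_hat(x)) ** 2 + 1e-8)
vol_hat = kernel_ridge(x, log_r2)                      # stage 2: log-variance function
sd_hat = np.exp(0.5 * vol_hat(x))
print(sd_hat[:5], true_sd[:5])
```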

Nonparametric Estimation of Discontinuous Variance Function in Regression Model

  • Kang, Kee-Hoon; Huh, Jib
    • Proceedings of the Korean Statistical Society Conference / 2002.11a / pp.103-108 / 2002
  • We consider estimation of a discontinuous variance function in a nonparametric heteroscedastic random design regression model. We first propose estimators of the change point and the jump size of the variance function and then construct an estimator of the entire variance function. We examine the rates of convergence of these estimators and give results on their asymptotics. Numerical work reveals that change point analysis is quite effective in variance function estimation.
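
A rough sketch of the change-point idea: smooth the squared residuals and pick the location that maximizes the jump between the left and right local means. The running-mean regression fit, window sizes, and simulated design are illustrative assumptions, not the authors' estimator.

```python
# Locate a jump in the variance function from smoothed squared residuals.
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = np.sort(rng.uniform(0, 1, n))                   # random design on [0, 1]
sd = np.where(x < 0.6, 0.2, 0.6)                    # variance jumps at x = 0.6
y = np.sin(2 * np.pi * x) + sd * rng.standard_normal(n)

# crude residuals from a running-mean fit of the regression function
k = 15
mean_hat = np.convolve(y, np.ones(k) / k, mode="same")
r2 = (y - mean_hat) ** 2

def jump_at(c, h=0.1):
    left = r2[(x >= c - h) & (x < c)].mean()
    right = r2[(x >= c) & (x < c + h)].mean()
    return right - left, left, right

cand = np.linspace(0.15, 0.85, 141)
jumps = np.array([jump_at(c)[0] for c in cand])
c_hat = cand[np.argmax(np.abs(jumps))]
jump_hat, s2_left, s2_right = jump_at(c_hat)
print("change point ~", c_hat, "local variances ~", s2_left, s2_right)
```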

Asymptotic Normality of Wavelet Estimator of Regression Function under NA Assumptions

  • Liang, Han-Ying; Qi, Yan-Yan
    • Bulletin of the Korean Mathematical Society / v.44 no.2 / pp.247-257 / 2007
  • Consider the heteroscedastic regression model $Y_i = g(x_i) + \sigma_i\epsilon_i$ $(1 \le i \le n)$, where $\sigma^2_i = f(u_i)$, the design points $(x_i, u_i)$ are known and nonrandom, and g and f are unknown functions defined on the closed interval [0, 1]. Assuming the random errors $\epsilon_i$ form a sequence of negatively associated (NA) random variables, we study the asymptotic normality of wavelet estimators of g when f is a known or unknown function.
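
A minimal sketch of a wavelet regression estimator on a dyadic, equally spaced design, written with a plain Haar transform and soft thresholding in NumPy so no wavelet library is required; the threshold rule and data-generating choices are my own assumptions, not the paper's estimator.

```python
# Haar wavelet regression: transform the responses, soft-threshold the detail
# coefficients, and invert the transform to estimate g.
import numpy as np

def haar_forward(v):
    coeffs, a = [], v.copy()
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
        a = s
    return a, coeffs

def haar_inverse(a, coeffs):
    for d in reversed(coeffs):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(5)
n = 256                                              # dyadic sample size
x = np.arange(n) / n
sigma = 0.2 + 0.4 * x                                # heteroscedastic error scale
y = np.sign(x - 0.5) + sigma * rng.standard_normal(n)   # g has a jump at 0.5

a, coeffs = haar_forward(y)
thr = np.median(np.abs(coeffs[0])) / 0.6745 * np.sqrt(2 * np.log(n))   # universal threshold
coeffs = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in coeffs]  # soft threshold
g_hat = haar_inverse(a, coeffs)
print(g_hat[:5])
```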

Estimation of Interval Censored Regression Spline Model with Variance Function

  • Joo, Yong-Sung; Lee, Keun-Baik; Jung, Hyeng-Joo
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1247-1253 / 2008
  • In this paper, we propose an interval censored regression spline model with a variance function (non-constant variance that depends on a predictor). Simulation studies show that our estimates from the MCECM algorithm are consistent, but biased when the sample size is small because of boundary effects. We also examine how the distribution of $x_i$ affects the convergence speed of these consistent estimates.
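
A sketch of the underlying model class only (a regression spline plus a variance function of the predictor), ignoring the interval censoring and the MCECM algorithm the paper actually uses; the knots, log-linear variance form, and data are illustrative assumptions.

```python
# Cubic regression spline for the mean plus a log-linear variance function,
# fitted by alternating weighted least squares steps.
import numpy as np

rng = np.random.default_rng(7)
n = 300
x = np.sort(rng.uniform(0, 1, n))
sd = np.exp(-1.0 + 1.5 * x)                          # log-linear variance function
y = np.sin(3 * x) + 2 * x + sd * rng.standard_normal(n)

# cubic regression spline basis via truncated powers
knots = np.array([0.25, 0.5, 0.75])
B = np.column_stack([np.ones(n), x, x**2, x**3] +
                    [np.maximum(x - k, 0.0) ** 3 for k in knots])

beta = np.linalg.lstsq(B, y, rcond=None)[0]          # start with an unweighted fit
for _ in range(5):                                   # alternate mean and variance fits
    r2 = (y - B @ beta) ** 2
    Z = np.column_stack([np.ones(n), x])
    gam = np.linalg.lstsq(Z, np.log(r2 + 1e-8), rcond=None)[0]   # log-variance ~ a + b*x
    w = np.exp(-(Z @ gam))                           # weights = 1 / estimated variance
    beta = np.linalg.lstsq(B * w[:, None] ** 0.5, y * w ** 0.5, rcond=None)[0]
print("variance-function coefficients (a, b):", gam)
```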

A Study on Support Vectors of Least Squares Support Vector Machine

  • Seok, Kyungha; Cho, Daehyun
    • Communications for Statistical Applications and Methods / v.10 no.3 / pp.873-878 / 2003
  • The LS-SVM (least squares support vector machine) has been used as a promising method for regression as well as classification. Suykens et al. (2000) used only the magnitude of the residuals to obtain support vectors (SVs). Suykens' method behaves well for a model with homogeneous (homoscedastic) errors, but in a heteroscedastic model it performs poorly. The present paper proposes a new method for obtaining SVs which uses the noise variance as well as the magnitude of the residuals. Through a simulation study we demonstrate the advantage of the proposed method.
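
A sketch of the selection idea: after an LS-SVM-style fit, rank observations by residuals standardized with a local noise-variance estimate rather than by raw magnitude. The ridge-form fit (no bias term), window size, and cutoffs below are illustrative assumptions, not the paper's exact procedure.

```python
# Compare SV sets chosen by raw residual magnitude versus residuals standardized
# by a local noise-variance estimate.
import numpy as np

def rbf(A, B, bw=0.5):
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * bw**2))

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0, 4, 200))
sigma = 0.1 + 0.3 * x
y = np.sin(2 * x) + sigma * rng.standard_normal(200)

# LS-SVM in ridge form: alpha from (K + I/gamma) alpha = y
gamma, bw = 10.0, 0.5
K = rbf(x, x, bw)
alpha = np.linalg.solve(K + np.eye(len(x)) / gamma, y)
resid = y - K @ alpha

# local noise variance from a running mean of squared residuals
k = 21
var_hat = np.convolve(resid**2, np.ones(k) / k, mode="same")

sv_raw = np.argsort(-np.abs(resid))[:20]                      # magnitude only
sv_std = np.argsort(-np.abs(resid) / np.sqrt(var_hat))[:20]   # standardized by noise level
print("overlap of the two SV sets:", len(set(sv_raw) & set(sv_std)))
```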

Nonparametric Estimation of the Variance Function with a Change Point

  • Kang, Kee-Hoon; Huh, Jib
    • Journal of the Korean Statistical Society / v.35 no.1 / pp.1-23 / 2006
  • In this paper we consider estimation of a discontinuous variance function in a nonparametric heteroscedastic random design regression model. We first propose estimators of the change point in the variance function and then construct an estimator of the entire variance function. We examine the rates of convergence of these estimators and give results on their asymptotics. Numerical work reveals that using the proposed change point analysis in variance function estimation is quite effective.