• Title/Summary/Keyword: L-Statistics

Search Result 626

Asymptotically Efficient L-Estimation for Regression Slope When Trimming is Given (절사가 주어질때 회귀기울기의 점근적 최량 L-추정법)

  • Sang Moon Han
    • The Korean Journal of Applied Statistics
    • /
    • v.7 no.2
    • /
    • pp.173-182
    • /
    • 1994
  • By applying the slope estimator for arbitrary error distributions proposed by Han (1993), and defining regression quantiles that give the upper and lower trimming parts and the blocks of data, we show that the proposed slope estimator is asymptotically efficient when the number of regression quantiles used to form the blocks of data becomes sufficiently large.
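The idea of a trimmed L-estimate built from regression quantiles can be sketched as follows. This is not Han's (1993) estimator; the data, the 0.2/0.8 trimming points, and the quantile grid are all hypothetical, and each regression quantile is fitted by directly minimizing the check loss with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)   # heavy-tailed errors

def check_loss(beta, tau):
    """Koenker-Bassett check loss for the line y = beta[0] + beta[1]*x."""
    u = y - beta[0] - beta[1] * x
    return np.sum(u * (tau - (u < 0)))

ls = np.polyfit(x, y, 1)                  # least-squares start: [slope, intercept]

def slope_at(tau):
    res = minimize(check_loss, x0=[ls[1], ls[0]], args=(tau,),
                   method="Nelder-Mead")
    return res.x[1]

# trimmed L-estimate: average the regression-quantile slopes between the
# lower (0.2) and upper (0.8) trimming points
taus = np.linspace(0.2, 0.8, 13)
slope = np.mean([slope_at(t) for t in taus])
print(round(slope, 2))
```

Because the error quantiles shift only the intercept, every regression-quantile slope estimates the same true slope (here 2), and averaging over the untrimmed quantile range downweights the heavy tails.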


A PRODUCT FORMULA FOR LOCALIZATION OPERATORS

  • Du, Jing-De;Wong, M.M.
    • Bulletin of the Korean Mathematical Society
    • /
    • v.37 no.1
    • /
    • pp.77-84
    • /
    • 2000
  • The product of two localization operators with symbols F and G in some subspace of $L^2(C^n)$ is shown to be a localization operator with symbol in $L^2(C^n)$ and a formula for the symbol of the product in terms of F and G is given.


The Doubly Regularized Quantile Regression

  • Choi, Ho-Sik;Kim, Yong-Dai
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.5
    • /
    • pp.753-764
    • /
    • 2008
  • The $L_1$ regularized estimator in quantile regression performs parameter estimation and model selection simultaneously and has been shown to enjoy good performance. However, the $L_1$ regularized estimator has a drawback: when there are several highly correlated variables, it tends to pick only a few of them. To remedy this, the proposed method adopts a doubly regularized framework with a mixture of the $L_1$ and $L_2$ norms. As a result, the proposed method can select significant variables and encourage highly correlated variables to be selected together. One of the most appealing features of the new algorithm is that it constructs the entire solution path of the doubly regularized quantile estimator. We investigate its performance through simulations and real data analysis.
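A minimal sketch of the doubly regularized ($L_1$ + $L_2$) idea for median regression, assuming synthetic data with two highly correlated predictors; the subgradient/soft-threshold solver and all tuning constants are illustrative and are not the paper's solution-path algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)   # two highly correlated predictors
beta_true = np.array([2.0, 2.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(n)

def fit(tau=0.5, lam1=0.05, lam2=0.05, lr=0.01, iters=5000):
    b = np.zeros(p)
    for _ in range(iters):
        u = y - X @ b
        # subgradient of the check loss plus gradient of the L2 (ridge) term
        g = -X.T @ (tau - (u < 0)) / n + 2 * lam2 * b
        b -= lr * g
        # soft-threshold: proximal step for the L1 term
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam1, 0.0)
    return b

b = fit()
print(np.round(b, 2))
```

With the ridge term active, the two nearly identical predictors receive similar nonzero coefficients instead of one of them being dropped, which is the grouping effect the abstract describes.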

Robust varying coefficient model using L1 regularization

  • Hwang, Changha;Bae, Jongsik;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.1059-1066
    • /
    • 2016
  • In this paper we propose a robust version of varying coefficient models based on regularized regression with L1 regularization. We use an iteratively reweighted least squares procedure to solve the L1 regularized objective function of the varying coefficient model in locally weighted regression form. It provides efficient computation of the coefficient function estimates and variable selection for a given value of the smoothing variable. We present a generalized cross validation function and an Akaike information type criterion for model selection. Applications of the proposed model are illustrated through artificial examples and a real example of predicting the effect of the input variables and the smoothing variable on the output.
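The locally weighted, L1-penalized fit can be sketched with an iteratively reweighted least squares step in which the L1 penalty is approximated by a quadratic reweighting. This is a generic IRLS surrogate, not the authors' exact algorithm; the data, kernel bandwidth, and penalty level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
t = rng.uniform(0, 1, n)                     # smoothing variable
x = rng.standard_normal((n, 3))
# only the first coefficient function is nonzero: beta1(t) = sin(2*pi*t)
y = np.sin(2 * np.pi * t) * x[:, 0] + 0.1 * rng.standard_normal(n)

def coef_at(t0, lam=0.5, h=0.1, iters=30):
    k = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights in t
    XtK = x.T * k
    b = np.linalg.solve(XtK @ x, XtK @ y)    # unpenalized local start
    for _ in range(iters):
        # IRLS surrogate for the L1 penalty: |b_j| ~ b_j^2 / |b_j^old|
        d = lam / (np.abs(b) + 1e-6)
        b = np.linalg.solve(XtK @ x + np.diag(d), XtK @ y)
    return b

b = coef_at(0.25)
print(np.round(b, 2))
```

At t0 = 0.25 the first coefficient should be close to sin(pi/2) = 1 (slightly attenuated by kernel smoothing), while the irrelevant coefficients are shrunk toward zero.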

L1-penalized AUC-optimization with a surrogate loss

  • Hyungwoo Kim;Seung Jun Shin
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.203-212
    • /
    • 2024
  • The area under the ROC curve (AUC) is one of the most common criteria used to measure the overall performance of binary classifiers for a wide range of machine learning problems. In this article, we propose an L1-penalized AUC-optimization classifier that directly maximizes the AUC for high-dimensional data. Toward this, we employ an AUC-consistent surrogate loss function combined with the L1-norm penalty, which enables us to estimate coefficients and select informative variables simultaneously. In addition, we develop an efficient optimization algorithm by adopting k-means clustering and proximal gradient descent, which enjoys computational advantages in obtaining solutions for the proposed method. Numerical simulation studies demonstrate that the proposed method shows promising performance in terms of prediction accuracy, variable selectivity, and computational cost.
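The surrogate-loss idea can be sketched with a pairwise logistic surrogate for the AUC and a proximal gradient step for the L1 penalty. This omits the paper's k-means pair reduction and uses hypothetical data and tuning constants:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 150, 5
X = rng.standard_normal((n, p))
w_true = np.array([1.5, -1.5, 0.0, 0.0, 0.0])
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(int)

# all positive-negative score differences; the AUC counts how often they are > 0
D = (X[y == 1][:, None, :] - X[y == 0][None, :, :]).reshape(-1, p)

def fit(lam=0.02, lr=0.1, iters=2000):
    w = np.zeros(p)
    for _ in range(iters):
        s = np.clip(D @ w, -30, 30)
        # gradient of the pairwise logistic surrogate log(1 + exp(-s))
        g = -(D * (1 / (1 + np.exp(s)))[:, None]).mean(axis=0)
        w -= lr * g
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # proximal L1 step
    return w

def auc(w):
    s = X @ w
    return np.mean(s[y == 1][:, None] > s[y == 0][None, :])

w = fit()
print(np.round(w, 2), round(auc(w), 3))
```

Maximizing the surrogate pushes positive-class scores above negative-class scores, which is exactly the event the empirical AUC counts, while the soft-threshold step zeroes out uninformative coefficients.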

A comparison study of various robust regression estimators using simulation (시뮬레이션을 통한 다양한 로버스트 회귀추정량의 비교 연구)

  • Jang, Soohee;Yoon, Jungyeon;Chun, Heuiju
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.3
    • /
    • pp.471-485
    • /
    • 2016
  • Least squares (LS) regression is a classic method that is optimal under the usual regression assumptions and well-behaved observations. However, the presence of unusual data seriously distorts LS estimates. Therefore, various robust estimation methods have been proposed to circumvent the limitations of traditional LS regression. Among these are M-estimators based on maximum likelihood estimation (MLE), L-estimators based on linear combinations of order statistics, and R-estimators based on linear combinations of the ordered residuals. In this paper, robust regression estimators with a high breakdown point and/or high efficiency are compared under several simulated situations. The paper analyzes and compares the distributions of the estimates as well as relative efficiencies calculated from mean squared errors (MSE) in the simulation study. We conclude that MM-estimators or GR-estimators are a good choice for real data applications.
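The kind of simulation comparison the abstract describes can be sketched with one M-estimator (Huber, fitted by IRLS) against LS under a contaminated error distribution; the contamination level, sample size, and tuning constant are illustrative, and this is only one of the estimators the paper compares:

```python
import numpy as np

rng = np.random.default_rng(3)

def ls_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def huber_slope(x, y, c=1.345, iters=50):
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ b
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale (MAD)
        w = np.clip(c / (np.abs(r / s) + 1e-12), None, 1.0)  # Huber weights
        XtW = X.T * w
        b = np.linalg.solve(XtW @ X, XtW @ y)              # weighted LS step
    return b[1]

reps, n = 200, 50
mse = {"LS": 0.0, "Huber": 0.0}
for _ in range(reps):
    x = rng.uniform(0, 10, n)
    # 90% N(0,1) errors, 10% gross outliers from N(0,100)
    e = np.where(rng.random(n) < 0.1, rng.normal(0, 10, n), rng.normal(0, 1, n))
    y = 1.0 + 2.0 * x + e
    mse["LS"] += (ls_slope(x, y) - 2.0) ** 2 / reps
    mse["Huber"] += (huber_slope(x, y) - 2.0) ** 2 / reps
print({k: round(v, 4) for k, v in mse.items()})
```

Under contamination, the downweighting of large residuals gives the M-estimator a visibly smaller MSE than LS, which is the qualitative pattern the paper's broader comparison quantifies.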

Prediction of recent earthquake magnitudes of Gyeongju and Pohang using historical earthquake data of the Chosun Dynasty (조선시대 역사지진자료를 이용한 경주와 포항의 최근 지진규모 예측)

  • Kim, Jun Cheol;Kwon, Sookhee;Jang, Dae-Heung;Rhee, Kun Woo;Kim, Young-Seog;Ha, Il Do
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.1
    • /
    • pp.119-129
    • /
    • 2022
  • In this paper, we predict the earthquake magnitudes that recently occurred in Gyeongju and Pohang, using statistical methods based on historical data. For this purpose, we use the five-year block maximum data for the 1392~1771 period, which has a relatively high annual density, among the historical earthquake magnitude data of the Chosun Dynasty. We then present predictions and analysis of earthquake magnitudes for the return level over a return period in the Chosun Dynasty, using extreme value theory based on the generalized extreme value (GEV) distribution. We use maximum likelihood estimation (MLE) and L-moments estimation for the parameters of the GEV distribution. In particular, this study also demonstrates via goodness-of-fit tests that the GEV distribution can be an appropriate analytical model for these historical earthquake magnitude data.
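The MLE-plus-return-level workflow can be sketched with SciPy's GEV distribution. The block-maximum data here are simulated with hypothetical parameters, not the Chosun Dynasty records, and the L-moments alternative is not shown:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# hypothetical five-year block-maximum magnitudes (the paper uses 1392-1771 data)
data = genextreme.rvs(-0.1, loc=5.0, scale=0.5, size=76, random_state=rng)

# MLE for the GEV parameters (SciPy's shape c corresponds to -xi)
c, loc, scale = genextreme.fit(data)

# return level for a 100-block return period: the quantile exceeded
# once per 100 blocks on average
ret_100 = genextreme.ppf(1 - 1 / 100, c, loc, scale)
print(round(ret_100, 2))
```

The return level is simply the fitted GEV quantile at probability 1 - 1/T, so longer return periods map to larger predicted magnitudes.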

PSEUDO-CHEBYSHEV SUBSPACES IN $L^1$

  • Mohebi, H.
    • Journal of applied mathematics & informatics
    • /
    • v.7 no.2
    • /
    • pp.585-595
    • /
    • 2000
  • We give various characterizations of pseudo-Chebyshev subspaces in the spaces $L^1(S, \mu)$ and $C(T)$.

Existence theorems of an operator-valued feynman integral as an $L(L_1,C_0)$ theory

  • Ahn, Jae-Moon;Chang, Kun-Soo;Kim, Jeong-Gyoo;Ko, Jung-Won;Ryu, Kun-Sik
    • Bulletin of the Korean Mathematical Society
    • /
    • v.34 no.2
    • /
    • pp.317-334
    • /
    • 1997
  • The existence of an operator-valued function space integral as an operator on $L_p(R)$ $(1 \leq p \leq 2)$ was established for certain functionals which involved the Lebesgue measure [1,2,6,7]. Johnson and Lapidus showed the existence of the integral as an operator on $L_2(R)$ for certain functionals which involved arbitrary Borel measures [5]. J. S. Chang and Johnson proved the existence of the integral as an operator from $L_1(R)$ to $C_0(R)$ for certain functionals involving some Borel measures [3]. K. S. Chang and K. S. Ryu showed the existence of the integral as an operator from $L_p(R)$ to $L_{p'}(R)$ for certain functionals involving some Borel measures [4].


A study of the optimal subgroup size for estimating variance in autocorrelated small samples (소표본 자기상관 자료의 분산 추정을 위한 최적 부분군 크기에 대한 연구)

  • Lee, Jong-Seon;Lee, Jae-Jun;Bae, Soon-Hee
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 2007.04a
    • /
    • pp.302-309
    • /
    • 2007
  • Statistical process control requires the assumption that the process data are independent. However, most chemical processes, such as semiconductor processes, do not satisfy this assumption because of autocorrelation, which causes spurious out-of-control signals in process control and a misleading process capability. In this study, we introduce Shore's method to solve the problem and to find the optimal subgroup size for estimating the variance of an AR(1) model. In particular, we focus on finding a practical subgroup size for small samples using simulation. This may be very useful in statistical process control for analyzing process capability and constructing a proper Shewhart chart.
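Why subgroup size matters under autocorrelation can be sketched directly: for a positively autocorrelated AR(1) process, the average within-subgroup variance underestimates the true process variance, and the bias shrinks as the subgroup grows. This only illustrates the bias; Shore's method and the paper's small-sample recommendations are not implemented, and the AR(1) coefficient is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
phi, n = 0.5, 100_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]          # AR(1) process
true_var = 1 / (1 - phi ** 2)             # stationary variance of AR(1)

ratios = {}
for m in (2, 5, 10, 25):
    groups = x[: n - n % m].reshape(-1, m)
    # mean within-subgroup sample variance, relative to the process variance
    ratios[m] = groups.var(axis=1, ddof=1).mean() / true_var
    print(m, round(ratios[m], 3))
```

With phi = 0.5 the ratio is about 0.5 at m = 2 and climbs toward 1 as m increases, which is why a control chart built from small subgroups of autocorrelated data understates the process variance.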
