• Title/Summary/Keyword: Quadratic loss


Estimators with Nondecreasing Risk in a Multivariate Normal Distribution

  • Kim, Byung-Hwee; Koh, Tae-Wook; Baek, Hoh-Yoo
    • Journal of the Korean Statistical Society / v.24 no.1 / pp.257-266 / 1995
  • Consider a p-variate $(p \geq 4)$ normal distribution with mean $\boldsymbol{\theta}$ and identity covariance matrix. For estimating $\boldsymbol{\theta}$ under a quadratic loss we investigate the behavior of the risks of Stein-type estimators which shrink the usual estimator toward the mean of the observations. By using concavity of the function appearing in the shrinkage factor, together with new expectation identities for noncentral chi-squared random variables, a characterization of estimators with nondecreasing risk is obtained.

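
The Lindley-type rules recurring in these entries shrink the usual estimator toward the grand mean of its coordinates. As a minimal sketch, here is the classical unconstrained form with shrinkage constant $p-3$; the papers above derive optimal and dominating choices of the shrinkage factor within this class, which this sketch does not attempt:

```python
import numpy as np

def lindley_estimator(x):
    """Lindley-type estimator: shrink the usual estimator x toward the
    grand mean of its coordinates (classical unconstrained form; the
    papers above study refined shrinkage factors within this class)."""
    p = len(x)
    if p < 4:
        raise ValueError("shrinkage toward the mean requires p >= 4")
    xbar = x.mean()
    resid = x - xbar                 # deviation from the grand mean
    s = float(resid @ resid)         # squared norm ||x - xbar*1||^2
    factor = 1.0 - (p - 3) / s       # Lindley shrinkage factor
    return xbar + factor * resid

x = np.array([2.0, 1.5, 3.0, 2.5, 1.0])
print(lindley_estimator(x))          # each coordinate is pulled toward xbar = 2
```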

Lindley Type Estimators with the Known Norm

  • Baek, Hoh-Yoo
    • Journal of the Korean Data and Information Science Society / v.11 no.1 / pp.37-45 / 2000
  • Consider the problem of estimating a $p \times 1$ mean vector $\underline{\theta}$ $(p \geq 4)$ under the quadratic loss, based on a sample $\underline{x}_1, \cdots, \underline{x}_n$. We find an optimal decision rule within the class of Lindley type decision rules which shrink the usual one toward the mean of the observations when the underlying distribution is a variance mixture of normals and when the norm $\|\underline{\theta} - \bar{\theta}\underline{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p} \theta_i$ and $\underline{1}$ is the column vector of ones.


Lindley Type Estimation with Constraints on the Norm

  • Baek, Hoh-Yoo; Han, Kyou-Hwan
    • Honam Mathematical Journal / v.25 no.1 / pp.95-115 / 2003
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p \geq 4)$ under the quadratic loss, based on a sample $X_1, \cdots, X_n$. We find an optimal decision rule within the class of Lindley type decision rules which shrink the usual one toward the mean of the observations when the underlying distribution is a variance mixture of normals and when the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p} \theta_i$ and $\mathbf{1}$ is the column vector of ones. When the norm is restricted to a known interval, typically no optimal Lindley type rule exists, but we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.


Lindley Type Estimators When the Norm is Restricted to an Interval

  • Baek, Hoh-Yoo; Lee, Jeong-Mi
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.1027-1039 / 2005
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p \geq 4)$ under the quadratic loss, based on a sample $X_1, X_2, \cdots, X_n$. We find a Lindley type decision rule which shrinks the usual one toward the mean of the observations when the underlying distribution is a variance mixture of normals and when the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is restricted to a known interval, where $\bar{\theta} = \frac{1}{p}\sum_{i=1}^{p} \theta_i$ and $\mathbf{1}$ is the column vector of ones. In this case, we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.


James-Stein Type Estimators Shrinking towards Projection Vector When the Norm is Restricted to an Interval

  • Baek, Hoh Yoo; Park, Su Hyang
    • Journal of Integrative Natural Science / v.10 no.1 / pp.33-39 / 2017
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p - q \geq 3$, $q = \mathrm{rank}(P_V))$ with a projection matrix $P_V$ under the quadratic loss, based on a sample $X_1, X_2, \cdots, X_n$. We find a James-Stein type decision rule which shrinks toward the projection vector when the underlying distribution is a variance mixture of normals and when the norm $\|\theta - P_V\theta\|$ is restricted to a known interval, where $P_V$ is an idempotent projection matrix with $\mathrm{rank}(P_V) = q$. In this case, we characterize a minimal complete class within the class of James-Stein type decision rules. We also characterize the subclass of James-Stein type decision rules that dominate the sample mean.

An improvement of estimators for the multinormal mean vector with the known norm

  • Kim, Jaehyun; Baek, Hoh Yoo
    • Journal of the Korean Data and Information Science Society / v.28 no.2 / pp.435-442 / 2017
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p \geq 3)$ under the quadratic loss from a multivariate normal population. We find a James-Stein type estimator which shrinks toward the projection vector when the underlying distribution is a variance mixture of normals. In this case, the norm $\|\theta - K\theta\|$ is known, where $K$ is a projection matrix with $\mathrm{rank}(K) = q$. This class of estimators is general enough to include the class of estimators proposed by Marchand and Giri (1993). We derive the class and obtain the optimal estimator of this type. This research can also be applied to the simple and multiple regression model in the case $\mathrm{rank}(K) \geq 2$.

Flexible Process Performance Measures by Quadratic Loss Function (이차손실함수를 이용한 유동적인 공정수행척도)

  • 정영배
    • Journal of Korean Society of Industrial and Systems Engineering / v.18 no.36 / pp.275-285 / 1995
  • In recent years there has been increasing interest in the issue of process centering in manufacturing processes. The traditional process capability indices Cp, Cpk and Cpu are used to provide measures of process performance, but these indices do not address process centering. The process capability index Cpm has been proposed as a measure that takes into account the proximity to the target value as well as the process variation when assessing process performance. However, Cpm considers only the acceptance cost for deviation from the target value within the specification limits and does not include any economic consideration of rejected items. This paper proposes flexible process performance measures that consider the quadratic loss caused by quality deviation within the specification limits, the rejection cost associated with the disposition of rejected items, and the inspection cost. In this model the disposition of rejected items is considered both under perfect corrective procedures and in the absence of perfect corrective procedures.

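
For reference, the standard quadratic-loss quantities the abstract builds on can be sketched as follows; the paper's flexible measures additionally fold in rejection and inspection costs, which are not modeled here:

```python
import math

def expected_quadratic_loss(mu, sigma, target, k=1.0):
    """Expected Taguchi quadratic loss E[k(Y - T)^2] = k*(sigma^2 + (mu - T)^2)."""
    return k * (sigma ** 2 + (mu - target) ** 2)

def cpm(usl, lsl, mu, sigma):
    """Capability index Cpm, which penalizes deviation from the target."""
    target = (usl + lsl) / 2.0       # assumes the target is the spec midpoint
    return (usl - lsl) / (6.0 * math.sqrt(sigma ** 2 + (mu - target) ** 2))

# A process centered on target scores Cpm = 1 here; drifting off target
# raises the expected loss and lowers Cpm even if sigma is unchanged.
print(cpm(10.0, 4.0, 7.0, 1.0), cpm(10.0, 4.0, 8.0, 1.0))
```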

An approach to improving the James-Stein estimator shrinking towards projection vectors

  • Park, Tae Ryong; Baek, Hoh Yoo
    • Journal of the Korean Data and Information Science Society / v.25 no.6 / pp.1549-1555 / 2014
  • Consider a p-variate normal distribution ($p - q \geq 3$, $q = \mathrm{rank}(P_V)$ with a projection matrix $P_V$). Using a simple property of the noncentral chi-squared distribution, generalized Bayes estimators dominating the James-Stein estimator that shrinks toward projection vectors under quadratic loss are given, based on the methods of Brown, Brewster and Zidek for estimating a normal variance. This result can be extended to the cases where the covariance matrix is completely unknown or $\Sigma = \sigma^2 I$ for an unknown scalar $\sigma^2$.
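
As a sketch, the baseline James-Stein-type rule being dominated shrinks $X$ toward its projection $P_V X$; the identity-covariance form below is illustrative, and the paper's generalized Bayes improvements are not reproduced. Shrinking toward the grand mean (the Lindley rule above) is the special case $P_V = \frac{1}{p}\mathbf{1}\mathbf{1}^T$:

```python
import numpy as np

def js_toward_projection(x, P):
    """James-Stein-type estimator shrinking x toward its projection P @ x
    (identity-covariance sketch; the paper constructs generalized Bayes
    rules that dominate this one)."""
    p = len(x)
    q = int(round(np.trace(P)))      # rank of an idempotent projection = its trace
    if p - q < 3:
        raise ValueError("requires p - q >= 3")
    resid = x - P @ x                # component orthogonal to the projection
    s = float(resid @ resid)
    factor = 1.0 - (p - q - 2) / s   # James-Stein shrinkage factor
    return P @ x + factor * resid

# Projecting onto the span of the ones vector reproduces the Lindley rule.
P = np.full((5, 5), 1.0 / 5.0)
x = np.array([2.0, 1.5, 3.0, 2.5, 1.0])
print(js_toward_projection(x, P))
```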

A study on the Time Series Prediction Using the Support Vector Machine (보조벡터 머신을 이용한 시계열 예측에 관한 연구)

  • 강환일; 정요원; 송영기
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.315-315 / 2000
  • In this paper, we perform time series prediction using the SVM (support vector machine). We make use of two different loss functions and two different kernel functions: i) the quadratic and the $\varepsilon$-insensitive loss functions; ii) the GRBF (Gaussian radial basis function) and the ERBF (exponential radial basis function) kernels. The Mackey-Glass time series is used for prediction. In both cases, we compare the SVM results with those of an ANN (artificial neural network) and show that the SVM performs better than the ANN.
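
The two losses and two kernels compared in the paper can be sketched as below; kernel parameterizations vary in the literature, so these forms are one common convention, not necessarily the authors' exact definitions:

```python
import math

def quadratic_loss(residual):
    """Quadratic (squared-error) loss."""
    return residual ** 2

def eps_insensitive_loss(residual, eps=0.1):
    """Vapnik's epsilon-insensitive loss: errors inside the eps tube cost nothing."""
    return max(abs(residual) - eps, 0.0)

def grbf(x, y, sigma=1.0):
    """Gaussian radial basis function kernel."""
    return math.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))

def erbf(x, y, sigma=1.0):
    """Exponential radial basis function kernel (one common form)."""
    return math.exp(-abs(x - y) / (2.0 * sigma ** 2))

# Small residuals are free under the eps-insensitive loss but not the quadratic.
print(quadratic_loss(0.3), eps_insensitive_loss(0.3), eps_insensitive_loss(0.05))
```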

Simultaneous Optimization Using Loss Functions in Multiple Response Robust Designs

  • Kwon, Yong Man
    • Journal of Integrative Natural Science / v.14 no.3 / pp.73-77 / 2021
  • Robust design is an approach to reducing the performance variation of multiple responses in products and processes. In fact, many experimental designs require the simultaneous optimization of multiple responses. In this paper, we propose a method for simultaneously optimizing multiple responses in robust design when data are collected from a combined array. The proposed method is based on the quadratic loss function. An example illustrates the proposed method.
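
A quadratic-loss criterion for multiple responses can be sketched as a weighted sum of expected per-response losses; the weights, targets, and variance estimates below are illustrative placeholders, not values from the paper:

```python
def total_expected_loss(means, variances, targets, weights):
    """Expected total quadratic loss over multiple responses:
    sum_j w_j * (var_j + (mean_j - target_j)^2)."""
    return sum(w * (v + (m - t) ** 2)
               for w, v, m, t in zip(weights, variances, means, targets))

# Two responses: the first is on target, the second is biased by 1 unit.
loss = total_expected_loss(means=[10.0, 5.0], variances=[1.0, 0.25],
                           targets=[10.0, 4.0], weights=[1.0, 2.0])
print(loss)                          # 1*(1+0) + 2*(0.25+1) = 3.5
```

Simultaneous optimization then amounts to choosing the controllable factor settings that minimize this single scalar criterion.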