• Title/Summary/Keyword: Lindley type decision rule

Search Results: 4

Lindley Type Estimation with Constrains on the Norm

  • Baek, Hoh-Yoo; Han, Kyou-Hwan
    • Honam Mathematical Journal / v.25 no.1 / pp.95-115 / 2003
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p \geq 4)$ under quadratic loss, based on a sample $X_1, \ldots, X_n$. We find an optimal decision rule within the class of Lindley type decision rules, which shrink the usual estimator toward the mean of the observations, when the underlying distribution is a variance mixture of normals and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p} \theta_i$ and $\mathbf{1}$ is the column vector of ones. When the norm is restricted to a known interval, typically no optimal Lindley type rule exists, but we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.

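For orientation, the classical Lindley modification of the James-Stein estimator, which these papers generalize, shrinks the observation vector toward its grand mean. Below is a minimal sketch under the standard assumptions $X \sim N_p(\theta, \sigma^2 I)$ with known variance and $p \geq 4$; the papers' optimal rules for variance mixtures of normals with a known (or interval-restricted) norm refine the shrinkage constant and are not reproduced here.

```python
import numpy as np

def lindley_estimator(x, sigma2=1.0):
    """Classical Lindley-type shrinkage of a p-vector toward its grand mean.

    Assumes x ~ N_p(theta, sigma2 * I) with known sigma2 and p >= 4.
    The constant (p - 3) reflects shrinkage in the (p - 1)-dimensional
    space of deviations from the grand mean.
    """
    x = np.asarray(x, dtype=float)
    p = x.shape[0]
    grand_mean = x.mean()            # \bar{x}
    dev = x - grand_mean             # x - \bar{x} 1
    shrink = 1.0 - (p - 3) * sigma2 / dev.dot(dev)
    return grand_mean + shrink * dev
```

Note that the rule pulls each component toward the common mean while leaving the grand mean itself unchanged.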

Lindley Type Estimators When the Norm is Restricted to an Interval

  • Baek, Hoh-Yoo; Lee, Jeong-Mi
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.1027-1039 / 2005
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ $(p \geq 4)$ under quadratic loss, based on a sample $X_1, X_2, \ldots, X_n$. We find a Lindley type decision rule which shrinks the usual estimator toward the mean of the observations when the underlying distribution is a variance mixture of normals and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is restricted to a known interval, where $\bar{\theta} = \frac{1}{p}\sum_{i=1}^{p} \theta_i$ and $\mathbf{1}$ is the column vector of ones. In this case, we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.


Lindley Type Estimators with the Known Norm

  • Baek, Hoh-Yoo
    • Journal of the Korean Data and Information Science Society / v.11 no.1 / pp.37-45 / 2000
  • Consider the problem of estimating a $p \times 1$ mean vector $\underline{\theta}$ $(p \geq 4)$ under quadratic loss, based on a sample $\underline{x}_1, \ldots, \underline{x}_n$. We find an optimal decision rule within the class of Lindley type decision rules, which shrink the usual estimator toward the mean of the observations, when the underlying distribution is a variance mixture of normals and the norm $\|\underline{\theta} - \bar{\theta}\underline{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p} \theta_i$ and $\underline{1}$ is the column vector of ones.


Optimal Estimation within Class of James-Stein Type Decision Rules on the Known Norm

  • Baek, Hoh Yoo
    • Journal of Integrative Natural Science / v.5 no.3 / pp.186-189 / 2012
  • For the mean vector of a p-variate normal distribution $(p \geq 3)$, the optimal estimator within the class of James-Stein type decision rules under quadratic loss is given when the underlying distribution is a variance mixture of normals and the norm $\|\underline{\theta}\|$ is known. It is also demonstrated that the optimal estimator within the class of Lindley type decision rules under the same loss is obtained when the underlying distribution is of the same type and the norm $\|\underline{\theta} - \bar{\theta}\underline{1}\|$, with $\bar{\theta} = \frac{1}{p}\sum_{i=1}^{p} \theta_i$ and $\underline{1} = (1, \ldots, 1)'$, is known.
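The James-Stein rule this entry builds on, in its textbook form shrinking toward the origin, can be sketched as follows. The constant $p - 2$ is the classical choice for the normal model with known variance; the paper itself derives the optimal constant when the norm $\|\underline{\theta}\|$ is known and the distribution is a variance mixture of normals, which this sketch does not attempt to reproduce.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Classical James-Stein estimator shrinking x toward the origin.

    Assumes x ~ N_p(theta, sigma2 * I) with known sigma2 and p >= 3.
    """
    x = np.asarray(x, dtype=float)
    p = x.shape[0]
    shrink = 1.0 - (p - 2) * sigma2 / x.dot(x)
    return shrink * x
```

For example, with $x = (3, 0, 0)'$ and $\sigma^2 = 1$ the shrinkage factor is $1 - 1/9 = 8/9$, so every component is scaled toward zero by that factor.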