• Title/Abstract/Keyword: Lindley type decision rule

Search results: 4

Lindley Type Estimation with Constraints on the Norm

  • Baek, Hoh-Yoo; Han, Kyou-Hwan
    • 호남수학학술지 (Honam Mathematical Journal) / Vol. 25, No. 1 / pp. 95-115 / 2003
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ ($p \geq 4$) under quadratic loss, based on a sample $X_1, \cdots, X_n$. We find an optimal decision rule within the class of Lindley type decision rules, which shrink the usual estimator toward the mean of the observations, when the underlying distribution is a variance mixture of normals and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p}\theta_i$ and $\mathbf{1}$ is the column vector of ones. When the norm is restricted to a known interval, typically no optimal Lindley type rule exists, but we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.

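The abstract above does not reproduce the explicit form of the rules. For orientation only, the sketch below implements the classical Lindley-type shrinkage of a sample mean vector toward the mean of its own components under normality; the $(p-3)$ multiplier is the textbook choice, and the function name, data, and sizes are illustrative, not the paper's variance-mixture result.

```python
import numpy as np

def lindley_shrinkage(xbar, sigma2, n):
    """Shrink the p-dimensional sample mean vector `xbar` toward the mean of
    its own components (classical Lindley-type rule; requires p >= 4).
    `sigma2` is the per-component variance of one observation, so each
    component of `xbar` has variance sigma2 / n."""
    p = xbar.shape[0]
    grand = xbar.mean()                       # (1/p) * sum of the components
    resid = xbar - grand                      # xbar - grand * 1
    factor = 1.0 - (p - 3) * sigma2 / (n * np.dot(resid, resid))
    return grand + factor * resid             # grand * 1 + factor * (xbar - grand * 1)

# Illustrative use with simulated data (theta and sample size are made up).
rng = np.random.default_rng(0)
theta = np.array([1.0, 1.2, 0.9, 1.1, 1.0])           # p = 5 true means
x = rng.normal(theta, 1.0, size=(20, theta.size))      # n = 20 observations
print(lindley_shrinkage(x.mean(axis=0), sigma2=1.0, n=20))
```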

Lindley Type Estimators When the Norm is Restricted to an Interval

  • Baek, Hoh-Yoo; Lee, Jeong-Mi
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 4 / pp. 1027-1039 / 2005
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ ($p \geq 4$) under quadratic loss, based on a sample $X_1, X_2, \cdots, X_n$. We find a Lindley type decision rule, which shrinks the usual estimator toward the mean of the observations, when the underlying distribution is a variance mixture of normals and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is restricted to a known interval, where $\bar{\theta} = (1/p)\sum_{i=1}^{p}\theta_i$ and $\mathbf{1}$ is the column vector of ones. In this case, we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.

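As an illustration only (not the rule derived in this paper), the sketch below shows one way an interval constraint $a \leq \|\theta - \bar{\theta}\mathbf{1}\| \leq b$ could enter a Lindley-type rule: the data-driven multiplier is clipped to the range of optimal constant multipliers implied by the two endpoints, where $\lambda \mapsto \lambda^2/(\lambda^2 + (p-1)\sigma^2/n)$ is the standard normal-theory optimum for constant-multiplier rules. All names and the clipping scheme itself are hypothetical.

```python
import numpy as np

def interval_lindley(xbar, sigma2, n, a, b):
    """Hypothetical sketch: with the norm ||theta - thetabar*1|| only known
    to lie in [a, b], clip the usual data-driven Lindley multiplier to the
    range of optimal constant multipliers at the interval endpoints."""
    p = xbar.shape[0]
    grand = xbar.mean()
    resid = xbar - grand
    v = (p - 1) * sigma2 / n                  # noise part of E||xbar - grand*1||^2
    c_lo = a**2 / (a**2 + v)                  # optimal constant if the norm were a
    c_hi = b**2 / (b**2 + v)                  # optimal constant if the norm were b
    c_data = 1.0 - (p - 3) * sigma2 / (n * np.dot(resid, resid))
    c = np.clip(c_data, c_lo, c_hi)
    return grand + c * resid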

Lindley Type Estimators with the Known Norm

  • Baek, Hoh-Yoo
    • Journal of the Korean Data and Information Science Society / Vol. 11, No. 1 / pp. 37-45 / 2000
  • Consider the problem of estimating a $p \times 1$ mean vector $\theta$ ($p \geq 4$) under quadratic loss, based on a sample $X_1, \cdots, X_n$. We find an optimal decision rule within the class of Lindley type decision rules, which shrink the usual estimator toward the mean of the observations, when the underlying distribution is a variance mixture of normals and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p}\theta_i$ and $\mathbf{1}$ is the column vector of ones.

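When the norm $\lambda = \|\theta - \bar{\theta}\mathbf{1}\|$ is known exactly, constant-multiplier rules of the form $\hat{\mu}\mathbf{1} + c(\bar{X} - \hat{\mu}\mathbf{1})$, with $\hat{\mu}$ the mean of the components of the sample mean vector $\bar{X}$, have quadratic risk minimized under normality at $c = \lambda^2/(\lambda^2 + (p-1)\sigma^2/n)$. The sketch below applies this constant and checks by Monte Carlo that it improves on the raw sample mean; it is a simplified normal-theory illustration with made-up parameters, not the paper's variance-mixture optimum.

```python
import numpy as np

def known_norm_rule(xbar, sigma2, n, norm):
    """Constant-multiplier Lindley-type rule for known lambda = ||theta - thetabar*1||.
    For rules grand*1 + c*(xbar - grand*1) under normality, quadratic risk is
    minimized at c = lambda^2 / (lambda^2 + (p-1)*sigma2/n)."""
    p = xbar.shape[0]
    grand = xbar.mean()
    c = norm**2 / (norm**2 + (p - 1) * sigma2 / n)
    return grand + c * (xbar - grand)

# Monte Carlo check (theta is made up): the rule's average quadratic loss
# should come out below that of the plain sample mean.
rng = np.random.default_rng(1)
theta = np.array([0.5, 0.7, 0.4, 0.6, 0.55, 0.45])      # p = 6
lam = np.linalg.norm(theta - theta.mean())
n, reps = 10, 20000
loss_mean = loss_rule = 0.0
for _ in range(reps):
    xbar = rng.normal(theta, 1.0, size=(n, theta.size)).mean(axis=0)
    loss_mean += np.sum((xbar - theta) ** 2)
    loss_rule += np.sum((known_norm_rule(xbar, 1.0, n, lam) - theta) ** 2)
print(loss_mean / reps, loss_rule / reps)
```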

Optimal Estimation within Class of James-Stein Type Decision Rules on the Known Norm

  • Baek, Hoh Yoo
    • 통합자연과학논문집 (Journal of Integrative Natural Science) / Vol. 5, No. 3 / pp. 186-189 / 2012
  • For the mean vector of a $p$-variate normal distribution ($p \geq 3$), the optimal estimation within the class of James-Stein type decision rules under quadratic loss is given when the underlying distribution is a variance mixture of normals and the norm $\|\theta\|$ is known. The optimal estimation within the class of Lindley type decision rules under the same loss is also given when the underlying distribution is of the same type and the norm $\|\theta - \bar{\theta}\mathbf{1}\|$ is known, where $\bar{\theta} = (1/p)\sum_{i=1}^{p}\theta_i$ and $\mathbf{1} = (1, \cdots, 1)'$.
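For comparison with the Lindley-type sketches above, the following gives the analogous James-Stein-type constant-multiplier rule that shrinks toward the origin when $\|\theta\|$ is known; the factor $\|\theta\|^2/(\|\theta\|^2 + p\sigma^2/n)$ is the normal-theory optimum for rules of the form $c\bar{X}$, offered only as a simplified reference point rather than the paper's variance-mixture result.

```python
import numpy as np

def james_stein_known_norm(xbar, sigma2, n, norm):
    """James-Stein-type constant-multiplier rule c * xbar when ||theta|| = norm
    is known (normal-theory sketch).  For rules of the form c * xbar, the
    quadratic risk is minimized at c = norm^2 / (norm^2 + p*sigma2/n)."""
    p = xbar.shape[0]
    c = norm**2 / (norm**2 + p * sigma2 / n)
    return c * xbar
```

The only structural difference from the Lindley-type case is the shrinkage target (the origin rather than $\bar{\theta}\mathbf{1}$), which is why the noise term here involves $p$ rather than $p-1$.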