• Title/Summary/Keyword: Mixture of Normals


Estimation of the Mixture of Normals of Saving Rate Using Gibbs Algorithm (Gibbs알고리즘을 이용한 저축률의 정규분포혼합 추정)

  • Yoon, Jong-In
    • Journal of Digital Convergence, v.13 no.10, pp.219-224, 2015
  • This research estimates a mixture of normals for the household saving rate in Korea. Our sample is the MDSS micro-data for 2014, and the Gibbs algorithm is used to estimate the mixture of normals. The evidence yields several results. First, the Gibbs algorithm works very well in estimating the mixture of normals. Second, the saving rate data has at least two components, one with mean zero and the other with mean 29.4%, which suggests that households may separate into a high-saving group and a low-saving group. Third, the mixture-of-normals analysis alone cannot answer whether households actually separate in this way, and we find that income level and age cannot explain our results.
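
The estimation step described above can be made concrete with a short sketch. The following is a minimal Gibbs sampler for a two-component normal mixture, written under illustrative conjugate priors (normal means, inverse-gamma variances, Beta mixing weight); the priors, variable names, and initialisation are assumptions for exposition, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_two_normals(x, n_iter=2000):
    """Minimal Gibbs sampler for a two-component normal mixture.

    Illustrative priors: mu_k ~ N(0, 10^2), sigma_k^2 ~ InvGamma(2, 2),
    mixing weight p ~ Beta(1, 1).
    """
    n = len(x)
    mu = np.array([x.min(), x.max()], dtype=float)    # crude initialisation
    sig2 = np.array([x.var(), x.var()])
    p = 0.5
    draws = []
    for _ in range(n_iter):
        # 1) sample component labels z_i | rest (normalising constants cancel)
        d0 = p * np.exp(-(x - mu[0])**2 / (2*sig2[0])) / np.sqrt(sig2[0])
        d1 = (1-p) * np.exp(-(x - mu[1])**2 / (2*sig2[1])) / np.sqrt(sig2[1])
        z = rng.random(n) < d1 / (d0 + d1 + 1e-300)    # True -> component 1
        for k, idx in enumerate([~z, z]):
            nk = idx.sum()
            xb = x[idx].mean() if nk else 0.0
            # 2) sample mu_k | rest  (normal prior N(0, 100))
            post_var = 1.0 / (nk/sig2[k] + 1/100.0)
            mu[k] = rng.normal(post_var*nk*xb/sig2[k], np.sqrt(post_var))
            # 3) sample sigma_k^2 | rest  (inverse-gamma prior IG(2, 2))
            ssq = ((x[idx] - mu[k])**2).sum()
            sig2[k] = 1.0 / rng.gamma(2 + nk/2, 1.0/(2 + ssq/2))
        # 4) sample mixing weight p | rest  (Beta(1, 1) prior)
        p = rng.beta(1 + (~z).sum(), 1 + z.sum())
        draws.append((mu.copy(), sig2.copy(), p))
    return draws
```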

A STUDY ON THE PROBABILISTIC POWER SYSTEM PRODUCTION COSTING SIMULATION BY MONA AND MOCA METHOD (MONA 및 MOCA법에 의한 발전시뮬레이션에 관한 연구)

  • Song, K.Y.;Choi, J.S.;Kim, Y.H.
    • Proceedings of the KIEE Conference, 1989.07a, pp.207-210, 1989
  • In probabilistic production costing simulation, the cumulant method is widely used, but it has limitations in some cases. To overcome these drawbacks, the MONA (Mixture of Normals Approximation) method was proposed. The MONA method uses multiple normals to represent the equivalent load duration curve. In this paper, we investigate MONA's characteristics by comparing it with other methods and derive efficient formulae for MONA. We also propose the fundamental algorithm for the Mixture of Cumulants Approximation (MOCA), which is the general case of MONA.
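
As a rough illustration of the idea behind MONA, the sketch below fits a k-component normal mixture to load samples with plain EM and then evaluates the equivalent load duration curve as a weighted sum of normal survival functions. The fitting method (EM) and all parameter choices are assumptions for exposition; the paper's own formulae are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def fit_normal_mixture_em(load, k=3, n_iter=200):
    """Fit a k-component normal mixture to load samples with plain EM.

    Illustrative stand-in for a MONA-style representation of the load
    distribution; initialisation and k are arbitrary choices.
    """
    load = np.asarray(load, float)
    w = np.full(k, 1.0/k)
    mu = np.quantile(load, np.linspace(0.1, 0.9, k))
    sd = np.full(k, load.std())
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each sample
        dens = w * norm.pdf(load[:, None], mu, sd)        # shape (n, k)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(load)
        mu = (r * load[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (load[:, None] - mu)**2).sum(axis=0) / nk)
    return w, mu, sd

def eldc(x, w, mu, sd):
    """Duration curve value at load level x: P(load > x) under the mixture."""
    return (w * norm.sf(np.asarray(x, float)[..., None], mu, sd)).sum(axis=-1)
```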

Minimum Hellinger Distance Estimation and Minimum Density Power Divergence Estimation in Estimating Mixture Proportions

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society, v.16 no.4, pp.1159-1165, 2005
  • Basu et al. (1998) proposed a new density-based estimator, the minimum density power divergence estimator (MDPDE), which avoids nonparametric density estimation and associated complications such as bandwidth selection. Woodward et al. (1995) examined the minimum Hellinger distance estimator (MHDE), proposed by Beran (1977), for estimating the mixture proportion in a mixture of two normals. In this article, we introduce the MDPDE for a mixture proportion and show that the MDPDE and the MHDE have the same asymptotic distribution at the model. A simulation study identifies cases where the MHDE is consistently better than the MDPDE in terms of bias.
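
For orientation, the sketch below computes an MDPDE of the mixing proportion in a two-component normal mixture by numerically minimising the density power divergence objective of Basu et al. (1998), with the component means and variances treated as known. The tuning constant alpha and the known-component assumption are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def mdpde_proportion(x, mu=(0.0, 3.0), sd=(1.0, 1.0), alpha=0.5):
    """MDPDE of the mixing proportion p in p*N(mu1,sd1) + (1-p)*N(mu2,sd2).

    Component parameters are assumed known here (illustrative); alpha
    controls the robustness/efficiency trade-off of the divergence.
    """
    x = np.asarray(x, float)

    def mix_pdf(t, p):
        return p*norm.pdf(t, mu[0], sd[0]) + (1-p)*norm.pdf(t, mu[1], sd[1])

    def dpd(p):
        # integral of f_p^(1+alpha) over the real line, plus the data term
        integral, _ = quad(lambda t: mix_pdf(t, p)**(1+alpha), -np.inf, np.inf)
        return integral - (1 + 1/alpha) * np.mean(mix_pdf(x, p)**alpha)

    res = minimize_scalar(dpd, bounds=(0.0, 1.0), method="bounded")
    return res.x
```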

Improvement of the Modified James-Stein Estimator with Shrinkage Point and Constraints on the Norm

  • Kim, Jae Hyun;Baek, Hoh Yoo
    • Journal of Integrative Natural Science, v.6 no.4, pp.251-255, 2013
  • For the mean vector of a p-variate normal distribution ($p{\geq}4$), the optimal estimator within the class of modified James-Stein type decision rules under the quadratic loss is given when the underlying distribution is that of a variance mixture of normals and when the norm ${\parallel}{\theta}-\bar{\theta}1{\parallel}$ is known.
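
For context, the sketch below is the plain (positive-part) James-Stein rule shrinking the observation toward a fixed point; it is the textbook baseline behind the modified rules studied in the paper and does not impose the paper's known-norm constraint.

```python
import numpy as np

def james_stein(x, target=None, sigma2=1.0):
    """Positive-part James-Stein estimate of a p-variate normal mean (p >= 3),
    shrinking the observation x toward a fixed shrinkage point `target`.

    Illustrative baseline only; the modified rules in the paper additionally
    exploit the known norm, which is not reproduced here.
    """
    x = np.asarray(x, float)
    p = x.size
    if target is None:
        target = np.zeros(p)
    resid = x - target
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(resid, resid)
    return target + max(shrink, 0.0) * resid   # positive-part version
```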

A STUDY ON THE PROBABILISTIC PRODUCTION COST SIMULATION BY THE MIXTURE OF CUMULANTS APPROXIMATION (MIXTURE OF CUMULANTS APPROXIMATION 법에 의한 발전시뮬레이션에 관한 연구)

  • Song, K.Y.;Kim, Y.H.;Cha, J.M.
    • Proceedings of the KIEE Conference, 1990.07a, pp.154-157, 1990
  • This paper describes a new method of calculating expected energy generation and loss of load probability (LOLP) for electric power system operation and expansion planning. The method represents an equivalent load duration curve (ELDC) as a mixture of cumulants approximation (MOCA), which is the general case of the mixture of normals approximation (MONA). By regarding a load distribution as several normal distributions, rather than one normal distribution, and representing each of them in terms of a Gram-Charlier expansion, we improve the accuracy of the results. We developed an algorithm which automatically determines the number of distributions and the demarcation points. In modelling the supply system, we formed subsets of generators according to the number of generator outages; since the moments of each subset need to be computed rapidly, we further developed specific recursive formulae. The method is applied to test systems and the results are compared with those of the cumulant, MONA, and Booth-Baleriaux methods. It is verified that the MOCA method is faster and more accurate than the other methods.
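
The Gram-Charlier expansion named in the abstract can be illustrated with a short sketch: given the first four cumulants of a segment, evaluate the Type A series density using probabilists' Hermite polynomials. This shows only the expansion itself, not the paper's full MOCA procedure.

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_pdf(x, kappa):
    """Gram-Charlier (Type A) density from the first four cumulants.

    kappa = (k1, k2, k3, k4); uses probabilists' Hermite polynomials He3, He4.
    Illustrative sketch of the expansion, not the paper's MOCA algorithm.
    """
    k1, k2, k3, k4 = kappa
    sigma = np.sqrt(k2)
    z = (np.asarray(x, float) - k1) / sigma
    skew = k3 / sigma**3          # standardised third cumulant
    exkurt = k4 / sigma**4        # standardised fourth cumulant (excess kurtosis)
    he3 = z**3 - 3*z
    he4 = z**4 - 6*z**2 + 3
    correction = 1 + skew/6 * he3 + exkurt/24 * he4
    return norm.pdf(z) / sigma * correction
```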

A nonparametric Bayesian seemingly unrelated regression model (비모수 베이지안 겉보기 무관 회귀모형)

  • Jo, Seongil;Seok, Inhae;Choi, Taeryon
    • The Korean Journal of Applied Statistics, v.29 no.4, pp.627-641, 2016
  • In this paper, we consider a seemingly unrelated regression (SUR) model and propose a nonparametric Bayesian approach to SUR with a Dirichlet process mixture of normals for modeling the unknown error distribution. Posterior distributions are derived under the proposed model, and posterior inference is performed via Markov chain Monte Carlo methods using the collapsed Gibbs sampler of a Dirichlet process mixture model. We present a simulation study to assess the performance of the model and also apply the model to precipitation data over South Korea.
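
For context, the sketch below implements the classical two-step feasible GLS estimator for a SUR system, the parametric baseline that the proposed nonparametric Bayesian model generalises by replacing the error distribution with a Dirichlet process mixture of normals. The stacking convention and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def sur_fgls(X_list, y_list):
    """Two-step feasible GLS for a seemingly unrelated regression system.

    X_list[i] is the n x k_i design of equation i; y_list[i] its n responses.
    This is the classical parametric SUR baseline sketched for context only.
    """
    m = len(X_list)                              # number of equations
    n = y_list[0].shape[0]                       # common sample size
    # step 1: equation-by-equation OLS to get residuals
    betas = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in zip(X_list, y_list)]
    E = np.column_stack([y - X @ b for X, y, b in zip(X_list, y_list, betas)])
    Sigma = E.T @ E / n                          # estimated error covariance
    # step 2: GLS on the stacked system with Omega = Sigma kron I_n
    X_big = np.zeros((m*n, sum(X.shape[1] for X in X_list)))
    col = 0
    for i, X in enumerate(X_list):
        X_big[i*n:(i+1)*n, col:col+X.shape[1]] = X
        col += X.shape[1]
    y_big = np.concatenate(y_list)
    Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(n))
    A = X_big.T @ Omega_inv @ X_big
    b = X_big.T @ Omega_inv @ y_big
    return np.linalg.solve(A, b)                 # stacked FGLS coefficients
```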

Lindley Type Estimators When the Norm is Restricted to an Interval

  • Baek, Hoh-Yoo;Lee, Jeong-Mi
    • Journal of the Korean Data and Information Science Society, v.16 no.4, pp.1027-1039, 2005
  • Consider the problem of estimating a $p{\times}1$ mean vector ${\theta}$ ($p{\geq}4$) under the quadratic loss, based on a sample $X_1$, $X_2$, $\cdots$, $X_n$. We find a Lindley type decision rule which shrinks the usual one toward the mean of observations when the underlying distribution is that of a variance mixture of normals and when the norm ${\parallel}{\theta}-\bar{\theta}1{\parallel}$ is restricted to a known interval, where $\bar{\theta}=\frac{1}{p}\sum_{i=1}^{p}{\theta}_i$ and 1 is the column vector of ones. In this case, we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.
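
As a point of reference, the sketch below is the basic (positive-part) Lindley-type rule that shrinks the observation toward the grand mean; the paper's results concern the richer class of such rules under an interval constraint on the norm, which this sketch does not impose.

```python
import numpy as np

def lindley_estimator(x, sigma2=1.0):
    """Basic Lindley-type rule for a p-variate normal mean (p >= 4):
    shrink the observation toward the grand mean xbar * 1.

    Illustrative only; it does not impose the known-norm or interval
    constraints studied in the paper.
    """
    x = np.asarray(x, float)
    p = x.size
    center = x.mean() * np.ones(p)               # xbar * 1 (vector of ones)
    resid = x - center
    shrink = 1.0 - (p - 3) * sigma2 / np.dot(resid, resid)
    return center + max(shrink, 0.0) * resid     # positive-part version
```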

Lindley Type Estimation with Constraints on the Norm

  • Baek, Hoh-Yoo;Han, Kyou-Hwan
    • Honam Mathematical Journal, v.25 no.1, pp.95-115, 2003
  • Consider the problem of estimating a $p{\times}1$ mean vector ${\theta}$ ($p{\geq}4$) under the quadratic loss, based on a sample $X_1,\;{\cdots},\;X_n$. We find an optimal decision rule within the class of Lindley type decision rules which shrink the usual one toward the mean of observations when the underlying distribution is that of a variance mixture of normals and when the norm $||{\theta}-{\bar{\theta}}1||$ is known, where ${\bar{\theta}}=(1/p)\sum_{i=1}^p{\theta}_i$ and 1 is the column vector of ones. When the norm is restricted to a known interval, typically no optimal Lindley type rule exists, but we characterize a minimal complete class within the class of Lindley type decision rules. We also characterize the subclass of Lindley type decision rules that dominate the sample mean.

An improvement of estimators for the multinormal mean vector with the known norm

  • Kim, Jaehyun;Baek, Hoh Yoo
    • Journal of the Korean Data and Information Science Society, v.28 no.2, pp.435-442, 2017
  • Consider the problem of estimating a $p{\times}1$ mean vector ${\theta}$ ($p{\geq}3$) under the quadratic loss for a multivariate normal population. We find a James-Stein type estimator which shrinks toward the projection vector when the underlying distribution is that of a variance mixture of normals. In this case, the norm ${\parallel}{\theta}-K{\theta}{\parallel}$ is known, where K is a projection matrix with rank(K) = q. This class of estimators is general enough to include the class of estimators proposed by Merchand and Giri (1993). We derive the class and obtain the optimal estimator of this type. This research can also be applied to simple and multiple regression models in the case rank(K) ${\geq}2$.

Lindley Type Estimators with the Known Norm

  • Baek, Hoh-Yoo
    • Journal of the Korean Data and Information Science Society, v.11 no.1, pp.37-45, 2000
  • Consider the problem of estimating a $p{\times}1$ mean vector ${\underline{\theta}}$ ($p{\geq}4$) under the quadratic loss, based on a sample ${\underline{x}_{1}},\;{\cdots},\;{\underline{x}_{n}}$. We find an optimal decision rule within the class of Lindley type decision rules which shrink the usual one toward the mean of observations when the underlying distribution is that of a variance mixture of normals and when the norm ${\parallel}\;{\underline{\theta}}\;-\;{\bar{\theta}}{\underline{1}}\;{\parallel}$ is known, where ${\bar{\theta}}=(1/p){\sum_{i=1}^p}{\theta}_i$ and $\underline{1}$ is the column vector of ones.
