• Title/Summary/Keyword: marginal maximum likelihood


Extended Quasi-likelihood Estimation in Overdispersed Models

  • Kim, Choong-Rak;Lee, Kee-Won;Chung, Youn-Shik;Park, Kook-Lyeol
    • Journal of the Korean Statistical Society / v.21 no.2 / pp.187-200 / 1992
  • Samples are often found to be too heterogeneous to be explained by a one-parameter family of models, in the sense that the implicit mean-variance relationship in such a family is violated by the data. This phenomenon is often called over-dispersion. The most frequently used method for dealing with over-dispersion is to mix a one-parameter family, creating a two-parameter marginal mixture family for the data. In this paper, we investigate the performance of the maximum likelihood, method-of-moments, and maximum quasi-likelihood estimators in the negative binomial and beta-binomial distributions. Simulations are carried out over a range of mean and dispersion parameters in both distributions, and we conclude that the moment estimators are superior in terms of bias and asymptotic relative efficiency.

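As a rough illustration of the comparison described in the abstract above, the following sketch contrasts method-of-moments and maximum likelihood estimates of the negative binomial mean and dispersion on simulated counts. It is a minimal example: the parameterization follows SciPy's `nbinom`, and the sample size and parameter values are arbitrary, not taken from the paper.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate negative binomial counts with mean mu and dispersion k,
# so that Var(Y) = mu + mu**2 / k (hypothetical values, not from the paper).
mu_true, k_true, n = 5.0, 2.0, 500
p_true = k_true / (k_true + mu_true)            # scipy's nbinom uses (n=k, p)
y = rng.negative_binomial(k_true, p_true, n)

# Method-of-moments estimates: match the sample mean and variance.
ybar, s2 = y.mean(), y.var(ddof=1)
mu_mom = ybar
k_mom = ybar**2 / (s2 - ybar) if s2 > ybar else np.inf

# Maximum likelihood estimates via numerical optimization on the log scale.
def negloglik(theta):
    mu, k = np.exp(theta)
    return -stats.nbinom.logpmf(y, k, k / (k + mu)).sum()

res = optimize.minimize(negloglik, x0=np.log([ybar, 1.0]), method="Nelder-Mead")
mu_mle, k_mle = np.exp(res.x)

print(f"MoM: mu = {mu_mom:.3f}, k = {k_mom:.3f}")
print(f"MLE: mu = {mu_mle:.3f}, k = {k_mle:.3f}")
```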

A Comparison of Bayesian and Maximum Likelihood Estimations in a SUR Tobit Regression Model (SUR 토빗회귀모형에서 베이지안 추정과 최대가능도 추정의 비교)

  • Lee, Seung-Chun;Choi, Byongsu
    • The Korean Journal of Applied Statistics / v.27 no.6 / pp.991-1002 / 2014
  • Both Bayesian and maximum likelihood methods are efficient for the estimation of regression coefficients in various Tobit regression models (see, e.g., Chib, 1992; Greene, 1990; Lee and Choi, 2013); however, some researchers have recognized that the maximum likelihood method tends to underestimate the disturbance variance, which has implications for the estimation of marginal effects and the asymptotic standard errors of the estimates. The underestimation of the maximum likelihood estimate in a seemingly unrelated Tobit regression model is examined. A Bayesian method based on an objective noninformative prior is shown to provide proper estimates of the disturbance variance as well as the other regression parameters.
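
The abstract above concerns a seemingly unrelated (SUR) Tobit model; as context, the sketch below fits a plain single-equation Tobit model by maximum likelihood, including the disturbance standard deviation that the paper discusses. The design, parameter values, and censoring point are hypothetical, and neither the SUR structure nor the Bayesian estimator is reproduced here.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)

# Simulate a single-equation Tobit model, left-censored at zero (hypothetical design).
n = 200
x = rng.normal(size=n)
beta0, beta1, sigma = 1.0, 2.0, 1.5
y_star = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
y = np.maximum(y_star, 0.0)
cens = y_star <= 0.0

def negloglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    # Censored points contribute Phi(-mu/s); observed points contribute the normal density.
    ll = stats.norm.logcdf(-mu[cens] / s).sum()
    ll += stats.norm.logpdf(y[~cens], loc=mu[~cens], scale=s).sum()
    return -ll

res = optimize.minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"ML estimates: beta0 = {b0_hat:.3f}, beta1 = {b1_hat:.3f}, sigma = {sigma_hat:.3f}")
```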

Further Applications of Johnson's SU-normal Distribution to Various Regression Models

  • Choi, Pilsun;Min, In-Sik
    • Communications for Statistical Applications and Methods / v.15 no.2 / pp.161-171 / 2008
  • This study discusses Johnson's $S_U$-normal distribution, which captures a wide range of non-normality in various regression models. We provide likelihood inference using Johnson's $S_U$-normal distribution and propose a likelihood ratio (LR) test for normality. We also apply the $S_U$-normal distribution to binary and censored regression models. Monte Carlo simulations are used to show that the LR test using the $S_U$-normal distribution can serve as a model specification test for a normal error distribution, and that the $S_U$-normal maximum likelihood (ML) estimators tend to yield more reliable marginal effect estimates in the binary and censored models when the error distributions are non-normal.
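
The likelihood ratio test for normality mentioned above can be sketched with SciPy's built-in `johnsonsu` distribution: fit the normal and the $S_U$-normal by maximum likelihood and compare their log-likelihoods. The data-generating distribution and sample size below are arbitrary, and the chi-square(2) reference is only a rough guide since the normal arises as a limiting case of $S_U$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A skewed sample standing in for non-normal regression errors (hypothetical).
x = rng.gamma(shape=2.0, scale=1.0, size=300)

# Fit both candidate error distributions by maximum likelihood.
a, b, loc_su, scale_su = stats.johnsonsu.fit(x)
loc_n, scale_n = stats.norm.fit(x)

ll_su = stats.johnsonsu.logpdf(x, a, b, loc=loc_su, scale=scale_su).sum()
ll_n = stats.norm.logpdf(x, loc=loc_n, scale=scale_n).sum()

# Johnson's S_U adds two shape parameters, so chi-square(2) is used as an
# approximate reference distribution for the LR statistic.
lr = 2.0 * (ll_su - ll_n)
p_value = stats.chi2.sf(lr, df=2)
print(f"LR statistic = {lr:.2f}, approximate p-value = {p_value:.4f}")
```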

Analysis of the Frailty Model with Many Ties (동측치가 많은 FRAILTY 모형의 분석)

  • Kim Yongdai;Park Jin-Kyung
    • The Korean Journal of Applied Statistics / v.18 no.1 / pp.67-81 / 2005
  • Most of the previously proposed methods for the frailty model do not work well when there are many tied observations, partly because the empirical likelihood they use is not suitable for tied observations. In this paper, we propose a new method for the frailty model with many ties. The proposed method obtains the posterior distribution of the parameters using a binomial-form empirical likelihood and the Bayesian bootstrap; it yields stable results and is computationally fast. Simulations are conducted to compare the proposed method with the maximum marginal likelihood approach.
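
The Bayesian bootstrap ingredient mentioned above can be illustrated in isolation: rather than resampling observations, each replicate draws Dirichlet(1, ..., 1) weights over the observed points and recomputes a statistic under those weights, which remains well defined even with many ties. The sketch below applies this to a toy weighted mean; the paper's binomial-form empirical likelihood and the frailty model itself are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy sample with many tied values (hypothetical, not the paper's data).
y = np.array([1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 4, 5], dtype=float)

# Bayesian bootstrap: Dirichlet(1,...,1) weights over the observed points.
B = 2000
draws = np.empty(B)
for b in range(B):
    w = rng.dirichlet(np.ones(len(y)))
    draws[b] = np.sum(w * y)          # weighted mean as the illustrative statistic

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"posterior mean of E[Y]: {draws.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```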

A Note on Performance of Conditional Akaike Information Criteria in Linear Mixed Models

  • Lee, Yonghee
    • Communications for Statistical Applications and Methods / v.22 no.5 / pp.507-518 / 2015
  • Selecting a linear mixed model is not easy, since the main interest in model building can differ and the number of parameters in the model is not clearly defined. In this paper, the performance of the conditional Akaike Information Criterion and its bias-corrected version is compared with the marginal Bayesian and Akaike Information Criteria through a simulation study. The results indicate that the bias-corrected conditional Akaike Information Criterion performs well when the candidate models exclude large models containing the true model, but it prefers over-parameterized models more strongly as the set of candidate models grows. The marginal Bayesian and Akaike Information Criteria also have difficulty selecting the true model when the design for the random effects is nested.
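
For context on the criteria being compared, the sketch below fits a random-intercept linear mixed model by maximum likelihood with statsmodels and computes the marginal AIC from the marginal log-likelihood. The simulated design is hypothetical; the conditional AIC and its bias-corrected version, which require the conditional log-likelihood and an effective number of parameters, are only noted in a comment.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Random-intercept data: 20 groups of 10 observations (hypothetical setup).
groups = np.repeat(np.arange(20), 10)
u = rng.normal(scale=1.0, size=20)[groups]        # group-level random effects
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + u + rng.normal(scale=1.0, size=200)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

# Fit by maximum likelihood (not REML) so marginal log-likelihoods are comparable.
fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit(reml=False)

# Marginal AIC: -2 * marginal log-likelihood + 2 * (number of parameters);
# here 2 fixed effects + 1 random-intercept variance + 1 residual variance.
p = len(fit.fe_params) + 2
marginal_aic = -2.0 * fit.llf + 2.0 * p
print(f"marginal log-likelihood = {fit.llf:.2f}, marginal AIC = {marginal_aic:.2f}")
# A conditional AIC would instead use the likelihood conditional on the predicted
# random effects together with an effective number of parameters (not shown here).
```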

Noise Removal Using Complex Wavelet and Bernoulli-Gaussian Model (복소수 웨이블릿과 베르누이-가우스 모델을 이용한 잡음 제거)

  • Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.52-61 / 2006
  • The orthogonal wavelet transform generally used in image and signal processing applications has limited performance because it lacks shift invariance and has low directional selectivity. To overcome these drawbacks, the complex wavelet transform has been proposed. In this paper, we present an efficient image denoising method using the dual-tree complex wavelet transform and a Bernoulli-Gaussian prior model. For estimating the hyper-parameters of the Bernoulli-Gaussian model, we present two simple, non-iterative methods. We use a hypothesis-testing technique to estimate the mixing parameter of the Bernoulli random variable. Based on the estimated mixing parameter, the variance of the clean signal is obtained using a maximum generalized marginal likelihood (MGML) estimator. We simulate our denoising method using the dual-tree complex wavelet transform and compare our algorithm to well-known denoising schemes. Experimental results show that the proposed method produces good denoising results for high-frequency images at low computational cost.
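
The Bernoulli-Gaussian prior leads to a simple closed-form shrinkage rule for a noisy coefficient once the mixing probability and variances are available. The sketch below applies that posterior-mean rule to synthetic coefficients with known parameters; the paper's hypothesis-testing and MGML estimators of those parameters, and the dual-tree complex wavelet transform itself, are not reproduced.

```python
import numpy as np

def bernoulli_gaussian_shrink(w, p, sig_s2, sig_n2):
    """Posterior-mean estimate of s from w = s + n, where s is zero with
    probability 1 - p and N(0, sig_s2) with probability p, and n ~ N(0, sig_n2)."""
    var1 = sig_s2 + sig_n2
    f1 = np.exp(-w**2 / (2 * var1)) / np.sqrt(2 * np.pi * var1)      # signal present
    f0 = np.exp(-w**2 / (2 * sig_n2)) / np.sqrt(2 * np.pi * sig_n2)  # signal absent
    q = p * f1 / (p * f1 + (1 - p) * f0)      # posterior probability of a signal
    return q * (sig_s2 / var1) * w            # Wiener gain weighted by that probability

rng = np.random.default_rng(5)
n, p_true, sig_s, sig_n = 1000, 0.1, 3.0, 1.0
s = rng.binomial(1, p_true, n) * rng.normal(0.0, sig_s, n)   # sparse clean coefficients
w = s + rng.normal(0.0, sig_n, n)                            # noisy observations

s_hat = bernoulli_gaussian_shrink(w, p_true, sig_s**2, sig_n**2)
print(f"noisy MSE   : {np.mean((w - s) ** 2):.3f}")
print(f"denoised MSE: {np.mean((s_hat - s) ** 2):.3f}")
```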

Weighted Hω and New Paradox of κ (가중 합치도 Hω와 κ의 새로운 역설)

  • Kwon, Na-Young;Kim, Jin-Gon;Park, Yong-Gyu
    • The Korean Journal of Applied Statistics / v.22 no.5 / pp.1073-1084 / 2009
  • For ordinal categorical $R \times R$ tables, a weighted measure of association, $H_{\omega}$, was proposed, and its maximum likelihood estimator and asymptotic variance were derived. We redefined the last paradox of $\kappa$ and proved its relation to the marginal distributions. We also introduced a new paradox of $\kappa$ and summarized the general relationships between $\kappa$ and the marginal distributions.
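
The dependence of $\kappa$ on the marginal distributions, which underlies the paradoxes discussed above, is easy to see numerically: two tables with identical observed agreement but different marginals can give very different $\kappa$ values. The sketch below computes plain Cohen's $\kappa$ (the weighted measure $H_{\omega}$ is not reproduced) on two hypothetical 2 x 2 tables.

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's kappa for a square R x R agreement table of counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                  # observed agreement
    row, col = table.sum(axis=1) / n, table.sum(axis=0) / n
    p_exp = np.sum(row * col)                    # chance agreement from the marginals
    return (p_obs - p_exp) / (1.0 - p_exp)

# Both tables have 90% observed agreement, but different marginal distributions.
balanced   = [[45, 5], [5, 45]]
unbalanced = [[85, 5], [5, 5]]
print(f"kappa, balanced marginals  : {cohen_kappa(balanced):.3f}")    # about 0.80
print(f"kappa, unbalanced marginals: {cohen_kappa(unbalanced):.3f}")  # about 0.44
```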

Design wind speed prediction suitable for different parent sample distributions

  • Zhao, Lin;Hu, Xiaonong;Ge, Yaojun
    • Wind and Structures / v.33 no.6 / pp.423-435 / 2021
  • Although existing algorithms can predict wind speed from historical observation data, for engineering feasibility most use moment methods and probability density functions to estimate the fitted parameters. However, the accuracy of extreme wind speed prediction for long-term return periods does not always depend on how the optimized frequency distribution curves are obtained; long-term return periods emphasize general distribution effects rather than the marginal distributions, which are closely related to potential extreme values. Moreover, there are different types of wind speed parent samples, and how to select the proper extreme value distribution theoretically is uncertain. The influence of different sampling time intervals has also not been evaluated in the fitting process. To overcome these shortcomings, updated steps are introduced, involving parameter sensitivity analysis for different sampling time intervals, and the extreme value prediction accuracy for unknown parent samples is discussed. Probability analysis of the mean wind is combined with estimation of the probability plot correlation coefficient and the maximum likelihood method, and an iterative estimation algorithm is proposed. With the updated steps and a comparison using Monte Carlo simulation, a fitting policy suitable for different parent distributions is proposed; its feasibility is demonstrated in extreme wind speed evaluations at the Longhua and Chuansha meteorological stations in Shanghai, China.
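
The basic return-period calculation behind design wind speed prediction can be sketched with a simple extreme value fit: estimate a Gumbel distribution for annual maxima by maximum likelihood and read off the $(1 - 1/T)$ quantile. The synthetic data and the choice of the Gumbel model below are illustrative only; the paper's sensitivity analysis, sampling-interval study, and iterative fitting policy are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical annual-maximum mean wind speeds (m/s), 40 years of synthetic data.
annual_max = rng.gumbel(loc=25.0, scale=4.0, size=40)

# Fit a Gumbel (extreme value type I) distribution by maximum likelihood.
loc_hat, scale_hat = stats.gumbel_r.fit(annual_max)

# Design wind speed for a T-year return period is the (1 - 1/T) quantile.
for T in (50, 100):
    v_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc_hat, scale=scale_hat)
    print(f"{T}-year design wind speed: {v_T:.1f} m/s")

# A probability-plot correlation coefficient, one of the goodness-of-fit measures
# mentioned in the abstract, can be read from scipy's probplot output.
(osm, osr), (slope, intercept, r) = stats.probplot(annual_max, dist=stats.gumbel_r)
print(f"probability-plot correlation coefficient: {r:.4f}")
```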

Joint Modeling of Death Times and Counts Considering a Marginal Frailty Model (공변량을 포함한 사망시간과 치료횟수의 모형화를 위한 주변환경효과모형의 적용)

  • Park, Hee-Chang;Park, Jin-Pyo
    • Journal of the Korean Data and Information Science Society / v.9 no.2 / pp.311-322 / 1998
  • In this paper, we consider the problem of modeling count data where the observation period is determined by the survival time of the individual under study. We assume a marginal frailty model for the counts, and that the death times follow a Weibull distribution with a rate that depends on covariates. For the counts, given the frailty, a Poisson process is assumed whose intensity depends on time and the covariates; a gamma model is assumed for the frailty. Maximum likelihood estimators of the model parameters are obtained. The model is applied to a data set of patients with breast cancer who received a bone marrow transplant: a model for the time to death and the number of supportive transfusions a patient received is constructed, and the consequences of the model are examined.

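As a simplified version of the joint model described above, the sketch below assumes Weibull death times, a constant count intensity, no covariates, and a mean-one gamma frailty acting only on the counts; integrating the frailty out of the Poisson counts then gives a negative binomial contribution, and the joint log-likelihood can be maximized numerically. All parameter values are hypothetical.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(7)

# Death times T ~ Weibull; given a gamma frailty Z, counts N ~ Poisson(Z * rate * T).
n = 300
shape_w, scale_w, rate, k_frailty = 1.5, 3.0, 0.8, 2.0
T = scale_w * rng.weibull(shape_w, n)
Z = rng.gamma(k_frailty, 1.0 / k_frailty, n)       # mean-one gamma frailty
N = rng.poisson(Z * rate * T)

def negloglik(theta):
    a, b, lam, k = np.exp(theta)                   # Weibull shape/scale, rate, frailty
    ll_T = stats.weibull_min.logpdf(T, a, scale=b)
    # Gamma frailty integrated out of the Poisson counts: negative binomial with
    # mean lam * T and dispersion k.
    mu = lam * T
    ll_N = stats.nbinom.logpmf(N, k, k / (k + mu))
    return -(ll_T + ll_N).sum()

res = optimize.minimize(negloglik, np.zeros(4), method="Nelder-Mead",
                        options={"maxiter": 4000})
print("ML estimates (Weibull shape, Weibull scale, rate, frailty dispersion):")
print(np.round(np.exp(res.x), 3))
```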

Large tests of independence in incomplete two-way contingency tables using fractional imputation

  • Kang, Shin-Soo;Larsen, Michael D.
    • Journal of the Korean Data and Information Science Society / v.26 no.4 / pp.971-984 / 2015
  • Imputation procedures fill in missing values, thereby enabling complete-data analyses. Fully efficient fractional imputation (FEFI) and multiple imputation (MI) create multiple versions of the missing observations, thereby reflecting uncertainty about their true values. Methods have been described for hypothesis testing with multiple imputation. Fractional imputation assigns weights to the observed data to compensate for missing values. The focus of this article is the development of tests of independence using FEFI for partially classified two-way contingency tables. Wald and deviance tests of independence under FEFI are proposed. Simulations are used to compare type I error rates and power. The partially observed marginal information is useful for estimating the joint distribution of cell probabilities, but not for testing association. FEFI compares favorably to other methods in the simulations.
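
The deviance and Pearson tests of independence referenced above reduce, for a fully classified table, to standard contingency-table tests; the sketch below computes both on a hypothetical complete 3 x 3 table with SciPy. The FEFI weighting of partially classified cells, which is the article's actual contribution, is not reproduced here.

```python
import numpy as np
from scipy import stats

# A hypothetical fully classified 3 x 3 table; under FEFI the observed counts would
# be augmented with fractional weights derived from the partially classified margins.
table = np.array([[30, 20, 10],
                  [15, 25, 20],
                  [10, 15, 35]])

# Pearson chi-square test of independence.
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
# Deviance (likelihood-ratio, G^2) test of independence.
g2, p_g2, _, _ = stats.chi2_contingency(table, lambda_="log-likelihood")

print(f"Pearson X^2  = {chi2:.2f}, df = {dof}, p = {p_chi2:.4f}")
print(f"Deviance G^2 = {g2:.2f}, df = {dof}, p = {p_g2:.4f}")
```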