• Title/Summary/Keyword: Log Likelihood Function

Search Results: 96

Bayesian and maximum likelihood estimations from exponentiated log-logistic distribution based on progressive type-II censoring under balanced loss functions

  • Chung, Younshik; Oh, Yeongju
    • Communications for Statistical Applications and Methods / v.28 no.5 / pp.425-445 / 2021
  • A generalization of the log-logistic (LL) distribution, called the exponentiated log-logistic (ELL) distribution, is considered along the lines of the exponentiated Weibull distribution. In this paper, based on progressively type-II censored samples, we derive the maximum likelihood estimators and Bayes estimators of the three parameters, the survival function, and the hazard function of the ELL distribution. Under the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions, the corresponding Bayes estimators are obtained using Lindley's approximation (see Jung and Chung, 2018; Lindley, 1980), the Tierney-Kadane approximation (see Tierney and Kadane, 1986), and Markov chain Monte Carlo methods (see Hastings, 1970; Gelfand and Smith, 1990). The Gelman and Rubin diagnostic (see Gelman and Rubin, 1992; Brooks and Gelman, 1997) is used to check the convergence of the MCMC chains. In simulation studies, the Bayes estimators are compared with the maximum likelihood estimators on the basis of their risks. The results support the conclusion that the ELL distribution is an efficient distribution for modeling survival data, and that Bayes estimators under various loss functions are useful for many estimation problems.
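
To illustrate the likelihood structure involved, the following is a minimal sketch of maximum likelihood estimation for the ELL shape parameter from a complete (uncensored) sample, not from progressively censored data as in the paper. The parameterization F(x) = G(x)^θ with log-logistic CDF G, and all numerical values, are illustrative assumptions; with the LL shape and scale held fixed, the likelihood equation in θ has a closed-form solution.

```python
import math
import random

def ll_cdf(x, beta, sigma):
    """Log-logistic CDF G(x) = 1 / (1 + (x/sigma)^(-beta))."""
    return 1.0 / (1.0 + (x / sigma) ** (-beta))

def ell_loglik(data, theta, beta, sigma):
    """Log-likelihood of the ELL model F(x) = G(x)^theta for a complete sample."""
    total = 0.0
    for x in data:
        g = ll_cdf(x, beta, sigma)
        # log-logistic density: (beta/sigma)(x/sigma)^(beta-1) / (1+(x/sigma)^beta)^2
        dens = (beta / sigma) * (x / sigma) ** (beta - 1) / (1 + (x / sigma) ** beta) ** 2
        total += math.log(theta) + math.log(dens) + (theta - 1) * math.log(g)
    return total

def theta_mle(data, beta, sigma):
    """With beta, sigma fixed, dL/dtheta = n/theta + sum(log G) = 0
    gives the closed-form conditional MLE theta = -n / sum(log G(x_i))."""
    s = sum(math.log(ll_cdf(x, beta, sigma)) for x in data)
    return -len(data) / s

# Simulate an ELL sample by inversion: F(x) = u => x = sigma*(v/(1-v))^(1/beta), v = u^(1/theta)
random.seed(1)
theta0, beta0, sigma0 = 2.0, 3.0, 1.5  # hypothetical true parameters
sample = []
for _ in range(2000):
    v = random.random() ** (1.0 / theta0)
    sample.append(sigma0 * (v / (1.0 - v)) ** (1.0 / beta0))

th = theta_mle(sample, beta0, sigma0)  # should land near theta0 = 2.0
```

The second derivative of the log-likelihood in θ is -n/θ², so the stationary point is indeed a maximum.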

Assessing the accuracy of the maximum likelihood estimator in logistic regression models (로지스틱 회귀모형에서 최우추정량의 정확도 산정)

  • 이기원; 손건태; 정윤식
    • The Korean Journal of Applied Statistics / v.6 no.2 / pp.393-399 / 1993
  • When we compute the maximum likelihood estimators of the parameters of the logistic regression model, which is useful in studying the relationship between a binary response variable and explanatory variables, the standard error calculations are usually based on the second derivative of the log-likelihood function. On the other hand, since the expectation of the cross-product of the first derivatives of the log-likelihood function also gives the Fisher information, an estimator of the Fisher information based on this fact is expected to have similar asymptotic properties. These estimators of the Fisher information are closely related to the iterative algorithms used to obtain the maximum likelihood estimator. The average numbers of iterations needed to reach the maximum likelihood estimator are compared to find out which method is more efficient, and the variance estimators from each method are compared as estimators of the asymptotic variance.
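
The two Fisher information estimators contrasted in the abstract can be sketched for a one-parameter logistic model: one sums minus the second derivatives of the per-observation log-likelihoods, the other sums the squared per-observation scores. The single-slope model, simulated data, and Newton-Raphson loop below are illustrative assumptions, not the paper's setup.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, iters=25):
    """Newton-Raphson MLE for the single-slope model P(y=1|x) = sigmoid(b*x)."""
    b = 0.0
    for _ in range(iters):
        score = sum((y - sigmoid(b * x)) * x for x, y in zip(xs, ys))
        info = sum(sigmoid(b * x) * (1 - sigmoid(b * x)) * x * x for x in xs)
        b += score / info
    return b

def hessian_info(xs, b):
    """Observed information: minus the second derivative of the log-likelihood."""
    return sum(sigmoid(b * x) * (1 - sigmoid(b * x)) * x * x for x in xs)

def opg_info(xs, ys, b):
    """Outer-product-of-gradients estimate: sum of squared per-observation scores."""
    return sum(((y - sigmoid(b * x)) * x) ** 2 for x, y in zip(xs, ys))

random.seed(7)
b_true = 0.8  # hypothetical true slope
xs = [random.gauss(0, 1) for _ in range(3000)]
ys = [1 if random.random() < sigmoid(b_true * x) else 0 for x in xs]
b_hat = fit_logistic(xs, ys)
# The two information estimates should agree closely at the MLE for large n.
```

Either estimate yields a standard error via se = 1/sqrt(information), which is the comparison the paper carries out.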


Estimation for the extreme value distribution under progressive Type-I interval censoring

  • Nam, Sol-Ji; Kang, Suk-Bok
    • Journal of the Korean Data and Information Science Society / v.25 no.3 / pp.643-653 / 2014
  • In this paper, we propose some estimators for the extreme value distribution based on the interval method and the mid-point approximation method from progressively Type-I interval censored samples. Because the log-likelihood function is non-linear, we use a Taylor series expansion to derive approximate likelihood equations. We compare the proposed estimators in terms of mean squared error using Monte Carlo simulation.
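
The Taylor-expansion idea can be sketched with Newton-Raphson iteration, which repeatedly solves the likelihood equation after linearizing the score by a first-order Taylor expansion. As an assumed special case (complete data, Gumbel-for-maxima with known scale, not the paper's interval-censored setting), the same likelihood equation also has an exact solution, so the two can be compared.

```python
import math
import random

def score_mu(data, mu, sigma):
    """Score dL/dmu for the Gumbel(max) model with known scale sigma."""
    return sum((1.0 - math.exp(-(x - mu) / sigma)) / sigma for x in data)

def score_mu_deriv(data, mu, sigma):
    """Second derivative d2L/dmu2 = -(1/sigma^2) * sum(exp(-z))."""
    return -sum(math.exp(-(x - mu) / sigma) for x in data) / sigma ** 2

def newton_mu(data, sigma, mu=0.0, iters=30):
    """Linearize the score about the current iterate (first-order Taylor
    expansion) and solve the resulting linear equation, repeatedly."""
    for _ in range(iters):
        mu -= score_mu(data, mu, sigma) / score_mu_deriv(data, mu, sigma)
    return mu

random.seed(3)
sigma0, mu0 = 1.0, 2.0  # hypothetical true parameters
# Gumbel(max) sampling by inversion: x = mu - sigma*log(-log(U))
data = [mu0 - sigma0 * math.log(-math.log(random.random())) for _ in range(1000)]

mu_newton = newton_mu(data, sigma0)
# Exact solution of the same likelihood equation, for comparison:
mu_closed = sigma0 * math.log(len(data) / sum(math.exp(-x / sigma0) for x in data))
```

The iterates converge to the exact root, which is the point of linearizing the score: each step solves a tractable approximate likelihood equation.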

Maximum Likelihood Estimation of Continuous-time Diffusion Models for Exchange Rates

  • Choi, Seungmoon; Lee, Jaebum
    • East Asian Economic Review / v.24 no.1 / pp.61-87 / 2020
  • Five diffusion models are estimated using three different foreign exchange rates to find an appropriate model for each. Daily spot exchange rates expressed as the prices of 1 euro, 1 British pound, and 100 Japanese yen in US dollars, respectively denoted by USD/EUR, USD/GBP, and USD/100JPY, are used. The maximum likelihood estimation method is implemented after deriving an approximate log-transition density function (log-TDF) of the diffusion processes, because the true log-TDF is unknown. Of the five models, the most general model is the best fit for the USD/GBP and USD/100JPY exchange rates, but not for USD/EUR. Although we could not find any evidence of a mean-reverting property for the USD/EUR exchange rate, the USD/GBP and USD/100JPY exchange rates show mean-reversion behavior. Interestingly, the volatility function of the USD/EUR exchange rate is increasing in the exchange rate, while the volatility functions of the USD/GBP and USD/100JPY exchange rates have a U-shape. Our results reveal that care has to be taken when determining a diffusion model for an exchange rate. They also imply that we may have to use a more general diffusion model than those proposed in the literature when developing economic theories for the behavior of exchange rates and pricing foreign currency options or derivatives.
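
The paper derives a refined approximate log-transition density; as a much simpler stand-in, the sketch below evaluates the Euler (Gaussian) approximate log-likelihood for a Vasicek-type mean-reverting model with hypothetical parameters on simulated data. It illustrates why an approximate transition density is needed at all: the exact one is generally unknown.

```python
import math
import random

def euler_loglik(path, dt, kappa, theta, sigma):
    """Approximate log-likelihood of dX = kappa*(theta - X) dt + sigma dW
    using the Euler transition density:
    X_{t+dt} | X_t ~ N(X_t + kappa*(theta - X_t)*dt, sigma^2 * dt)."""
    var = sigma * sigma * dt
    ll = 0.0
    for x0, x1 in zip(path, path[1:]):
        mean = x0 + kappa * (theta - x0) * dt
        ll += -0.5 * math.log(2 * math.pi * var) - (x1 - mean) ** 2 / (2 * var)
    return ll

random.seed(11)
kappa0, theta0, sigma0, dt = 0.5, 1.0, 0.2, 1 / 252  # hypothetical daily data
path = [1.0]
for _ in range(5000):
    x = path[-1]
    path.append(x + kappa0 * (theta0 - x) * dt + sigma0 * math.sqrt(dt) * random.gauss(0, 1))

# The approximate log-likelihood should peak near the simulating parameters.
ll_true = euler_loglik(path, dt, kappa0, theta0, sigma0)
```

For daily data the Euler approximation is often adequate for the diffusion coefficient; the papers cited in this literature refine it precisely because drift parameters are harder to pin down at short horizons.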

A Comparative Study of Software Reliability Model Considering Log Type Mean Value Function (로그형 평균값함수를 고려한 소프트웨어 신뢰성모형에 대한 비교연구)

  • Shin, Hyun Cheul; Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.4 / pp.19-27 / 2014
  • Software reliability is an important issue in the software development process, and software process improvement helps to finish with a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper proposes reliability models with log-type mean value functions (the Musa-Okumoto and log-power models), which are efficient in software reliability applications. The parameters were estimated by maximum likelihood using the bisection method, and model selection was based on the mean squared error (MSE) and the coefficient of determination ($R^2$). Failure analysis was carried out on a real data set to assess the proposed log-type mean value functions, and the Laplace trend test was employed to check the reliability of the data. The study confirms that the log-type models are also efficient in terms of reliability (with a coefficient of determination of 70% or more) and can be used as alternatives to conventional models. Software developers should therefore use prior knowledge of the software when choosing a growth model, to help identify failure modes.
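
The model-selection criteria mentioned here are easy to make concrete. The sketch below evaluates a Musa-Okumoto (log-type) mean value function against hypothetical cumulative failure counts and computes MSE and $R^2$; a crude least-squares grid search stands in for the paper's maximum likelihood estimation with the bisection method, and the data are invented for illustration.

```python
import math

def musa_okumoto(t, b0, b1):
    """Log-type (Musa-Okumoto) mean value function: mu(t) = b0 * ln(1 + b1*t)."""
    return b0 * math.log(1.0 + b1 * t)

def mse_r2(times, counts, b0, b1):
    """MSE and coefficient of determination of the fitted mean value
    function against observed cumulative failure counts."""
    fitted = [musa_okumoto(t, b0, b1) for t in times]
    sse = sum((c - f) ** 2 for c, f in zip(counts, fitted))
    mean_c = sum(counts) / len(counts)
    sst = sum((c - mean_c) ** 2 for c in counts)
    return sse / len(counts), 1.0 - sse / sst

# Hypothetical failure data: cumulative failures observed at t = 1..15
times = list(range(1, 16))
counts = [4, 7, 10, 11, 13, 15, 16, 17, 19, 19, 20, 21, 22, 22, 23]

# Crude grid search for least-squares (b0, b1); the paper itself fits by
# maximum likelihood with the bisection method.
best = min(((b0 / 10.0, b1 / 10.0)
            for b0 in range(50, 200) for b1 in range(1, 50)),
           key=lambda p: mse_r2(times, counts, p[0], p[1])[0])
mse, r2 = mse_r2(times, counts, *best)
```

An $R^2$ above the paper's 70% threshold would qualify the log-type model as a usable alternative for this data set.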

Butterfly Log-MAP Decoding Algorithm

  • Hou, Jia; Lee, Moon Ho; Kim, Chang Joo
    • Journal of Communications and Networks / v.6 no.3 / pp.209-215 / 2004
  • In this paper, a butterfly Log-MAP decoding algorithm for turbo codes is proposed. Unlike the conventional turbo decoder, we derive a generalized formula to calculate the log-likelihood ratio (LLR) and draw a modified butterfly state diagram for an 8-state systematic turbo coded system. Compared with conventional implementations, the proposed algorithm efficiently reduces both the computations and the work units without bit error rate (BER) performance degradation.
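
The core operation that distinguishes Log-MAP from Max-Log-MAP decoding is the Jacobian logarithm (the max* function), a standard identity rather than the paper's butterfly algorithm itself. A minimal sketch:

```python
import math
from functools import reduce

def max_star(a, b):
    """Jacobian logarithm used in Log-MAP decoding:
    ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a - b|})."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP drops the correction term, trading accuracy for complexity."""
    return max(a, b)

# Bit LLR from two sets of log-domain path metrics (hypothetical values):
metrics_b1 = [-0.2, -1.7]  # paths through the trellis with bit = 1
metrics_b0 = [-0.9, -2.4]  # paths through the trellis with bit = 0
llr = reduce(max_star, metrics_b1) - reduce(max_star, metrics_b0)
```

The correction term ln(1 + e^{-|a-b|}) is bounded by ln 2, which is why Max-Log-MAP loses only a small amount of BER performance while avoiding the transcendental evaluation.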

Mixed Effects Kernel Binomial Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / v.19 no.4 / pp.1327-1334 / 2008
  • Mixed-effects binomial regression models are widely used for the analysis of correlated count data in which each response records which of two possible disjoint outcomes occurred over a series of trials. In this paper, we consider kernel extensions with nonparametric fixed effects and parametric random effects. Estimation is carried out by the penalized likelihood method based on the kernel trick, and our focus is on efficient computation and effective hyperparameter selection. Cross-validation techniques are employed for the selection of hyperparameters. Examples illustrating the usage and features of the proposed method are provided.
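
Ignoring the random-effects part, the penalized-likelihood kernel idea can be sketched as a kernel logistic fit: the latent function is expanded over kernel evaluations at the data points and the penalized binomial log-likelihood is maximized. The RBF kernel, toy data, and simple ascent loop are illustrative assumptions, not the paper's algorithm.

```python
import math

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel on scalar inputs."""
    return math.exp(-gamma * (x - z) ** 2)

def fit_kernel_logistic(xs, ys, lam=0.05, lr=0.1, iters=2000, gamma=1.0):
    """Penalized-likelihood fit of f(x) = sum_j a_j K(x, x_j) for binary y.
    Objective: sum_i [y_i*f_i - log(1 + e^{f_i})] - (lam/2) * a'Ka.
    The update direction (y - p - lam*a) is the gradient preconditioned by
    K^{-1}; it is an ascent direction since K is positive semi-definite."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) for j in range(n)] for i in range(n)]
    a = [0.0] * n
    for _ in range(iters):
        f = [sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
        p = [1.0 / (1.0 + math.exp(-fi)) for fi in f]
        a = [a[i] + lr * (ys[i] - p[i] - lam * a[i]) for i in range(n)]
    return a

def predict_prob(x, xs, a, gamma=1.0):
    """Predicted success probability at a new input x."""
    f = sum(aj * rbf(x, xj, gamma) for aj, xj in zip(a, xs))
    return 1.0 / (1.0 + math.exp(-f))

# Hypothetical toy data: negative inputs are class 0, positive inputs class 1
xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
alpha = fit_kernel_logistic(xs, ys)
```

The hyperparameters `lam` and `gamma` are exactly the quantities the paper selects by cross-validation.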


A Study of Log-Fourier Deconvolution

  • Ja Yong Koo; Hyun Suk Park
    • Communications for Statistical Applications and Methods / v.4 no.3 / pp.833-845 / 1997
  • A Fourier expansion is considered for the deconvolution problem of estimating a probability density function when the sample observations are contaminated with random noise. In the log-Fourier method of density estimation for data without noise, the logarithm of the unknown density function is approximated by a trigonometric series whose unknown parameters are estimated by maximum likelihood. The log-Fourier density estimation method, studied theoretically by Koo and Chung (1997), is examined here in the finite-sample case with noise. Numerical examples using simulated data are given to show the performance of log-Fourier deconvolution.
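
The noiseless log-Fourier idea can be sketched with a one-term model on the circle: approximate log f(x) = θ·cos(x) - log C(θ) and fit θ by maximum likelihood, computing the normalizer numerically. The single cosine term, the numeric integration, the bisection solver, and the von Mises test data are all illustrative assumptions; the deconvolution step for noisy data is omitted.

```python
import math
import random

def log_c(theta, m=2000):
    """log of the normalizer C(theta) = integral of exp(theta*cos x) over
    [-pi, pi], by a Riemann sum (spectrally accurate for periodic integrands)."""
    h = 2 * math.pi / m
    s = sum(math.exp(theta * math.cos(-math.pi + k * h)) for k in range(m))
    return math.log(s * h)

def loglik(data, theta):
    """Log-likelihood of the one-term model log f(x) = theta*cos(x) - log C(theta)."""
    return theta * sum(math.cos(x) for x in data) - len(data) * log_c(theta)

def mean_cos_model(theta, m=2000):
    """Model expectation E_theta[cos X], computed numerically."""
    h = 2 * math.pi / m
    num = den = 0.0
    for k in range(m):
        x = -math.pi + k * h
        w = math.exp(theta * math.cos(x))
        num += math.cos(x) * w
        den += w
    return num / den

def fit_theta(data, lo=0.0, hi=10.0, iters=60):
    """ML fit by bisection on the score: E_theta[cos X] = sample mean of cos."""
    target = sum(math.cos(x) for x in data) / len(data)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_cos_model(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

random.seed(5)
# This one-term model coincides with von Mises(0, theta), so stdlib sampling applies.
sample = [random.vonmisesvariate(0.0, 1.5) for _ in range(2000)]
theta_hat = fit_theta(sample)  # should land near 1.5
```

Adding more sine/cosine terms to the exponent gives the general log-Fourier expansion the paper works with.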


General Log-Likelihood Ratio Expression and Its Implementation Algorithm for Gray-Coded QAM Signals

  • Kim, Ki-Seol; Hyun, Kwang-Min; Yu, Chang-Wahn; Park, Youn-Ok; Yoon, Dong-Weon; Park, Sang-Kyu
    • ETRI Journal / v.28 no.3 / pp.291-300 / 2006
  • A simple and general bit log-likelihood ratio (LLR) expression is provided for Gray-coded rectangular quadrature amplitude modulation (R-QAM) signals. Characteristics of the Gray code mapping, such as symmetries and the repeated format of the bit assignment among bit groups within a symbol, are exploited to simplify the LLR expression. To reduce the complexity of the max-log-MAP algorithm for LLR calculation, we replace the max or min functions of the conventional LLR expression with simple arithmetic functions. In addition, we propose an implementation algorithm for this expression. Because the proposed expression is very simple and is built from a few parameters reflecting the Gray code mapping, it can easily be implemented, providing an efficient symbol de-mapping structure for various wireless applications.
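
The conventional expressions being simplified can be sketched for one dimension of an R-QAM constellation, i.e. Gray-coded 4-PAM: the exact bit LLR sums over all symbols in each bit set, and the max-log version keeps only the nearest symbol in each set. These are the standard definitions, not the paper's final simplified expression.

```python
import math

# Gray mapping for 4-PAM (one QAM dimension): bit pair (b1, b0) -> amplitude
GRAY_4PAM = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}

def exact_llr(y, bit_idx, noise_var):
    """Exact bit LLR = ln( sum over symbols with bit=0 / sum with bit=1 )
    for a received value y in AWGN with the given noise variance."""
    num = sum(math.exp(-(y - s) ** 2 / (2 * noise_var))
              for bits, s in GRAY_4PAM.items() if bits[bit_idx] == 0)
    den = sum(math.exp(-(y - s) ** 2 / (2 * noise_var))
              for bits, s in GRAY_4PAM.items() if bits[bit_idx] == 1)
    return math.log(num / den)

def maxlog_llr(y, bit_idx, noise_var):
    """Max-log simplification: keep only the nearest symbol in each bit set."""
    d0 = min((y - s) ** 2 for bits, s in GRAY_4PAM.items() if bits[bit_idx] == 0)
    d1 = min((y - s) ** 2 for bits, s in GRAY_4PAM.items() if bits[bit_idx] == 1)
    return (d1 - d0) / (2 * noise_var)
```

Because Gray neighbors differ in one bit, the max-log LLR becomes piecewise linear in y, which is what makes the paper's parameterized, arithmetic-only implementation possible.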


Mutual Information and Redundancy for Categorical Data

  • Hong, Chong-Sun; Kim, Beom-Jun
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.297-307 / 2006
  • Most methods for describing the relationship among random variables require specific probability distributions and assumptions about the random variables. The mutual information, based on entropy, measures the dependency among random variables without any such assumptions, and the redundancy, an analogous version of the mutual information, has also been proposed. In this paper, the redundancy and mutual information are explored for multi-dimensional categorical data. It is found that the redundancy for categorical data can be expressed as a function of the generalized likelihood ratio statistic under several kinds of independence log-linear models, so the redundancy can also be used to analyze contingency tables. Whereas the generalized likelihood ratio statistic for testing the goodness-of-fit of log-linear models is sensitive to the sample size, the redundancy for categorical data depends not on the sample size but only on the cell probabilities themselves.
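
The link between mutual information and the likelihood ratio statistic can be made concrete for a two-way table: the independence statistic satisfies G² = 2N · MI, where MI is the mutual information (in nats) of the empirical joint distribution. The 2x2 table below is hypothetical.

```python
import math

def mutual_information(table):
    """Mutual information (nats) of the empirical joint distribution
    of a two-way contingency table (list of rows of counts)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    mi = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            if o > 0:
                p = o / n
                # p_ij * log( p_ij / (p_i. * p_.j) )
                mi += p * math.log(p * n * n / (row_tot[i] * col_tot[j]))
    return mi

def g_squared(table):
    """Likelihood-ratio statistic G^2 = 2 * sum O * ln(O/E) for independence."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    g2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            if o > 0:
                e = row_tot[i] * col_tot[j] / n
                g2 += 2.0 * o * math.log(o / e)
    return g2

table = [[30, 10], [20, 40]]  # hypothetical 2x2 contingency table, N = 100
```

The factor N is exactly the sample-size sensitivity the abstract points to: MI (and hence the redundancy) stays fixed when every cell count is scaled up, while G² grows linearly with N.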