• Title/Summary/Keyword: Maximum likelihood analysis

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.4
    • /
    • pp.325-337
    • /
    • 2017
  • In this paper, we propose a new four-parameter long-term lifetime distribution, the Weibull-Poisson long-term distribution, set in a competing-risks scenario and accommodating decreasing, increasing, and unimodal hazard rate functions. The new distribution arises from a scenario of latent competing risks, in which the lifetime associated with each particular risk is not observable and only the minimum lifetime among all risks is observed, in a long-term (cure fraction) context. It can, however, be used in any other situation in which it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases of the new model. We discuss the properties of the proposed distribution, including its probability density, survival, and hazard functions, and give explicit algebraic formulas for its order statistics. Assuming censored data, we adopt the maximum likelihood approach for parameter estimation. Simulation studies over different parameter settings, sample sizes, and censoring percentages examine the mean squared error of the maximum likelihood estimates and compare the proposed model with its particular cases. The Akaike information criterion, the Bayesian information criterion, and the likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases to demonstrate its potential and competitiveness.
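
The following minimal sketch illustrates the estimation setting described in the abstract: maximum likelihood fitting of a long-term (cure fraction) survival model with Weibull latent lifetimes under right censoring. It uses the promotion-time form S_pop(t) = exp(-theta F_W(t)), which arises from a Poisson number of latent competing risks, as a stand-in for the paper's four-parameter model; the data and starting values are hypothetical.

```python
# Sketch: ML estimation of a Poisson-competing-risks (long-term) survival
# model with Weibull latent lifetimes under right censoring. Illustrative
# stand-in for the paper's model, not its exact parameterization.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, delta):
    """params = (log theta, log shape, log scale); delta = 1 if failure observed."""
    theta, k, lam = np.exp(params)                  # positivity via log-parameterization
    F = 1.0 - np.exp(-(t / lam) ** k)               # Weibull CDF
    f = (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)  # Weibull pdf
    log_S_pop = -theta * F                          # population survival; cure fraction exp(-theta)
    log_f_pop = np.log(theta) + np.log(f) + log_S_pop
    return -np.sum(delta * log_f_pop + (1.0 - delta) * log_S_pop)

rng = np.random.default_rng(0)
t = 2.0 * rng.weibull(1.5, 200)                     # toy lifetimes
delta = (rng.random(200) < 0.7).astype(float)       # roughly 30% censoring
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(t, delta), method="Nelder-Mead")
print(np.exp(fit.x))                                # estimates of theta, shape, scale
```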

Development of an Inversion Analysis Technique for Downhole Testing and Continuous Seismic CPT

  • Joh, Sung-Ho;Mok, Young-Jin
    • Geotechnical Engineering
    • /
    • v.14 no.3
    • /
    • pp.95-108
    • /
    • 1998
  • Downhole testing and the seismic CPT (SCPT) have been widely used to evaluate stiffness profiles of the subgrade. Their advantages, such as low cost, easy operation, and a simple seismic source, have led to their frequent adoption in site investigation. For the automated analysis of downhole testing and SCPT, the concept of interval measurements has been practiced. In this paper, a new inversion procedure to deal with the interval measurements for automated downhole testing and SCPT (including a newly developed continuous SCPT) is proposed. The forward modeling in the new inversion procedure incorporates ray path theory based on Snell's law. The formulation for the inversion analysis is derived from the maximum likelihood approach, which maximizes the likelihood of obtaining a particular travel time from a source to a receiver. The new inversion procedure was verified with numerical simulations of SCPT using synthesized profiles. The results of the inversion analyses performed on the synthetic data show that the new inversion analysis is a valid procedure that enhances the Vs profiles determined by downhole testing and SCPT.
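
As a rough illustration of the inversion idea, the sketch below recovers layer slownesses from downhole first-arrival times: with Gaussian travel-time noise, maximum likelihood reduces to least squares on the linear travel-time equations. It assumes near-vertical straight rays and hypothetical measurements; the paper's forward model instead traces refracted ray paths via Snell's law.

```python
# Sketch: interval-velocity inversion from downhole travel times under a
# straight-ray assumption. Gaussian noise makes ML equivalent to least squares.
import numpy as np

depths = np.array([2.0, 4.0, 6.0, 8.0, 10.0])            # receiver depths (m), hypothetical
t_obs = np.array([0.013, 0.024, 0.038, 0.049, 0.060])    # first-arrival times (s), hypothetical

# Travel time = sum over layers of (path length within layer) * (layer slowness).
layer_tops = np.concatenate(([0.0], depths[:-1]))
thickness = depths - layer_tops
A = np.minimum(np.maximum(depths[:, None] - layer_tops[None, :], 0.0),
               thickness[None, :])                       # path length in each layer

slowness, *_ = np.linalg.lstsq(A, t_obs, rcond=None)     # Gaussian-ML = least squares
print("layer velocities (m/s):", 1.0 / slowness)
```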

Development of a Classification Method for Remote Sensing Digital Images Using Canonical Correlation Analysis (An Integrated Algorithm of Unsupervised Classification and Canonical Correlation Analysis)

  • Kim, Yong-Il;Kim, Dong-Hyun;Park, Min-Ho
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.4 no.2 s.8
    • /
    • pp.181-193
    • /
    • 1996
  • This paper proposes a new technique for land cover classification that applies a digital image pre-classified by an unsupervised technique (clustering) to Canonical Correlation Analysis (CCA). Compared with maximum likelihood classification, the proposed technique offers greater flexibility in selecting training areas: the chosen position of the training areas has little effect on the classification results. The land cover assigned to each cluster by CCA after clustering can also serve as prior information for maximum likelihood classification. When the same training areas are used, the classification accuracy of CCA after cluster analysis is better than that of maximum likelihood classification. The proposed technique is therefore suitable for practical use and can play an important role in the construction of GIS databases.
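
For reference, here is a minimal sketch of the Gaussian maximum likelihood classifier used as the baseline in the abstract: each land-cover class is modeled as a multivariate normal fitted to its training pixels, and each pixel is assigned to the class with the highest log-likelihood. The band values and class labels below are synthetic stand-ins.

```python
# Sketch: Gaussian maximum likelihood classification of multiband pixels.
import numpy as np
from scipy.stats import multivariate_normal

def fit_ml_classifier(X_train, y_train):
    """Fit one multivariate normal per class from its training pixels."""
    return {c: multivariate_normal(X_train[y_train == c].mean(axis=0),
                                   np.cov(X_train[y_train == c], rowvar=False))
            for c in np.unique(y_train)}

def classify(models, X):
    """Assign each pixel to the class with the highest log-likelihood."""
    classes = list(models)
    ll = np.column_stack([models[c].logpdf(X) for c in classes])
    return np.array(classes)[np.argmax(ll, axis=1)]

rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(m, 5.0, (50, 4)) for m in (60, 120, 180)])  # 4 bands
y_train = np.repeat([0, 1, 2], 50)              # e.g. water / vegetation / urban
models = fit_ml_classifier(X_train, y_train)
print(classify(models, rng.normal(120, 5.0, (5, 4))))   # mostly class 1
```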

Performance Analysis of Multi-Preambles Using Gold Codes in a WBAN System

  • Oh, Jun-Seok;Ryu, Seung-Moon;Eun, Chang-Soo
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.8
    • /
    • pp.32-41
    • /
    • 2010
  • We propose the use of multi-preambles based on Gold codes and analyze their performance. A multi-preamble scheme uses different codes for the preamble according to operation modes or applications in a system. The receiver can be implemented easily using the maximum likelihood algorithm, and the performance is robust against noise thanks to the good correlation characteristics of Gold codes. We use 128-bit multi-preambles generated from 127-bit Gold codes to derive the detection error probability and verify its validity through computer simulation. The results show that theory and experiment agree well within the approximation error.
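
A minimal sketch of the detection rule the abstract relies on: for equiprobable, equal-energy preamble codes in additive white Gaussian noise, maximum likelihood detection reduces to choosing the candidate code with the largest correlation against the received samples. Random ±1 sequences stand in here for true Gold codes, which are generated from a preferred pair of LFSRs.

```python
# Sketch: ML multi-preamble detection by maximum correlation.
import numpy as np

rng = np.random.default_rng(2)
codes = rng.choice([-1.0, 1.0], size=(4, 127))   # 4 candidate 127-chip preambles

def ml_detect(received, codes):
    """Return the index of the code with the largest correlation metric."""
    return int(np.argmax(codes @ received))

sent = 2
received = codes[sent] + rng.normal(0.0, 1.0, 127)   # AWGN channel
print(ml_detect(received, codes) == sent)            # True with high probability
```

Because all candidate codes here have equal energy, the correlation metric alone is the exact ML statistic; unequal-energy codes would need an energy correction term.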

A Study of Estimation Method for Auto-Regressive Model with Non-Normal Error and Its Prediction Accuracy

  • Lim, Bo Mi;Park, Cheong-Sool;Kim, Jun Seok;Kim, Sung-Shick;Baek, Jun-Geol
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.39 no.2
    • /
    • pp.109-118
    • /
    • 2013
  • We propose a method for estimating the coefficients of an AR (autoregressive) model, named MLPAR (Maximum Likelihood of the Pearson system for Auto-Regressive models). Existing methods for estimating AR coefficients assume that the residual (error) term of the model follows a normal distribution. In practice, the error of an AR model often does not follow the normal distribution, and the normality assumption then degrades the prediction accuracy of the model. The MLPAR drops this assumption: it estimates the coefficients of the autoregressive model and the distribution moments of the residual by using the Pearson distribution system and maximum likelihood estimation. Comparison with the conventional AR model verifies the improved prediction accuracy of the MLPAR.
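
A minimal sketch of the conditional-likelihood setup under a non-normal error law, with a Student-t error (Pearson type VII) standing in for the full Pearson system fitted in the paper; the simulated data, lag order, and starting values are hypothetical.

```python
# Sketch: conditional ML estimation of an AR(p) model with Student-t errors.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def neg_log_lik(params, y, p):
    phi = params[:p]                                 # AR coefficients
    log_scale, log_df = params[p], params[p + 1]
    resid = y[p:] - sum(phi[i] * y[p - 1 - i:len(y) - 1 - i] for i in range(p))
    return -student_t.logpdf(resid, df=np.exp(log_df),
                             scale=np.exp(log_scale)).sum()

rng = np.random.default_rng(3)
n, p = 500, 2
y = np.zeros(n)
for k in range(p, n):                                # AR(2) with t(4) noise
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 2] + student_t.rvs(df=4, random_state=rng)
fit = minimize(neg_log_lik, x0=np.array([0.1, 0.0, 0.0, np.log(4.0)]),
               args=(y, p), method="Nelder-Mead")
print(fit.x[:p])                                     # close to (0.5, -0.3)
```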

A Missing Data Imputation by Combining K Nearest Neighbor with Maximum Likelihood Estimation for Numerical Software Project Data

  • Lee, Dong-Ho;Yoon, Kyung-A;Bae, Doo-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.4
    • /
    • pp.273-282
    • /
    • 2009
  • Missing data is a common problem in building analysis or prediction models from software project data. Imputation methods are known to handle missing data more effectively than deletion methods in small software project datasets. While K nearest neighbor (KNN) imputation is a suitable method for software project data, it cannot use the non-missing information of incomplete project instances. In this paper, we propose an approach to missing data imputation for numerical software project data that combines K nearest neighbor imputation with maximum likelihood estimation; we also extend the average absolute error measure by normalization for accurate evaluation. Our approach overcomes the limitation of K nearest neighbor imputation and outperforms it on our real datasets.
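
A minimal sketch of one way to combine the two ingredients: KNN fills the gaps first, a multivariate normal is then fitted by maximum likelihood to the completed data, and each missing entry is replaced by its conditional mean given the observed entries of its row. The paper's exact combination scheme may differ; this only illustrates the mechanics.

```python
# Sketch: KNN imputation refined by a conditional mean from an ML-fitted
# multivariate normal.
import numpy as np
from sklearn.impute import KNNImputer

def knn_ml_impute(X, k=3):
    X_knn = KNNImputer(n_neighbors=k).fit_transform(X)
    mu = X_knn.mean(axis=0)
    Sigma = np.cov(X_knn, rowvar=False, bias=True)   # ML covariance estimate
    X_out = X_knn.copy()
    for i, row in enumerate(X):
        m = np.isnan(row)                            # missing positions in this row
        if m.any() and (~m).any():
            S_oo = Sigma[np.ix_(~m, ~m)]             # observed-observed block
            S_mo = Sigma[np.ix_(m, ~m)]              # missing-observed block
            X_out[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, row[~m] - mu[~m])
    return X_out

X = np.array([[1.0, 2.0, 3.0], [2.0, np.nan, 4.0],
              [3.0, 4.0, np.nan], [4.0, 5.0, 7.0]])  # toy project data
print(knn_ml_impute(X, k=2))
```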

Maximum Likelihood-based Automatic Lexicon Generation for AI Assistant-based Interaction with Mobile Devices

  • Lee, Donghyun;Park, Jae-Hyun;Kim, Kwang-Ho;Park, Jeong-Sik;Kim, Ji-Hwan;Jang, Gil-Jin;Park, Unsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.9
    • /
    • pp.4264-4279
    • /
    • 2017
  • In this paper, maximum likelihood-based automatic lexicon generation using mixed syllables is proposed for an unlimited-vocabulary voice interface for East Asian languages (e.g., Korean, Chinese, and Japanese) in AI-assistant-based interaction with mobile devices. A conventional lexicon has two inevitable problems: 1) tedious, repeated addition of out-of-lexicon units to the lexicon, and 2) the propagation of errors from morpheme analysis and space segmentation. The proposed method provides an automatic framework that solves both problems. It achieves overall accuracy similar to that of previous methods when a sentence contains one out-of-lexicon word, but yields superior results, with absolute word-accuracy improvements of 1.62%, 5.58%, and 10.09%, when the number of out-of-lexicon words in a sentence is two, three, and four, respectively.
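
As a toy illustration of likelihood-based unit selection, the sketch below segments a word into lexicon units by dynamic programming, choosing the split that maximizes the total unigram log-likelihood. The unit inventory and probabilities are invented for illustration; the paper's mixed-syllable scoring is considerably richer.

```python
# Sketch: maximum likelihood segmentation of a word into lexicon units.
import math

unit_logp = {"se": math.log(0.05), "oul": math.log(0.02),
             "seoul": math.log(0.04)}                # toy unit inventory

def best_segmentation(word):
    """best[i] holds the best (score, units) covering word[:i]."""
    n = len(word)
    best = [(0.0, [])] + [(-math.inf, None)] * n
    for i in range(1, n + 1):
        for j in range(i):
            unit = word[j:i]
            if unit in unit_logp and best[j][1] is not None:
                score = best[j][0] + unit_logp[unit]
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [unit])
    return best[n]

print(best_segmentation("seoul"))   # prefers the single unit "seoul" over "se"+"oul"
```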

Performance Comparison of the MLE Technique with the POF (Pencil of Functions) Method for SEM Parameter Estimation

  • Kim, Deok-Nyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.4
    • /
    • pp.511-516
    • /
    • 1994
  • Parameter estimation techniques are discussed for the complex-frequency analysis of an electromagnetic scatterer. The paper shows how the maximum likelihood estimation technique can be applied for this purpose. Experiments on hypothetical data sets demonstrate that the maximum likelihood technique outperforms the Pencil of Functions technique. Although several techniques, including MLE, have been suggested for this parameter estimation problem, the proposed method has strong advantages in noise-contaminated sample data environments because it uses a system matrix of minimal dimension, independent of the length of the extracted data set.
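
In the SEM setting, the late-time response is a sum of damped sinusoids, and under white Gaussian noise maximum likelihood pole estimation reduces to nonlinear least squares on the sampled waveform. The sketch below fits a single-pole model to synthetic data; the paper treats the general multi-pole case.

```python
# Sketch: ML estimation of a damped sinusoid's parameters via nonlinear
# least squares (equivalent to ML under white Gaussian noise).
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    amp, damp, omega, phase = p
    return amp * np.exp(-damp * t) * np.cos(omega * t + phase)

rng = np.random.default_rng(4)
t = np.linspace(0.0, 5.0, 200)
y = model([1.0, 0.4, 6.0, 0.3], t) + rng.normal(0.0, 0.05, t.size)  # noisy response

fit = least_squares(lambda p: model(p, t) - y, x0=[0.8, 0.2, 5.5, 0.0])
print(fit.x)   # close to (1.0, 0.4, 6.0, 0.3)
```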

Bayesian and maximum likelihood estimations from exponentiated log-logistic distribution based on progressive type-II censoring under balanced loss functions

  • Chung, Younshik;Oh, Yeongju
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.425-445
    • /
    • 2021
  • A generalization of the log-logistic (LL) distribution, called the exponentiated log-logistic (ELL) distribution, constructed along the lines of the exponentiated Weibull distribution, is considered. In this paper, based on progressive type-II censored samples, we derive the maximum likelihood estimators and Bayes estimators of the three parameters, the survival function, and the hazard function of the ELL distribution. Under the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions, the corresponding Bayes estimators are obtained using Lindley's approximation (see Jung and Chung, 2018; Lindley, 1980), the Tierney-Kadane approximation (see Tierney and Kadane, 1986), and Markov chain Monte Carlo methods (see Hastings, 1970; Gelfand and Smith, 1990). The Gelman and Rubin diagnostic (see Gelman and Rubin, 1992; Brooks and Gelman, 1997) was used to check the convergence of the MCMC chains. On the basis of their risks, the performances of the Bayes estimators are compared with those of the maximum likelihood estimators in simulation studies. The study supports the conclusion that the ELL distribution is an effective model for survival data, and that Bayes estimators under various loss functions are useful for many estimation problems.
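
A minimal sketch of the maximum likelihood side of the paper: the ELL negative log-likelihood under progressive type-II censoring, where the likelihood is proportional to the product over failures of f(x_i) S(x_i)^R_i and R_i units are withdrawn at the i-th failure. The parameterization F(x) = [(x/s)^b / (1 + (x/s)^b)]^a and the toy data are assumptions.

```python
# Sketch: ML estimation for the exponentiated log-logistic distribution
# under progressive type-II censoring.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, R):
    a, b, s = np.exp(params)                       # positivity via log-parameterization
    z = (x / s) ** b
    F_ll = z / (1.0 + z)                           # log-logistic CDF
    log_f = (np.log(a) + (a - 1.0) * np.log(F_ll)  # ELL density: a * F_ll^(a-1) * f_ll
             + np.log(b / s) + (b - 1.0) * np.log(x / s) - 2.0 * np.log1p(z))
    log_S = np.log1p(-F_ll ** a)                   # ELL survival function
    return -(log_f + R * log_S).sum()

x = np.array([0.4, 0.7, 1.1, 1.6, 2.3])            # ordered failure times (toy)
R = np.array([1, 0, 2, 0, 3])                      # progressive removals at each failure
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(x, R), method="Nelder-Mead")
print(np.exp(fit.x))                               # estimates of a, b, s
```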

Maximum likelihood estimation of stochastic volatility models with leverage effect and fat-tailed distribution using hidden Markov model approximation

  • Kim, TaeHyung;Park, JeongMin
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.501-515
    • /
    • 2022
  • Despite stylized statistical features of financial returns such as fat-tailed distributions and the leverage effect, no stochastic volatility models that can explicitly capture these features have been presented in the existing frequentist literature. We propose a parameterization of stochastic volatility models that explicitly captures the fat-tailed distribution and leverage effect of financial returns, together with approximate maximum likelihood estimation of the model using the hidden Markov model approximation of Langrock et al. (2012) in a frequentist approach. Through extensive simulation experiments and an empirical analysis, we present statistical evidence validating the efficacy and accuracy of the proposed parameterization.
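
A minimal sketch of the HMM approximation behind the estimator: the latent AR(1) log-volatility is discretized onto a grid, transition probabilities come from the Gaussian AR(1) law, and the forward algorithm yields the approximate likelihood. The basic SV model without leverage or fat tails is used here; the paper layers those features on the same machinery.

```python
# Sketch: approximate likelihood of a basic stochastic volatility model
# via HMM discretization of the latent log-volatility (Langrock-style).
import numpy as np
from scipy.stats import norm

def sv_log_lik(params, y, m=100, span=4.0):
    phi, sigma = np.tanh(params[0]), np.exp(params[1])
    sd_h = sigma / np.sqrt(1.0 - phi ** 2)          # stationary sd of log-volatility
    edges = np.linspace(-span * sd_h, span * sd_h, m + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])           # grid of volatility states
    # Transition matrix: P(h' in cell j | h = mids[i]) under the AR(1) law.
    P = (norm.cdf(edges[None, 1:], phi * mids[:, None], sigma)
         - norm.cdf(edges[None, :-1], phi * mids[:, None], sigma))
    alpha = norm.pdf(mids, 0.0, sd_h)
    alpha /= alpha.sum()                            # stationary initial weights
    ll = 0.0
    for obs in y:                                   # scaled forward algorithm
        alpha = (alpha @ P) * norm.pdf(obs, 0.0, np.exp(mids / 2.0))
        c = alpha.sum()
        ll += np.log(c)
        alpha /= c
    return ll

rng = np.random.default_rng(5)
h, y = 0.0, []
for _ in range(300):                                # simulate a basic SV path
    h = 0.95 * h + 0.2 * rng.normal()
    y.append(np.exp(h / 2.0) * rng.normal())
print(sv_log_lik(np.array([np.arctanh(0.95), np.log(0.2)]), np.array(y)))
```

Passing the negative of sv_log_lik to a generic optimizer then yields the approximate MLE, and the approximation tightens as the grid size m grows.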