• Title/Summary/Keyword: Bayesian Procedure

Search results: 172

Adaptive Noise Reduction Algorithm for an Image Based on a Bayesian Method

  • Kim, Yeong-Hwa;Nam, Ji-Ho
    • Communications for Statistical Applications and Methods / v.19 no.4 / pp.619-628 / 2012
  • Noise reduction is an important issue in the field of image processing because image noise lowers the quality of the original pure image. The basic difficulty is that the noise and the signal are not easily distinguished. Simple smoothing is the most basic and important procedure to effectively remove the noise; however, its weakness is that feature areas are simultaneously blurred. In this research, we measure the degree of noise relative to the degree of image features and propose a Bayesian noise reduction method based on the maximum a posteriori (MAP) criterion. Simulation results show that the proposed adaptive noise reduction algorithm using Bayesian MAP provides good performance regardless of the level of noise variance.
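Under a Gaussian noise model with a locally Gaussian prior, the MAP estimate reduces to variance-weighted shrinkage of each pixel toward its local mean, so flat areas are smoothed strongly while feature areas are preserved. A minimal sketch of this idea (an illustrative Lee-filter-style shrinkage; `map_denoise` and its parameters are assumptions, not the authors' exact algorithm):

```python
import numpy as np

def map_denoise(img, noise_var, win=5):
    """Per-pixel MAP shrinkage: Gaussian likelihood, local Gaussian prior.

    Flat regions (low local variance) are smoothed strongly; feature
    regions (high local variance) are left nearly untouched.
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            mu = patch.mean()
            # prior (signal) variance = local variance minus noise variance
            sig2 = max(patch.var() - noise_var, 0.0)
            w = sig2 / (sig2 + noise_var)          # shrinkage weight
            out[i, j] = mu + w * (img[i, j] - mu)  # MAP estimate
    return out
```

In a flat region the shrinkage weight goes to zero and the pixel collapses to the local mean; near an edge the local variance dominates and the pixel is kept, which is the adaptivity the abstract describes.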

Geostatistics for Bayesian interpretation of geophysical data

  • Oh Seokhoon;Lee Duk Kee;Yang Junmo;Youn Yong-Hoon
    • Korean Society of Earth and Exploration Geophysicists: Conference Proceedings / 2003.11a / pp.340-343 / 2003
  • This study presents a practical procedure for the Bayesian inversion of geophysical data by Markov chain Monte Carlo (MCMC) sampling and geostatistics. We applied geostatistical techniques for the acquisition of prior model information, and then adopted the MCMC method to infer the characteristics of the marginal distributions of the model parameters. For the Bayesian inversion of dipole-dipole array resistivity data, we used indicator kriging and simulation techniques to generate cumulative density functions from Schlumberger array resistivity data and well-logging data, and obtained prior information by cokriging and simulations from covariogram models. The indicator approach makes it possible to incorporate non-parametric information into the probability density function. We also adopted the MCMC approach, based on Gibbs sampling, to examine the characteristics of the posterior probability density function and the marginal distribution of each parameter. This approach provides an effective way to treat the Bayesian inversion of geophysical data and to reduce the non-uniqueness by incorporating various types of prior information.
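The sampling step described here is generic; a minimal random-walk Metropolis sketch for one model parameter, assuming a Gaussian geostatistical prior and a Gaussian data misfit (the `forward` operator and all names are illustrative, not the paper's dipole-dipole code):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(m, d_obs, forward, prior_mu, prior_sd, noise_sd):
    # log prior + log likelihood (both Gaussian, up to constants)
    lp = -0.5 * ((m - prior_mu) / prior_sd) ** 2
    ll = -0.5 * np.sum(((d_obs - forward(m)) / noise_sd) ** 2)
    return lp + ll

def metropolis(d_obs, forward, prior_mu, prior_sd, noise_sd,
               n_iter=5000, step=0.1):
    """Random-walk Metropolis; the returned samples approximate the
    marginal posterior distribution of the model parameter m."""
    m = prior_mu
    lp = log_posterior(m, d_obs, forward, prior_mu, prior_sd, noise_sd)
    samples = []
    for _ in range(n_iter):
        m_new = m + step * rng.standard_normal()
        lp_new = log_posterior(m_new, d_obs, forward,
                               prior_mu, prior_sd, noise_sd)
        if np.log(rng.random()) < lp_new - lp:   # accept/reject
            m, lp = m_new, lp_new
        samples.append(m)
    return np.array(samples)
```

A histogram of the post-burn-in samples plays the role of the marginal distribution examined in the paper.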


The Bivariate Kumaraswamy Weibull regression model: a complete classical and Bayesian analysis

  • Fachini-Gomes, Juliana B.;Ortega, Edwin M.M.;Cordeiro, Gauss M.;Suzuki, Adriano K.
    • Communications for Statistical Applications and Methods / v.25 no.5 / pp.523-544 / 2018
  • Bivariate distributions play a fundamental role in survival and reliability studies. We consider a regression model for right-censored bivariate survival times based on the bivariate Kumaraswamy Weibull distribution (Cordeiro et al., Journal of the Franklin Institute, 347, 1399-1429, 2010) to model the dependence of bivariate survival data. We describe some structural properties of the marginal distributions. The method of maximum likelihood and a Bayesian procedure are adopted to estimate the model parameters. We use diagnostic measures based on local influence and Bayesian case influence diagnostics to detect influential observations in the new model. We also show that the estimates in the bivariate Kumaraswamy Weibull regression model are robust to the presence of outliers in the data. In addition, we use some measures of goodness-of-fit to evaluate the bivariate Kumaraswamy Weibull regression model. The methodology is illustrated by means of a real lifetime data set for kidney patients.
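The maximum-likelihood step for one censored Weibull margin can be sketched as follows (a univariate illustration only; the paper's model is bivariate, and the function names here are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def weibull_negloglik(params, t, delta):
    """Negative log-likelihood for right-censored Weibull data.
    t: observed times; delta: 1 = event, 0 = censored."""
    log_k, log_lam = params           # optimize on log scale for positivity
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam) ** k
    # events contribute log density; censored cases contribute log survival
    ll = np.sum(delta * (np.log(k / lam) + (k - 1) * np.log(t / lam)) - z)
    return -ll

def fit_weibull(t, delta):
    """MLE of the Weibull (shape k, scale lambda) under right censoring."""
    res = minimize(weibull_negloglik, x0=[0.0, 0.0], args=(t, delta),
                   method="Nelder-Mead")
    return np.exp(res.x)
```

The Bayesian alternative mentioned in the abstract would put priors on (k, lambda) and sample the posterior instead of maximizing this likelihood.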

Safety Analysis Using a Bayesian Approach (베이지안 기법을 이용한 안전사고 예측기법)

  • Yang, Hee-Joong
    • Journal of the Korea Safety Management & Science / v.9 no.5 / pp.1-5 / 2007
  • We construct a procedure to predict safety accidents following a Bayesian approach. We build a model that can utilize the data to predict other levels of accidents. An event tree model, a graphical tool frequently used to describe accident initiation and escalation to more severe accidents, is transformed into an influence diagram model. Prior distributions for the accident occurrence rate and for the probabilities of escalating to more severe accidents are assumed, and the likelihood of the number of accidents in a given period of time is assessed. Posterior distributions are then obtained based on observed data. We also point out the advantage of the Bayesian approach over classical point estimation: it estimates the whole distribution of the accident rate.
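If the accident occurrence rate is given a conjugate gamma prior and accident counts are Poisson, the prior-to-posterior update has a closed form; a minimal sketch (assumed conjugate forms, not necessarily the paper's exact prior):

```python
def update_rate(alpha, beta, n_events, exposure):
    """Gamma(alpha, beta) prior on a Poisson rate lambda:
    posterior is Gamma(alpha + n_events, beta + exposure),
    so the whole posterior distribution is available, not just a point."""
    return alpha + n_events, beta + exposure

# illustrative numbers: prior mean 2/4 = 0.5 accidents/year,
# then 3 accidents observed over 4 years of exposure
a, b = update_rate(2.0, 4.0, n_events=3, exposure=4.0)
post_mean = a / b  # 5/8 = 0.625 accidents/year
```

The posterior gamma distribution here is exactly the "whole distribution of the accident rate" that the abstract contrasts with a classical point estimate.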

A Bayesian joint model for continuous and zero-inflated count data in developmental toxicity studies

  • Hwang, Beom Seuk
    • Communications for Statistical Applications and Methods / v.29 no.2 / pp.239-250 / 2022
  • In many applications, we frequently encounter correlated multiple outcomes measured on the same subject. Joint modeling of such multiple outcomes can improve the efficiency of inference compared to independent modeling. For instance, in developmental toxicity studies, fetal weight and the number of malformed pups are measured on pregnant dams exposed to different levels of a toxic substance, and the association between such outcomes should be taken into account in the model. The number of malformations may have many zeros, which should be analyzed via zero-inflated count models. Motivated by applications in developmental toxicity studies, we propose a Bayesian joint modeling framework for continuous and count outcomes with excess zeros. In our model, a zero-inflated Poisson (ZIP) regression model describes the count data, and subject-specific random effects account for the correlation across the two outcomes. We implement a Bayesian approach using an MCMC procedure with data augmentation and adaptive rejection sampling. We apply the proposed model to dose-response analysis in a developmental toxicity study to estimate the benchmark dose in a risk assessment.
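The ZIP component mixes a point mass at zero with a Poisson count; a minimal sketch of its probability mass function (illustrative only, without the regression and random-effect structure of the paper's joint model):

```python
import math

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson pmf:
    P(Y = 0)     = pi + (1 - pi) * exp(-lam)
    P(Y = y > 0) = (1 - pi) * Poisson(y; lam)
    where pi is the probability of a structural zero."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return pi + (1 - pi) * pois if y == 0 else (1 - pi) * pois
```

The mixture mean is (1 - pi) * lam, so excess zeros pull the mean below the Poisson rate, which is why an ordinary Poisson model underfits such data.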

Development of the 'Three-stage' Bayesian procedure and a reliability data processing code (3단계 베이지안 처리절차 및 신뢰도 자료 처리 코드 개발)

  • 임태진
    • Korean Management Science Review / v.11 no.2 / pp.1-27 / 1994
  • A reliability data processing code, MPRDP (Multi-Purpose Reliability Data Processor), has been under development in FORTRAN since January 1992 at KAERI (Korea Atomic Energy Research Institute). The purpose of the research is to construct a reliability database (plant-specific as well as generic) by processing various kinds of reliability data in the most objective and systematic fashion. To account for generic estimates in various compendia as well as generic plants' operating experience, we developed a 'three-stage' Bayesian procedure [1] by logically combining the 'two-stage' procedure [2] and the idea of processing generic estimates [3]. The first stage manipulates generic plant data to determine a set of estimates for generic parameters, e.g., the mean and the error factor, which accordingly defines a generic failure rate distribution. The second stage then combines these estimates with the other ones proposed by various generic compendia (we call these generic book-type data). This stage adopts another Bayesian procedure to determine the final generic failure rate distribution, which is used as the prior distribution in the third stage. The third stage then updates the generic distribution with plant-specific data, resulting in a posterior failure rate distribution. Both running-failure and demand-failure data can be handled by this code. In accordance with the growing need for a consistent and well-structured reliability database, we constructed a generic reliability database with the MPRDP code [4]. About 30 generic data sources were reviewed, and available data were collected and screened from them. We processed reliability data for about 100 safety-related components frequently modeled in PSA. The underlying distribution for the failure rate was assumed to be lognormal or gamma, according to the PSA convention. The dependencies among the generic sources were not considered at this time; this problem will be approached in further study.
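The third-stage update (a generic lognormal prior on a failure rate, updated with plant-specific running-failure counts) can be sketched by discretizing the prior and normalizing numerically; grid choices and names below are assumptions, not the MPRDP implementation:

```python
import numpy as np

def bayes_update_lognormal(mu, sigma, n_fail, hours, grid_size=4000):
    """Discretize a lognormal prior on the failure rate lambda and
    update it with Poisson evidence: n_fail failures in `hours` hours."""
    lam = np.exp(np.linspace(mu - 6 * sigma, mu + 6 * sigma, grid_size))
    # lognormal prior density in lambda (up to a constant)
    prior = np.exp(-0.5 * ((np.log(lam) - mu) / sigma) ** 2) / lam
    like = lam ** n_fail * np.exp(-lam * hours)   # Poisson likelihood
    post = prior * like
    w = np.gradient(lam)               # quadrature weights on the grid
    post /= np.sum(post * w)           # normalize to a proper density
    return lam, post

# illustrative: generic prior median 1e-5/h (error factor ~ e), then
# 2 plant-specific failures observed in 4e5 operating hours
lam, post = bayes_update_lognormal(mu=np.log(1e-5), sigma=1.0,
                                   n_fail=2, hours=4.0e5)
post_mean = np.sum(lam * post * np.gradient(lam))
```

The posterior mean lands between the generic prior median (1e-5/h) and the plant-specific point estimate (2/4e5 = 5e-6/h), which is the pooling behavior the staged procedure is designed to produce.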


The Analysis of Roll Call Data from the 18th Korean National Assembly: A Bayesian Approach (제 18대 국회 기명투표 분석: 베이즈(Bayesian) 방법론 적용)

  • Hahn, Kyu S.;Kim, Yuneung;Lim, Jongho;Lim, Johan;Kwon, Suhyun;Lee, Kyeong Eun
    • The Korean Journal of Applied Statistics / v.27 no.4 / pp.523-541 / 2014
  • We apply a Bayesian estimation procedure to the analysis of roll call voting records on 2,389 bills processed during the 18th Korean National Assembly. The analysis of roll calls yields useful tools for combining the measurement of legislative preference with models of legislative behavior. The current Bayesian procedure is extremely flexible and applicable to any legislative setting, irrespective of the extremism of a legislator's voting history or the number of roll calls available for analysis, providing a useful solution to many statistical problems inherent in the analysis of roll call voting records. We first estimate the ideal points of all members of the 18th National Assembly and their confidence intervals. Subsequently, using the estimated ideal points, we examine the factional disparity within each major party. Our results clearly suggest that there exists a meaningful ideological spectrum within each party. We also show how the Bayesian procedure can easily be extended to accommodate theoretically interesting models of legislative behavior. More specifically, we demonstrate how the estimated posterior probabilities can be used to identify pivotal legislators.
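Ideal-point models of this kind typically posit P(yea on bill j) = Φ(β_j·x_i − α_j) for legislator i; a minimal Metropolis sketch for one legislator's ideal point given fixed bill parameters (the names and the standard normal prior are assumptions, not the authors' sampler):

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(x, votes, alphas, betas):
    """Probit ideal-point likelihood; votes: 1 = yea, 0 = nay."""
    ll = 0.0
    for v, a, b in zip(votes, alphas, betas):
        p = min(max(phi(b * x - a), 1e-12), 1 - 1e-12)
        ll += math.log(p) if v else math.log(1 - p)
    return ll

def sample_ideal_point(votes, alphas, betas, n_iter=3000, step=0.3, seed=1):
    """Metropolis samples from the posterior of one ideal point x,
    with an N(0, 1) prior on x."""
    random.seed(seed)
    x, samples = 0.0, []
    lp = log_lik(x, votes, alphas, betas) - 0.5 * x * x
    for _ in range(n_iter):
        xn = x + random.gauss(0.0, step)
        lpn = log_lik(xn, votes, alphas, betas) - 0.5 * xn * xn
        if math.log(random.random()) < lpn - lp:
            x, lp = xn, lpn
        samples.append(x)
    return samples
```

Posterior quantiles of these samples give the ideal-point estimates and credible intervals; comparing samples across legislators gives the posterior probabilities used to identify pivotal members.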

Sensitivity Analysis of Reservoir System Operation to Hydrologic Forecast Accuracy (수문학적 예측의 정확도에 따른 저수지 시스템 운영의 민감도 분석)

  • Kim, Yeong-O
    • Journal of Korea Water Resources Association / v.31 no.6 / pp.855-862 / 1998
  • This paper investigates the impact of forecast error on the performance of a reservoir system for hydropower production. Forecast error is measured as the Root Mean Square Error (RMSE) and parametrically varied within a Generalized Maintenance Of Variance Extension (GMOVE) procedure. A set of transition probabilities is calculated as a function of the RMSE of the GMOVE procedure and then incorporated into a Bayesian Stochastic Dynamic Programming model, which derives monthly operating policies and assesses their performance. As a case study, the proposed methodology is applied to the Skagit Hydropower System (SHS) in Washington state. The results show that the system performance is a nonlinear function of the RMSE and therefore suggest that continued improvements in the current forecast accuracy correspond to gradually greater increases in the performance of the SHS.
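The link between forecast RMSE and transition probabilities can be illustrated by Monte Carlo: perturb each observed flow with N(0, RMSE) forecast error and tabulate class-to-class transitions (an illustrative sketch, not the GMOVE procedure itself):

```python
import numpy as np

def transition_matrix(flows, bins, rmse, n_sim=10000, seed=0):
    """P[i, j]: probability the forecast falls in flow class j when the
    actual flow falls in class i, under N(0, rmse) forecast error."""
    rng = np.random.default_rng(seed)
    k = len(bins) + 1
    counts = np.zeros((k, k))
    for flow in flows:
        i = np.digitize(flow, bins)                 # actual flow class
        forecasts = flow + rmse * rng.standard_normal(n_sim)
        js, n = np.unique(np.digitize(forecasts, bins), return_counts=True)
        counts[i, js] += n                          # forecast classes
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1.0, row_sums)
```

As the RMSE grows, mass spreads off the diagonal, so the matrix degrades smoothly from the perfect-forecast identity toward the climatological distribution; feeding such matrices into a stochastic dynamic program is what makes performance a function of forecast accuracy.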


Identifying differentially expressed genes using the Polya urn scheme

  • Saraiva, Erlandson Ferreira;Suzuki, Adriano Kamimura;Milan, Luis Aparecido
    • Communications for Statistical Applications and Methods / v.24 no.6 / pp.627-640 / 2017
  • A common interest in gene expression data analysis is to identify genes that present significant changes in expression levels among biological experimental conditions. In this paper, we develop a Bayesian approach to make a gene-by-gene comparison in the case with a control and more than one treatment experimental condition. The proposed approach is within a Bayesian framework with a Dirichlet process prior. The comparison procedure is based on a model selection procedure developed using the discreteness of the Dirichlet process and its representation via the Polya urn scheme. The posterior probabilities for the models considered are calculated using a Gibbs sampling algorithm. A numerical simulation study is conducted to understand and compare the performance of the proposed method in relation to usual methods based on analysis of variance (ANOVA) followed by a Tukey test. The comparison among methods is made in terms of the true positive rate and the false discovery rate. We find that the proposed method outperforms the methods based on ANOVA followed by a Tukey test. We also apply the methodologies to a publicly available data set on Plasmodium falciparum protein.
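The Polya urn representation draws each new value either fresh from the base measure (with probability proportional to the concentration parameter alpha) or as an exact copy of an earlier draw; the resulting ties are the discreteness that drives the model-selection step. A minimal sketch:

```python
import random

def polya_urn(n, alpha, base_draw, seed=42):
    """Sample n values from a Dirichlet process DP(alpha, G0) via the
    Polya urn scheme; base_draw() samples from the base measure G0."""
    random.seed(seed)
    draws = []
    for i in range(n):
        # i-th draw: new value w.p. alpha/(alpha+i), else copy an old one
        if random.random() < alpha / (alpha + i):
            draws.append(base_draw())
        else:
            draws.append(random.choice(draws))
    return draws
```

Because repeated values are copied exactly, the draws partition into a small number of clusters (on the order of alpha·log n), and "two conditions share a cluster" is the event whose posterior probability the gene-by-gene comparison evaluates.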

Numerical Bayesian updating of prior distributions for concrete strength properties considering conformity control

  • Caspeele, Robby;Taerwe, Luc
    • Advances in concrete construction / v.1 no.1 / pp.85-102 / 2013
  • Prior concrete strength distributions can be updated by using direct information from test results as well as by taking into account indirect information due to conformity control. Due to the filtering effect of conformity control, the distribution of the material property in the accepted inspected lots will have lower fraction defectives in comparison to the distribution of the entire production (before or without inspection). A methodology is presented to quantify this influence in a Bayesian framework based on prior knowledge with respect to the hyperparameters of concrete strength distributions. An algorithm is presented in order to update prior distributions through numerical integration, taking into account the operating characteristic of the applied conformity criteria, calculated based on Monte Carlo simulations. Different examples are given to derive suitable hyperparameters for incoming strength distributions of concrete offered for conformity assessment, using updated available prior information, maximum-likelihood estimators or a bootstrap procedure. Furthermore, the updating procedure based on direct as well as indirect information obtained by conformity assessment is illustrated and used to quantify the filtering effect of conformity criteria on concrete strength distributions in case of a specific set of conformity criteria.