• Title/Summary/Keyword: bayesian test


Mixed-Initiative Interaction between Human and Service Robot using Hierarchical Bayesian Networks (계층적 베이지안 네트워크를 사용한 서비스 로봇과 인간의 상호 주도방식 의사소통)

  • Song Youn-Suk;Hong Jin-Hyuk;Cho Sung-Bae
    • Journal of KIISE:Software and Applications / v.33 no.3 / pp.344-355 / 2006
  • In daily activities, interaction between humans and robots is important for supporting the user's tasks effectively, and dialogue can increase the flexibility and ease of that interaction. Traditional robot studies have dealt only with simple queries such as commands, but real conversation is more complex and varied: people use many forms of expression and often omit words, relying on background knowledge or the context of the discourse. Because the same query can therefore carry several meanings, this vagueness must be managed. In this paper we propose a method that uses hierarchical Bayesian networks to implement mixed-initiative interaction for managing the vagueness of conversation with a service robot. We verified the usefulness of the proposed method through a simulation of the service robot and a usability test.
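
The kind of disambiguation this abstract describes can be sketched with a single Bayes-rule step: score candidate user intents given the observed (possibly elliptical) words, with context setting the prior. This is a toy sketch, not the paper's hierarchical network; every table and intent name below is invented for illustration:

```python
def posterior_intents(words, priors, likelihoods):
    """P(intent | words) proportional to P(intent) * product_w P(w | intent)."""
    scores = {}
    for intent, prior in priors.items():
        p = prior
        for w in words:
            p *= likelihoods[intent].get(w, 1e-3)  # smoothing for unseen words
        scores[intent] = p
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()}

# Context (e.g. the user is near the water dispenser) sets the prior over intents
priors = {"bring_water": 0.6, "turn_on_tv": 0.4}
likelihoods = {
    "bring_water": {"give": 0.3, "me": 0.2, "that": 0.1},
    "turn_on_tv":  {"give": 0.01, "me": 0.2, "that": 0.1},
}
post = posterior_intents(["give", "me", "that"], priors, likelihoods)
print(max(post, key=post.get))  # the elliptical "give me that" resolves via context
```

A hierarchical network refines this by decomposing each intent into sub-variables, but the context-as-prior idea is the same.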

Estimation of Bigeye tuna Production Function of Distant Longline Fisheries in WCPFC waters (WCPFC 수역 원양연승어업의 눈다랑어 생산함수 추정)

  • Jo, Heon-Ju;Kim, Do-Hoon;Kim, Doo-Nam;Lee, Sung-Il;Lee, Mi-Kyung
    • Environmental and Resource Economics Review / v.28 no.3 / pp.415-435 / 2019
  • The purpose of this study is to analyze returns to scale by estimating the bigeye tuna production function of Korean distant longline fisheries in WCPFC waters. In the analysis, the number of crew, vessel tonnage, number of hooks, and bigeye tuna biomass are used as input variables, and the catch amount of bigeye tuna is the output variable in a Cobb-Douglas production function. Prior to estimating the function, the biomass of bigeye tuna was estimated with a Bayesian state-space model. Results showed that the fixed-effect model was selected based on the Hausman test, and that vessel tonnage, hooks, and biomass have direct effects on the catch amount. In addition, the bigeye tuna distant longline fisheries in WCPFC waters were shown to exhibit increasing returns to scale.
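
The Cobb-Douglas specification becomes linear in logs, so returns to scale can be read off as the sum of the estimated input elasticities. A minimal sketch with NumPy least squares on synthetic data (variable names and values are illustrative, not the paper's; the paper additionally uses panel fixed effects):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic logged inputs: crew, tonnage, hooks, biomass
log_x = rng.normal(size=(n, 4))
true_beta = np.array([0.2, 0.3, 0.4, 0.3])              # input elasticities
log_q = 1.0 + log_x @ true_beta + rng.normal(scale=0.05, size=n)

# ln Q = b0 + sum_i b_i * ln X_i  ->  ordinary least squares in logs
X = np.column_stack([np.ones(n), log_x])
beta_hat, *_ = np.linalg.lstsq(X, log_q, rcond=None)

# Returns to scale = sum of input elasticities; a sum > 1 means increasing returns
scale_elasticity = beta_hat[1:].sum()
print(round(float(scale_elasticity), 2))
```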

Degradation Quantification Method and Degradation and Creep Life Prediction Method for Nickel-Based Superalloys Based on Bayesian Inference (베이지안 추론 기반 니켈기 초합금의 열화도 정량화 방법과 열화도 및 크리프 수명 예측의 방법)

  • Junsang, Yu;Hayoung, Oh
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.15-26 / 2023
  • The purpose of this study is to determine an artificial-intelligence-based degradation index from scanning electron microscope images of the microstructure cross-sections of specimens obtained from creep tests of DA-5161 SX, a nickel-based superalloy used in high-temperature parts. We propose a new quantification method, a model that predicts degradation based on Bayesian inference without destroying the high-temperature components of operating equipment, and a creep life prediction model that predicts the Larson-Miller Parameter (LMP). The new degradation indexing method infers a consistent representative value from a small number of images based on the geometrical characteristics of the gamma-prime phase, the characteristic microstructure of nickel-based superalloys; the degradation index and LMP are then predicted from information on the material's environmental conditions, again without destroying the high-temperature parts.

A Bayes Reliability Estimation from Life Test in a Stress-Strength Model

  • Park, Sung-Sub;Kim, Jae-Joo
    • Journal of the Korean Statistical Society / v.12 no.1 / pp.1-9 / 1983
  • A stress-strength model is formulated for an s-out-of-k system of identical components. We consider the estimation of system reliability from survival count data from a Bayesian viewpoint, assuming a quadratic loss and a Dirichlet prior distribution. It is shown that a Bayes sequential procedure can be established. The Bayes estimator is compared with the UMVUE obtained by Bhattacharyya and with an estimator based on the Mann-Whitney statistic.
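
For the single-component case the Dirichlet prior reduces to a Beta, and quadratic (squared-error) loss makes the posterior mean the Bayes estimator. A minimal sketch (the prior parameters and counts are illustrative assumptions, not values from the paper):

```python
def bayes_reliability(survivors, trials, a=1.0, b=1.0):
    """Posterior mean of survival probability under a Beta(a, b) prior.

    With survivors ~ Binomial(trials, p), the posterior is
    Beta(a + survivors, b + trials - survivors); under squared-error loss
    the Bayes estimate is its mean: (a + survivors) / (a + b + trials).
    """
    return (a + survivors) / (a + b + trials)

# 18 of 20 units survive the life test; uniform Beta(1, 1) prior
est = bayes_reliability(18, 20)
print(est)  # 19/22 ≈ 0.8636
```

Note how the prior pulls the estimate slightly below the MLE of 18/20 = 0.9, which matters most when the trial count is small.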


Estimation of Gini-Simpson index for SNP data

  • Kang, Joonsung
    • Journal of the Korean Data and Information Science Society / v.28 no.6 / pp.1557-1564 / 2017
  • We consider genomic sequences of high-dimensional low sample size (HDLSS) data without an ordering of response categories. When constructing an appropriate test statistic for this setting, the classical multivariate analysis of variance (MANOVA) approach may not be useful owing to the very large number of parameters and very small sample size. For these reasons, we present a pseudo-marginal model based on the Gini-Simpson index estimated via a Bayesian approach. In view of the small sample size, we consider the permutation distribution over every possible (equally likely) permutation of the joined sample observations across G groups of sizes $n_1, \ldots, n_G$. We simulate data and apply the false discovery rate (FDR) and positive false discovery rate (pFDR) procedures with the associated proposed test statistics; we also analyze real SARS data and compute FDR and pFDR. The FDR and pFDR procedures with the associated test statistics for each gene control the FDR and pFDR, respectively, at any level $\alpha$ for the set of p-values, by the exact conditional permutation theory.
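
The Gini-Simpson index itself is simple: one minus the sum of squared category proportions. A minimal sketch of the plug-in estimate from counts (the paper's Bayesian estimator and permutation test are not reproduced here; the genotype data are invented):

```python
from collections import Counter

def gini_simpson(categories):
    """Plug-in Gini-Simpson index: 1 - sum_i p_i^2.

    Equals the probability that two draws (with replacement) fall in
    different categories; 0 means one category, values near 1 mean
    many equally frequent categories.
    """
    counts = Counter(categories)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# SNP genotypes observed at one locus
genotypes = ["AA", "AA", "Aa", "Aa", "Aa", "aa"]
print(gini_simpson(genotypes))  # 1 - (2/6)^2 - (3/6)^2 - (1/6)^2 = 11/18
```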

Bayesian Image Denoising with Mixed Prior Using Hypothesis-Testing Problem (가설-검증 문제를 이용한 혼합 프라이어를 가지는 베이지안 영상 잡음 제거)

  • Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.34-42 / 2006
  • In general, most of the information is concentrated in only a few wavelet coefficients. This sparse characteristic of wavelet coefficients can be modeled by a mixture of a Gaussian probability density function and a point mass at zero, and denoising under this prior model is performed using Bayesian estimation. In this paper, we propose a method of parameter estimation for denoising that uses a hypothesis-testing problem. The hypothesis test is applied to the variance of the wavelet coefficients, using a $\chi^2$-test. Simulation results show that our method achieves about 0.3 dB higher PSNR (peak signal-to-noise ratio) than state-of-the-art denoising methods when orthogonal wavelets are used.
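
The variance test can be sketched as follows: under the null hypothesis a window of coefficients is pure noise with known variance $\sigma^2$, so the scaled sum of squares follows a $\chi^2$ distribution. This is a sketch under the assumptions of known noise variance and an even window size (which admits a closed-form tail probability); the paper's exact test construction may differ:

```python
import math

def chi2_sf_even(x, k):
    """Survival function P(X > x) for chi-square with EVEN df k.

    For k = 2m the chi-square is an Erlang(m, 2), giving the closed form
    exp(-x/2) * sum_{j=0}^{m-1} (x/2)^j / j!.
    """
    term, total = 1.0, 0.0
    for j in range(k // 2):
        if j > 0:
            term *= (x / 2) / j
        total += term
    return math.exp(-x / 2) * total

def coeffs_are_significant(coeffs, noise_var, alpha=0.05):
    """Reject H0 (window is pure noise) when sum(c^2)/sigma^2 is improbably large."""
    stat = sum(c * c for c in coeffs) / noise_var
    return chi2_sf_even(stat, len(coeffs)) < alpha

window = [0.1, -0.2, 0.15, -0.05, 3.0, -2.5]   # two large (signal) coefficients
print(coeffs_are_significant(window, noise_var=0.04))
```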

Updated confidence intervals for the COVID-19 antibody retention rate in the Korean population

  • Kamruzzaman, Md.;Apio, Catherine;Park, Taesung
    • Genomics & Informatics / v.18 no.4 / pp.45.1-45.5 / 2020
  • With the ongoing spread of the coronavirus disease 2019 (COVID-19) pandemic across the globe, interest in COVID-19 antibody testing, also known as serology testing, has grown as a way to measure how far the infection has spread in the population and to identify individuals who may be immune. Recently, many countries have reported population-based antibody titer study results. South Korea recently reported its third antibody formation rate, dividing the study between the general population and young males in their early twenties. As previously stated, these simple point estimates may be misinterpreted without proper estimation of standard errors and confidence intervals. In this article, we provide updated 95% confidence intervals for the COVID-19 antibody formation rate in the Korean population using asymptotic, exact, and Bayesian estimation methods. As before, we found that the Wald method gives the narrowest interval among the asymptotic methods, the mid-p-value method the narrowest among the exact methods, and Jeffreys' method the narrowest among the Bayesian methods. The most conservative 95% confidence interval estimate shows that, as of 00:00 November 23, 2020, at least 69,524 people were infected but not confirmed. It also shows that more positive cases were found among young males in their twenties (0.22%), three times the rate of the general public (0.051%). This calls for the quarantine authorities to strengthen quarantine management of the early-twenties group in order to find the hidden infected people in the population.
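
The Wald interval mentioned above can be sketched for a binomial proportion; the counts below are illustrative, not the survey's actual numbers:

```python
import math

def wald_interval(successes, n, z=1.959964):
    """Wald 95% CI for a binomial proportion: p_hat +/- z*sqrt(p_hat(1-p_hat)/n)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Illustrative: 3 seropositive results out of 1,379 samples
lo, hi = wald_interval(3, 1379)
print(f"{lo:.5f} .. {hi:.5f}")
```

Jeffreys' interval instead takes the 2.5% and 97.5% quantiles of a Beta(x + 1/2, n - x + 1/2) posterior (e.g. via `scipy.stats.beta.ppf`). Note how the Wald lower bound clips at zero for rare events, one reason exact and Bayesian intervals are preferred for proportions this small.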

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems / v.29 no.1 / pp.221-235 / 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgement of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework first detects the existence of cracks using a deep convolutional neural network (CNN) and then segments the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to effectively reduce false positives and false negatives. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch, using only the crack patches, for segmentation. A customized loss function consisting of the binary cross-entropy loss and the Dice loss is introduced to enhance segmentation performance. Additionally, the Naïve Bayes fusion strategy integrates the crack score maps from the different overlapping patches to decide whether each pixel is a crack or not. Comprehensive experiments demonstrate that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline implemented with the original U-Net.
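
The fusion step can be sketched as Naive Bayes combination of independent per-patch evidence for one pixel: posterior odds are the product of the patches' likelihood ratios. The paper's exact scheme may differ, and the uniform 0.5 prior here is an assumption:

```python
def fuse_scores(scores, prior=0.5):
    """Naive-Bayes fusion of per-patch crack probabilities for one pixel.

    Each score is P(crack | patch). Treating patches as conditionally
    independent, posterior odds = prior odds * product of s/(1-s).
    """
    odds = prior / (1 - prior)
    for s in scores:
        odds *= s / (1 - s)
    return odds / (1 + odds)

# Three overlapping patches covering the same pixel
print(fuse_scores([0.9, 0.8, 0.6]))   # agreement pushes the posterior up
print(fuse_scores([0.9, 0.2, 0.3]))   # disagreement pulls it back toward 0.5
```

Thresholding the fused posterior at 0.5 then yields the binary crack mask.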

Comparison of nomograms designed to predict hypertension with a complex sample (고혈압 예측을 위한 노모그램 구축 및 비교)

  • Kim, Min Ho;Shin, Min Seok;Lee, Jea Young
    • The Korean Journal of Applied Statistics / v.33 no.5 / pp.555-567 / 2020
  • Hypertension has a steadily increasing incidence rate and is a risk factor for secondary diseases such as cardiovascular disease, so predicting its incidence is important. In this study, we constructed nomograms that can predict the incidence rate of hypertension, using data from the Korea National Health and Nutrition Examination Survey (KNHANES) for 2013-2016. The complex sampling design required a Rao-Scott chi-squared test, which identified 10 risk factors for hypertension. The smoking and exercise variables were not statistically significant in the logistic regression, so eight effects were selected as risk factors. Logistic and Bayesian nomograms constructed from the selected risk factors are proposed and compared. The constructed nomograms were then verified using a receiver operating characteristic curve and a calibration plot.

Recurrent Neural Network Modeling of Etch Tool Data: a Preliminary for Fault Inference via Bayesian Networks

  • Nawaz, Javeria;Arshad, Muhammad Zeeshan;Park, Jin-Su;Shin, Sung-Won;Hong, Sang-Jeen
    • Proceedings of the Korean Vacuum Society Conference / 2012.02a / pp.239-240 / 2012
  • With advancements in semiconductor device technologies, manufacturing processes are becoming more complex, and it has become harder to maintain tight process control. As the number of processing steps for fabricating complex chip structures has grown, potential fault-inducing factors are prevalent and their allowable margins are continuously reduced. One key to success in semiconductor manufacturing is therefore highly accurate and fast fault detection and classification at each stage, to reduce any undesired variation and to identify the cause of a fault. Sensors in the equipment monitor the state of the process: whenever there is a fault, it appears as some variation in the output of one of the sensors, which report quantities such as pressure, RF power, and gas flow. By relating the data from these sensors to the process condition, an abnormality can be identified, though only with some degree of certainty. Our hypothesis in this research is that the features of equipment condition data can be captured from a library of healthy processes and used as a reference for upcoming processes, which is made possible by mathematically modeling the acquired data. In this work we use a recurrent neural network (RNN), a dynamic neural network whose output is a function of previous inputs. Our etch equipment tool-set data consist of 22 parameters over 9 runs. The data were first synchronized using the Dynamic Time Warping (DTW) algorithm; the synchronized sensor time series were then provided to the RNN, which trains on the input and predicts a value one step ahead in time as a function of the past values.
Eight runs of process data were used to train the network, and the remaining run was used as a test input to check its performance. A mean-squared-error-based probability-generating function was then used to assign a probability of fault to each parameter by comparing the predicted and actual values. In future work we will use Bayesian networks to classify the detected faults. Bayesian networks use directed acyclic graphs that relate parameters through their conditional dependencies in order to draw inferences among them. The relationships between parameters in the data will be used to generate the structure of the Bayesian network, and the posterior probabilities of different faults will then be calculated using inference algorithms.
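
The per-parameter fault scoring described above can be sketched as follows: compare predicted and actual traces, compute a mean squared error per parameter, and normalize the errors into a probability-like score. The normalization is an assumption for illustration; the abstract does not give the exact form of its probability-generating function, and the parameter names are invented:

```python
def fault_probabilities(predicted, actual):
    """Assign each parameter a fault score proportional to its prediction MSE.

    predicted / actual: dict mapping parameter name -> time-series values.
    Returns scores that sum to 1; the largest points at the most likely
    faulty parameter.
    """
    mse = {}
    for name in predicted:
        errs = [(p - a) ** 2 for p, a in zip(predicted[name], actual[name])]
        mse[name] = sum(errs) / len(errs)
    total = sum(mse.values())
    return {name: m / total for name, m in mse.items()}

# RNN one-step-ahead predictions vs. measured sensor values (toy traces)
predicted = {"pressure": [1.0, 1.1, 1.0], "rf_power": [5.0, 5.0, 5.1]}
actual    = {"pressure": [1.0, 1.1, 1.0], "rf_power": [5.0, 6.0, 5.1]}
probs = fault_probabilities(predicted, actual)
print(max(probs, key=probs.get))  # rf_power deviates most from its prediction
```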
