• Title/Summary/Keyword: Bayes B


The influence of a first-order antedependence model and hyperparameters in BayesCπ for genomic prediction

  • Li, Xiujin;Liu, Xiaohong;Chen, Yaosheng
    • Asian-Australasian Journal of Animal Sciences / v.31 no.12 / pp.1863-1870 / 2018
  • Objective: The Bayesian first-order antedependence models, which specify single nucleotide polymorphism (SNP) effects as spatially correlated in the conventional BayesA/B, yield more accurate genomic prediction than their classical counterparts. Given the advantages of BayesCπ over BayesA/B, we developed hyper-BayesCπ, ante-BayesCπ, and ante-hyper-BayesCπ to evaluate the influence of the antedependence model and of the hyperparameters v_g and s_g² on BayesCπ. Methods: Three public datasets (two simulated datasets and one mouse dataset) were used to validate the proposed methods, whose genomic prediction performance was compared with that of traditional BayesCπ, ante-BayesA, and ante-BayesB. Results: In both the simulation and real-data analyses, hyper-BayesCπ, ante-BayesCπ, and ante-hyper-BayesCπ were comparable with BayesCπ, ante-BayesB, and ante-BayesA in prediction accuracy and bias, except that ante-BayesB performed significantly worse when few SNPs were used and π = 0.95. Conclusion: Hyper-BayesCπ is recommended because, unlike BayesCπ, it avoids a pre-estimated total genetic variance of the trait and, compared with ante-BayesB, shortens computing time. Although the antedependence model showed no advantage in BayesCπ in our study, larger real datasets with high-density chips may be used to revisit this in the future.
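The first-order antedependence structure the abstract refers to can be sketched as below: each SNP effect depends on its neighbour through an antedependence parameter, so adjacent effects are spatially correlated. The coefficients and innovations here are invented for illustration, not taken from the paper.

```python
# Sketch of a first-order antedependence structure over SNP effects:
# effect_1 = e_1, and effect_j = t_j * effect_{j-1} + e_j for j > 1,
# so neighbouring effects share information through t_j.

def antedependent_effects(innovations, t):
    """Generate spatially correlated effects from innovations e_j and parameters t_j."""
    effects = []
    prev = 0.0
    for j, e in enumerate(innovations):
        cur = e if j == 0 else t[j] * prev + e
        effects.append(cur)
        prev = cur
    return effects

# Five SNPs with hypothetical innovations and a strong antedependence parameter.
innov = [0.5, -0.2, 0.1, 0.3, -0.4]
t = [0.0, 0.8, 0.8, 0.8, 0.8]
print(antedependent_effects(innov, t))
```

Setting every t_j to zero recovers the independent-effects assumption of classical BayesA/B/Cπ.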

An Efficient Algorithm for NaiveBayes with Matrix Transposition (행렬 전치를 이용한 효율적인 NaiveBayes 알고리즘)

  • Lee, Jae-Moon
    • The KIPS Transactions:PartB / v.11B no.1 / pp.117-124 / 2004
  • This paper proposes an efficient NaiveBayes algorithm that loses none of its accuracy. The proposed method uses the transposition of category vectors to minimize the probability computations of NaiveBayes. It was implemented on an existing text-categorization framework, AI::Categorizer, and compared with conventional NaiveBayes on the well-known Reuters-21578 collection. The comparisons show that the proposed method runs about twice as fast as conventional NaiveBayes.
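The transposition idea can be sketched as follows: instead of looping "for each category, for each term", store a per-term vector of log-probabilities across categories, so a single pass over a document's terms updates all category scores at once. The toy vocabulary and probabilities below are invented for illustration.

```python
import math

# "Transposed" layout: term -> (log P(term|sports), log P(term|politics)),
# so one pass over the document scores every category simultaneously.
log_prior = {"sports": math.log(0.5), "politics": math.log(0.5)}
term_logprob = {
    "ball": (math.log(0.7), math.log(0.1)),
    "vote": (math.log(0.1), math.log(0.6)),
    "win":  (math.log(0.2), math.log(0.3)),
}
categories = ["sports", "politics"]

def classify(doc_terms):
    scores = [log_prior[c] for c in categories]
    for t in doc_terms:                    # single pass over the document
        if t in term_logprob:
            lp = term_logprob[t]
            for i in range(len(categories)):
                scores[i] += lp[i]
    return categories[scores.index(max(scores))]

print(classify(["ball", "win"]))   # -> "sports"
```

The classification result is identical to conventional NaiveBayes; only the memory layout and loop order change, which is why accuracy is preserved.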

Efficiency and Minimaxity of Bayes Sequential Procedures in Simple versus Simple Hypothesis Testing for General Nonregular Models

  • Hyun Sook Oh;Anirban DasGupta
    • Journal of the Korean Statistical Society / v.25 no.1 / pp.95-110 / 1996
  • We consider the efficiency of the Bayes sequential procedure relative to the optimal fixed-sample-size Bayes procedure in a simple vs. simple testing problem for data coming from a general nonregular density b(θ)h(x)I(x < θ). Efficiency is defined in two different ways in these calculations. The minimax sequential risk (and minimax sequential strategy) is also studied as a function of the cost of sampling.


Bayesian Algorithms for Evaluation and Prediction of Software Reliability (소프트웨어 신뢰도의 평가와 예측을 위한 베이지안 알고리즘)

  • Park, Man-Gon;Ray
    • The Transactions of the Korea Information Processing Society / v.1 no.1 / pp.14-22 / 1994
  • This paper proposes two Bayes estimators of software reliability at the end of the testing stage in Smith's Bayesian software reliability growth model, together with their evaluation algorithms, under the data prior distribution BE(a, b), which is more general than the uniform distribution, as a class of prior information. Both a squared-error loss function and the Harris loss function are considered in the Bayesian estimation procedures. The MSE performance of the Bayes estimators and their algorithms is compared via computer simulation. We conclude that the Bayes estimator of software reliability under the Harris loss function is more efficient than the other estimators in terms of MSE as a grows larger and b smaller, and that the Bayes estimators using the beta prior as a conjugate prior are better than those under the uniform prior as a noninformative prior when a > b.
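A generic illustration of the squared-error-loss case (not Smith's growth model itself): with a Beta(a, b) prior on a success probability and s successes in n trials, the Bayes estimator under squared-error loss is the posterior mean, and the uniform prior is the special case a = b = 1.

```python
# Bayes estimator of a success probability under squared-error loss:
# posterior mean of Beta(a + s, b + n - s), i.e. (a + s) / (a + b + n).

def bayes_estimate(a, b, s, n):
    return (a + s) / (a + b + n)

# Reliability estimate after 9 successful runs out of 10 test cases:
print(bayes_estimate(1, 1, 9, 10))   # uniform prior: 10/12
print(bayes_estimate(4, 2, 9, 10))   # informative Beta(4, 2) prior: 13/16
```

The comparison in the abstract is over which (a, b) and which loss function yield the smaller MSE; this sketch only shows where the Beta prior enters the estimator.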


Development of Supervised Machine Learning based Catalog Entry Classification and Recommendation System (지도학습 머신러닝 기반 카테고리 목록 분류 및 추천 시스템 구현)

  • Lee, Hyung-Woo
    • Journal of Internet Computing and Services / v.20 no.1 / pp.57-65 / 2019
  • The Domeggook B2B online shopping mall holds a market share of over 70%, with more than 2 million members and 800,000 items sold per day. However, because identical or similar items are registered under different catalog entries, buyers have difficulty searching for items, and managing such a large B2B mall is also problematic. In this study we therefore developed an automatic catalog-entry classification and recommendation system using a semi-supervised machine learning method over the mall's extensive purchase history. Specifically, when a seller enters item registration information in natural language, KoNLPy morphological analysis is performed and a Naïve Bayes classifier recommends the most suitable catalog entry for the item. As a result, both the search speed and the total sales of the shopping mall could be improved by classifying catalog entries accurately and efficiently.
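The recommendation step described above can be sketched as a Laplace-smoothed Naïve Bayes classifier trained on tokenised item titles. The training data, category names, and tokens below are invented; the real system would first tokenise Korean registration text with KoNLPy.

```python
import math
from collections import Counter, defaultdict

# Toy training set: (tokens of an item title, catalog entry).
train = [
    (["usb", "cable", "charger"], "electronics"),
    (["usb", "memory", "drive"], "electronics"),
    (["cotton", "shirt"], "apparel"),
    (["wool", "shirt", "jacket"], "apparel"),
]

vocab = {t for terms, _ in train for t in terms}
cat_counts = Counter(c for _, c in train)
term_counts = defaultdict(Counter)
for terms, c in train:
    term_counts[c].update(terms)

def recommend(terms):
    """Return the catalog entry with the highest Naive Bayes log-score."""
    best, best_score = None, None
    for c in cat_counts:
        score = math.log(cat_counts[c] / len(train))
        total = sum(term_counts[c].values())
        for t in terms:
            # Laplace smoothing keeps unseen terms from zeroing the score.
            score += math.log((term_counts[c][t] + 1) / (total + len(vocab)))
        if best_score is None or score > best_score:
            best, best_score = c, score
    return best

print(recommend(["usb", "charger"]))   # -> "electronics"
```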

Robustness of Bayes forecast to Non-normality

  • Bansal, Ashok K.
    • Journal of the Korean Statistical Society / v.7 no.1 / pp.11-16 / 1978
  • Bayesian procedures are in vogue for revising the parameter estimates of a forecasting model in the light of actual time-series data. In this paper, we study the Bayes forecast for demand, and the associated risk, when (a) the 'noise' and (b) the mean demand rate in a constant-process model have moderately non-normal probability distributions.
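A sketch of the Bayesian revision step under the standard normal assumptions (the paper studies what happens when these are relaxed): a Normal(m0, v0) prior on the mean demand rate combined with n observations of known noise variance v gives a Normal posterior whose mean is a precision-weighted average. The numbers are illustrative.

```python
# Normal-normal conjugate update of the mean demand rate in a
# constant-process model: posterior mean is a precision-weighted
# average of the prior mean and the sample mean.

def posterior_mean(m0, v0, xbar, v, n):
    prec_prior, prec_data = 1.0 / v0, n / v
    return (prec_prior * m0 + prec_data * xbar) / (prec_prior + prec_data)

# Prior belief: 100 units/week (variance 25); then 5 weeks observed
# averaging 112 units/week with noise variance 100.
print(posterior_mean(100.0, 25.0, 112.0, 100.0, 5))
```

With non-normal noise, this weighted-average form is no longer exact, which is precisely the robustness question the paper examines.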


Development of Algorithms for Sorting Peeled Garlic Using Machine Vision (I) - Comparison of sorting accuracy between Bayes discriminant function and neural network - (기계시각을 이용한 박피 마늘 선별 알고리즘 개발 (I) - 베이즈 판별함수와 신경회로망에 의한 선별 정확도 비교 -)

  • 이상엽;이수희;노상하;배영환
    • Journal of Biosystems Engineering / v.24 no.4 / pp.325-334 / 1999
  • The aim of this study was to lay the groundwork for a machine-vision sorting system for peeled garlic. Images of various garlic samples (sound, partially defective, discolored, rotten, and un-peeled) were obtained with a B/W machine vision system. Sorting factors based on normalized histograms and statistical analysis (the STEPDISC method) separated the garlic samples well. Bayes discriminant function and neural network sorting algorithms were developed from the sample images and tested on various garlic samples. The sorting algorithms classified the samples with average accuracies of 88.4% for the Bayes discriminant function and 93.2% for the neural network.
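A Bayes discriminant function of the kind compared above can be sketched as follows: model each garlic class's feature (here a single hypothetical brightness value; the paper uses histogram-based factors) as normally distributed, and assign a sample to the class maximising log prior plus log density. The class priors, means, and variances below are invented.

```python
import math

# class: (prior, mean, variance) of a 1-D brightness feature -- illustrative only.
classes = {
    "sound":      (0.6, 180.0, 100.0),
    "discolored": (0.3, 140.0, 225.0),
    "rotten":     (0.1, 100.0, 400.0),
}

def discriminant(x, prior, mu, var):
    # log prior + log normal density, dropping the shared constant term
    return math.log(prior) - 0.5 * math.log(var) - (x - mu) ** 2 / (2 * var)

def sort_garlic(x):
    """Assign the sample to the class with the largest discriminant score."""
    return max(classes, key=lambda c: discriminant(x, *classes[c]))

print(sort_garlic(175.0))   # near the "sound" mean
print(sort_garlic(105.0))   # near the "rotten" mean
```

The neural network in the paper replaces this parametric decision rule with a learned non-linear boundary, which is where its higher accuracy comes from.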


Improving Naïve Bayes Text Classifiers with Incremental Feature Weighting (점진적 특징 가중치 기법을 이용한 나이브 베이즈 문서분류기의 성능 개선)

  • Kim, Han-Joon;Chang, Jae-Young
    • The KIPS Transactions:PartB / v.15B no.5 / pp.457-464 / 2008
  • In real-world operational environments, most text classification systems suffer from insufficient training documents and no prior knowledge of the feature space. In this regard, Naïve Bayes is known to be an appropriate algorithm for operational text classification, since its classification model can evolve easily by incrementally updating the pre-learned model and feature space. This paper proposes a technique for improving the Naïve Bayes classifier through a feature-weighting strategy. The basic idea is that parameter estimation in Naïve Bayes should consider the degree of feature importance as well as the feature distribution. A more accurate classification model can be developed by incorporating feature weights into the Naïve Bayes learning algorithm, rather than performing learning with a reduced feature set. In addition, we extend a conventional feature-update algorithm to incremental feature weighting in a dynamic operational environment. Experiments on various document collections show that the traditional Naïve Bayes classifier is significantly improved by the proposed technique.

Morpheme Recovery Based on Naïve Bayes Model (NB 모델을 이용한 형태소 복원)

  • Kim, Jae-Hoon;Jeon, Kil-Ho
    • The KIPS Transactions:PartB / v.19B no.3 / pp.195-200 / 2012
  • Because Korean is agglutinative, spelling changes of various forms must be recovered into base forms during morphological analysis, and part-of-speech (POS) tagging is difficult without morphological analysis. This is one of the notorious problems in Korean morphological analysis and has been addressed with morpheme recovery rules, which generate morphological ambiguity that is then resolved by POS tagging. In this paper, we propose a morpheme recovery scheme based on machine learning methods, namely Naïve Bayes models. The input features of the models are the context surrounding the syllable where the spelling change occurs, and the output categories are the recovered syllables. A POS tagging system with the proposed model achieved an F1-score of 97.5% on the ETRI tree-tagged corpus, so the proposed model can be judged very useful for handling morpheme recovery in Korean.

Evaluation of Genome Based Estimated Breeding Values for Meat Quality in a Berkshire Population Using High Density Single Nucleotide Polymorphism Chips

  • Baby, S.;Hyeong, K.E.;Lee, Y.M.;Jung, J.H.;Oh, D.Y.;Nam, K.C.;Kim, T.H.;Lee, H.K.;Kim, Jong-Joo
    • Asian-Australasian Journal of Animal Sciences / v.27 no.11 / pp.1540-1547 / 2014
  • The accuracy of genomic estimated breeding values (GEBV) was evaluated for sixteen meat quality traits in a Berkshire population (n = 1,191) collected from the Dasan breeding farm, Namwon, Korea. The animals were genotyped with the Illumina porcine 62K single nucleotide polymorphism (SNP) bead chip, from which 36,605 SNPs remained after quality control. Two methods were applied to evaluate GEBV accuracy, i.e. the genome-based linear unbiased prediction method (GBLUP) and Bayes B, using the ASREML 3.0 and GenSel 4.0 software, respectively. For each trait, the data were split into training (both genotypes and phenotypes) and testing (genotypes only) sets. Under the GBLUP model, GEBV accuracies for the training data ranged from 0.42±0.08 for collagen to 0.75±0.02 for water holding capacity, with an average of 0.65±0.04 across all traits. Under the Bayes B model, GEBV accuracy ranged from 0.10±0.14 for the National Pork Producers Council (NPPC) marbling score to 0.76±0.04 for drip loss, with an average of 0.49±0.10. For the testing samples, GEBV accuracy averaged 0.46±0.10 under the GBLUP model, ranging from 0.20±0.18 for protein to 0.65±0.06 for drip loss. Under the Bayes B model, GEBV accuracy ranged from 0.04±0.09 for the NPPC marbling score to 0.72±0.05 for drip loss, with an average of 0.38±0.13. GEBV accuracy increased with the size of the training data and with heritability. In general, GEBV accuracies under the Bayes B model were lower than under the GBLUP model, especially when the training sample was small. Our results suggest that a much larger training sample is needed to obtain better GEBV accuracies for the testing samples.
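The GBLUP side of this comparison rests on a genomic relationship matrix built from the SNP genotypes. A minimal sketch of VanRaden-style G = ZZ'/(2·Σp(1-p)) from 0/1/2 allele counts, with an invented four-animal, five-SNP genotype matrix:

```python
# Genomic relationship matrix from allele-count genotypes:
# centre each SNP column by twice its allele frequency, then
# scale the cross-products by 2 * sum of p(1-p).

def grm(genotypes):
    n, m = len(genotypes), len(genotypes[0])
    p = [sum(row[j] for row in genotypes) / (2.0 * n) for j in range(m)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    z = [[genotypes[i][j] - 2.0 * p[j] for j in range(m)] for i in range(n)]
    return [[sum(z[i][k] * z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

# Four animals, five SNPs coded as allele counts 0/1/2 (illustrative).
geno = [
    [0, 1, 2, 1, 0],
    [1, 1, 2, 0, 0],
    [2, 0, 1, 1, 1],
    [2, 1, 0, 2, 1],
]
g = grm(geno)
print(g[0][0])   # diagonal: an animal's genomic "self-relationship"
```

GBLUP then fits all SNPs with a common shrinkage via G, whereas Bayes B lets most SNPs have zero effect, which is one reason their accuracies diverge on small training sets.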