• Title/Summary/Keyword: Statistical testing


Empirical Statistical Power for Testing Multilocus Genotypic Effects under Unbalanced Designs Using a Gibbs Sampler

  • Lee, Chae-Young
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.25 no.11
    • /
    • pp.1511-1514
    • /
    • 2012
  • Epistasis, which may explain a large portion of the phenotypic variation in complex economic traits of animals, has been ignored in many genetic association studies. A Bayesian method was introduced to draw inferences about multilocus genotypic effects based on their marginal posterior distributions obtained by a Gibbs sampler. A simulation study was conducted to provide statistical power under various unbalanced designs using this method. Data were simulated for combinations of number of loci, within-genotype variance, and sample size in unbalanced designs with or without null combined-genotype cells. Mean empirical statistical power was estimated for testing the posterior mean estimate of the combined genotype effect. A practical example of obtaining empirical statistical power estimates with a given sample size was provided under unbalanced designs. These empirical statistical powers would be useful for determining an optimal design when interactive associations of multiple loci with complex phenotypes are examined.
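
The power calculation described above is, at heart, a Monte Carlo exercise: simulate data under an assumed combined-genotype effect, apply the test, and count rejections. The Python sketch below illustrates that workflow for an unbalanced design with an empty cell; it uses a plain one-way ANOVA F-test as a stand-in for the paper's Gibbs-sampler-based posterior test, and all cell means, cell sizes, and the error variance are made-up values.

```python
# Hypothetical sketch: empirical power for a two-locus combined-genotype effect
# under an unbalanced design. A one-way ANOVA F-test stands in here for the
# paper's Gibbs-sampler posterior criterion; the Monte Carlo loop is the same.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def empirical_power(cell_means, cell_sizes, sigma=1.0, n_rep=1000, alpha=0.05):
    """Fraction of simulated datasets in which the combined-genotype
    effect is detected (one-way ANOVA p-value < alpha)."""
    hits = 0
    for _ in range(n_rep):
        groups = [rng.normal(m, sigma, size=n)
                  for m, n in zip(cell_means, cell_sizes)]
        _, p = stats.f_oneway(*groups)
        hits += p < alpha
    return hits / n_rep

# Unbalanced design: 9 combined-genotype cells from two biallelic loci,
# with small and empty cells (empty cells are dropped before testing).
means = [0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.0, 0.5, 1.0]   # made-up cell effects
sizes = [30, 25, 5, 28, 20, 4, 6, 5, 0]                  # made-up, unbalanced
cells = [(m, n) for m, n in zip(means, sizes) if n > 0]
print(empirical_power(*zip(*cells)))   # estimated power for this hypothetical design
```

Replacing the F-test call with a posterior-probability criterion from a Gibbs sampler recovers the paper's setting without changing the simulation loop.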

Estimation in Group Testing when a Dilution Effect exists

  • Kwon, Se-Hyug
    • Communications for Statistical Applications and Methods
    • /
    • v.13 no.3
    • /
    • pp.787-794
    • /
    • 2006
  • In group testing, the test unit consists of a group of individuals, and each group is tested in order to classify units from a population as infected or non-infected, or to estimate the infection rate. If a test group is positive, one or more individuals in the group are presumed to be infected. Group testing assumes that classification of a group as positive or negative is without error. In practice, however, false negatives resulting from dilution effects occur often, especially in clinical research. In this paper, dilution-effect models in group testing are discussed, and estimation methods for the infection rate when a dilution effect exists are proposed.
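
As a worked illustration of why dilution matters (generic notation, not the paper's specific model): with a perfect assay and pools of size k, the probability that a pool tests positive depends only on the prevalence p and the MLE has a closed form, whereas a dilution effect makes the detection probability depend on how many infected individuals the pool contains, so the estimator must be obtained numerically.

```latex
% Perfect assay, pool size k, prevalence p, Y positive pools out of n:
\Pr(\text{pool positive}) = 1-(1-p)^{k},
\qquad
\hat{p}_{\mathrm{MLE}} = 1-\left(1-\frac{Y}{n}\right)^{1/k}.
% With a dilution effect, a pool containing j infected individuals is detected
% only with sensitivity S_k(j) \le 1, so
\Pr(\text{pool positive}) = \sum_{j=1}^{k}\binom{k}{j}\,p^{j}(1-p)^{k-j}\,S_k(j),
% and \hat{p} is found by maximizing the resulting binomial likelihood numerically.
```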

A Review on the Use of Effect Size in Nursing Research (간호학 연구에서 효과크기의 사용에 대한 고찰)

  • Kang, Hyuncheol;Yeon, Kyupil;Han, Sang-Tae
    • Journal of Korean Academy of Nursing
    • /
    • v.45 no.5
    • /
    • pp.641-649
    • /
    • 2015
  • Purpose: The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. Methods: For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size were explained. Results: Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example, utilizing G*Power 3, the most widely used software for calculating sample size. Conclusion: It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
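
As a small worked example of the kind of calculation the review covers (the numbers are chosen here for illustration, not taken from the article): for a two-sample t-test, Cohen's d standardizes the mean difference, and a normal-approximation formula gives the per-group sample size, which is close to what G*Power reports from the exact noncentral-t calculation.

```latex
% Cohen's d for two independent groups:
d = \frac{\bar{x}_1-\bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1-1)s_1^{2}+(n_2-1)s_2^{2}}{n_1+n_2-2}} .
% Approximate per-group sample size for a two-sided test (level \alpha, power 1-\beta):
n \approx \frac{2\,(z_{1-\alpha/2}+z_{1-\beta})^{2}}{d^{2}}
  = \frac{2\,(1.96+0.84)^{2}}{0.5^{2}} \approx 63
% for d = 0.5, \alpha = 0.05, power 0.80; G*Power's exact calculation gives 64 per group.
```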

Multiple Group Testing Procedures for Analysis of High-Dimensional Genomic Data

  • Ko, Hyoseok;Kim, Kipoong;Sun, Hokeun
    • Genomics & Informatics
    • /
    • v.14 no.4
    • /
    • pp.187-195
    • /
    • 2016
  • In genetic association studies with high-dimensional genomic data, multiple group testing procedures are often required in order to identify disease/trait-related genes or genetic regions, where multiple genetic sites or variants are located within the same gene or genetic region. However, statistical testing procedures based on individual tests suffer from multiple testing issues such as control of the family-wise error rate and dependent tests. Moreover, detecting only a few genes associated with a phenotype outcome among tens of thousands of genes is the main interest in genetic association studies. For this reason, regularization procedures, where a phenotype outcome is regressed on all genomic markers and the regression coefficients are estimated from a penalized likelihood, have been considered a good alternative approach to the analysis of high-dimensional genomic data. However, the selection performance of regularization procedures has rarely been compared with that of statistical group testing procedures. In this article, we performed extensive simulation studies in which commonly used group testing procedures such as principal component analysis, Hotelling's $T^2$ test, and the permutation test are compared with the group lasso (least absolute shrinkage and selection operator) in terms of true positive selection. We also applied all methods considered in the simulation studies to identify genes associated with ovarian cancer from over 20,000 genetic sites generated from the Illumina Infinium HumanMethylation27K BeadChip. We found a large discrepancy between the genes selected by the multiple group testing procedures and those selected by the group lasso.
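
For reference, a standard form of the group lasso objective compared above (the article may use a logistic-loss variant for a binary outcome) is the Yuan-Lin formulation, where each gene or genetic region g contributes a block of p_g markers X_g with coefficient block β_g; genes whose entire block is shrunk to zero are not selected.

```latex
\hat{\beta} = \arg\min_{\beta}\;
\frac{1}{2}\Bigl\|\,y-\sum_{g=1}^{G} X_g\beta_g\Bigr\|_2^{2}
\;+\;\lambda\sum_{g=1}^{G}\sqrt{p_g}\,\|\beta_g\|_2 .
% \lambda controls sparsity at the group (gene) level; the \ell_2 norm on each
% block keeps or drops whole genes together, unlike the ordinary lasso.
```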

Weibull Statistical Analysis of Elevated Temperature Tensile Strength and Creep Rupture Time in Stainless Steels (스테인리스 강의 고온 인장강도와 크리프 파단시간의 와이블 통계 해석)

  • Jung, W.T.;Kim, Y.S.;Kim, S.J.
    • Journal of Power System Engineering
    • /
    • v.14 no.4
    • /
    • pp.56-62
    • /
    • 2010
  • This paper is concerned with the stochastic nature of elevated-temperature tensile strength and creep rupture time in 18Cr-8Ni stainless steels. A Weibull statistical analysis using the NRIM data sheets was performed to investigate the effect of testing temperature on the variability of elevated-temperature tensile strength and creep rupture time. The distributions of tensile strength and creep rupture time were well described by the 2-parameter Weibull distribution. The shape and scale parameters of the Weibull distribution of tensile strength decreased with increasing testing temperature. For the creep rupture time, the shape parameter generally decreased with increasing testing temperature.
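
For readers unfamiliar with the notation, the 2-parameter Weibull distribution used above has a shape parameter β and a scale parameter η; the usual Weibull-plot linearization is how such strength and rupture-time data are checked for Weibull behaviour (general background, not a detail taken from the paper).

```latex
% 2-parameter Weibull CDF for strength or rupture time t > 0:
F(t) = 1-\exp\!\left[-\left(\frac{t}{\eta}\right)^{\beta}\right],
% which linearizes on a Weibull probability plot as
\ln\!\bigl[-\ln\bigl(1-F(t)\bigr)\bigr] = \beta\ln t - \beta\ln\eta ,
% so the slope estimates the shape \beta (scatter of the data)
% and the intercept fixes the scale \eta (characteristic value).
```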

A New Methodology for Software Reliability based on Statistical Modeling

  • Avinash S;Y.Srinivas;P.Annan naidu
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.157-161
    • /
    • 2023
  • Reliability is one of the quantifiable quality attributes of software. To assess reliability, software reliability growth models (SRGMs) are used at different test times based on statistical learning models. Traditional time-based SRGMs may not be sufficient in all situations, and such models cannot recognize errors in small and medium-sized applications. Numerous traditional reliability measures are used to test for software errors during application development and testing. In the software testing and maintenance phase, however, new errors are taken into consideration in real time in order to decide the reliability estimate. In this article, we suggest using the Weibull model as a computational approach to address the problem of software reliability modeling. In the suggested model, a new distribution is proposed to improve the reliability estimation method. We evaluate the developed model and compare its efficiency with other popular software reliability growth models from the research literature. Our assessment results show that the proposed model outperforms the S-shaped Yamada, Generalized Poisson, and NHPP models.
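
As background for the Weibull approach mentioned above (a generic form, not necessarily the exact model proposed in the paper), Weibull-type software reliability growth models are usually written as nonhomogeneous Poisson processes whose mean value function m(t) gives the expected cumulative number of failures detected by test time t.

```latex
% Weibull-type NHPP mean value function and failure intensity:
m(t) = a\left[1-\exp\!\left(-\left(\tfrac{t}{b}\right)^{c}\right)\right],
\qquad
\lambda(t) = \frac{dm(t)}{dt}
           = \frac{a\,c}{b}\left(\tfrac{t}{b}\right)^{c-1}
             \exp\!\left(-\left(\tfrac{t}{b}\right)^{c}\right),
% where a is the expected total number of faults, b a scale parameter, and
% c a shape parameter (c = 1 recovers the exponential Goel-Okumoto model).
```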

Bayesian Inference for Multinomial Group Testing

  • Heo, Tae-Young;Kim, Jong-Min
    • Communications for Statistical Applications and Methods
    • /
    • v.14 no.1
    • /
    • pp.81-92
    • /
    • 2007
  • This paper considers trinomial group testing concerned with the classification of N given units into one of k disjoint categories. We propose Bayesian inference for estimating the individual category proportions using the trinomial group testing model proposed by Bar-Lev et al. (2005). We compare the relative efficiency (RE), based on the mean squared error (MSE), of the MLE and Bayes estimators under various prior information. The impact of different prior specifications on the estimates is also investigated using selected prior distributions. The impact of different priors on the Bayes estimates is modest when the sample size and group size are large.
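
The comparison criterion named above is the standard one: relative efficiency is a ratio of mean squared errors, so values above 1 favour the Bayes estimator (the direction of the ratio is a convention and may be defined the other way in the paper).

```latex
\mathrm{MSE}(\hat{\theta}) = E\bigl[(\hat{\theta}-\theta)^{2}\bigr]
                           = \mathrm{Var}(\hat{\theta}) + \bigl[\mathrm{bias}(\hat{\theta})\bigr]^{2},
\qquad
\mathrm{RE} = \frac{\mathrm{MSE}(\hat{\theta}_{\mathrm{MLE}})}{\mathrm{MSE}(\hat{\theta}_{\mathrm{Bayes}})} .
```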

Parametric inference on step-stress accelerated life testing for the extension of exponential distribution under progressive type-II censoring

  • El-Dina, M.M. Mohie;Abu-Youssef, S.E.;Ali, Nahed S.A.;Abd El-Raheem, A.M.
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.4
    • /
    • pp.269-285
    • /
    • 2016
  • In this paper, a simple step-stress accelerated life test (ALT) under progressive type-II censoring is considered. Progressive type-II censoring and accelerated life testing are employed to shorten test time and lower test expenses. The cumulative exposure model is assumed when the lifetime of test units follows an extension of the exponential distribution. Maximum likelihood estimates (MLEs) and Bayes estimates (BEs) of the model parameters are also obtained. In addition, a real dataset is analyzed to illustrate the proposed procedures. Approximate, bootstrap and credible confidence intervals (CIs) of the estimators are then derived. Finally, the accuracy of the MLEs and BEs for the model parameters is investigated through simulation studies.
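
The cumulative exposure model assumed above has a simple form for a single stress change: units accumulate exposure under the first stress, and after the change time τ their remaining life follows the second-stress distribution, shifted so that the CDF is continuous at τ. This is the standard Nelson formulation; the paper applies it with the extension-of-exponential lifetime distribution.

```latex
% Simple step-stress with stress change at time \tau,
% lifetime CDFs F_1 (low stress) and F_2 (high stress):
G(t) =
\begin{cases}
F_1(t), & 0 \le t < \tau,\\[4pt]
F_2\bigl(t-\tau+s\bigr), & t \ge \tau,
\end{cases}
\qquad \text{where } s \text{ solves } F_2(s)=F_1(\tau).
```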

A minimum cost sampling inspection plan for destructive testing (破壞檢査時의 最小費用 샘플링 檢査方式)

  • 趙星九;裵道善
    • Journal of the Korean Statistical Society
    • /
    • v.7 no.1
    • /
    • pp.27-43
    • /
    • 1978
  • This paper deals with the problem of obtaining a minimum-cost acceptance sampling plan for destructive testing. The cost model is constructed under the assumption that the sampling procedure takes the following form: 1) lots rejected on the first sample are screened with a non-destructive test; 2) the screening is assumed to be imperfect, and therefore, after the screening, a second sample is taken to determine whether to accept the lot or to scrap it. The usual sampling procedures for destructive testing can be regarded as special cases of the above. Utilizing Hald's Bayesian approach, procedures for finding the globally optimal sampling plans are given. However, when the lot size is large, the globally optimal plan is very difficult to obtain even with the aid of an electronic computer. Therefore, a method of finding a suboptimal plan is suggested. An example with a uniform prior is also given.
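
Although the paper's exact cost function is not reproduced here, minimum-cost plans of this two-stage type are typically chosen by minimizing an expected cost per lot of roughly the following schematic form, averaged over the prior on lot quality. The symbols below are illustrative only, not the paper's notation.

```latex
% Schematic expected cost per lot for a two-stage destructive/screening plan:
E[\text{cost}] \;=\; c_d\,n_1
 \;+\; \Pr(\text{reject at stage 1})\,\bigl[c_s\,(N-n_1) + c_d\,n_2\bigr]
 \;+\; E[\text{loss from accepted defectives and scrapped good lots}],
% where N is the lot size, n_1 and n_2 the first- and second-stage sample sizes,
% c_d the destructive test cost per unit, and c_s the screening cost per unit.
% The optimal plan minimizes this expectation under the assumed prior.
```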


Multiple Testing in Genomic Sequences Using Hamming Distance

  • Kang, Moonsu
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.6
    • /
    • pp.899-904
    • /
    • 2012
  • High-dimensional categorical data models with small sample sizes have not been used extensively for genomic sequences that involve count (or discrete) or purely qualitative responses. A basic task is to identify differentially expressed genes (or positions) among a number of genes. This requires an appropriate test statistic and a corresponding multiple testing procedure, since a multivariate analysis of variance is not feasible. The family-wise error rate (FWER) is not appropriate for testing thousands of genes simultaneously in a multiple testing procedure; the false discovery rate (FDR) is better suited to such multiple testing problems. Data from the 2002-2003 SARS epidemic show that a conventional FDR procedure combined with a proposed test statistic based on a pseudo-marginal approach with Hamming distance performs better.
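
For context on the FDR control referred to above, the standard Benjamini-Hochberg step-up rule (general background, not the paper's specific pseudo-marginal statistic) operates on the ordered p-values of the m gene-level tests.

```latex
% Benjamini-Hochberg step-up procedure at FDR level q:
% order the p-values p_{(1)} \le p_{(2)} \le \dots \le p_{(m)} and set
k^{\ast} = \max\Bigl\{\,k : p_{(k)} \le \tfrac{k}{m}\,q \,\Bigr\},
% then reject the null hypotheses corresponding to p_{(1)},\dots,p_{(k^{\ast})}
% (reject nothing if the set is empty).
```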