• Title/Summary/Keywords: Designing Statistical Test

Search results: 47 items

Designing Statistical Test for Mean of Random Profiles

  • Bahri, Mehrab;Hadi-Vencheh, Abdollah
    • Industrial Engineering and Management Systems / Vol.15 No.4 / pp.432-445 / 2016
  • A random profile is the result of a process whose output is a function rather than a scalar or vector quantity. Two main dimensions can be recognized in the nature of these objects: "functionality" and "randomness". Valuable research has been conducted on control charts for monitoring such processes, in which a regression approach is applied that focuses on the "randomness" of profiles. Other statistical techniques, such as hypothesis tests for different parameters, comparisons of the parameters of two populations, ANOVA, and DOE, have been postponed thus far because the "functional" nature of profiles is ignored. In this paper, some needed theorems are first proven with an applied approach, so that they are understandable to an engineer unfamiliar with advanced mathematical analysis. Then, as an application, a statistical test is designed for the mean of continuous random profiles. Finally, using experimental operating characteristic curves obtained by computer simulation, it is demonstrated that the presented tests properly recognize deviations from the null hypothesis.
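
A minimal sketch of such a test, assuming profiles observed with iid Gaussian noise on a common grid: an L2-type statistic compares the sample mean profile with the hypothesized mean, and its null distribution is approximated by Monte Carlo. The statistic, noise model, and all numbers are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_stat(profiles, mu0, dt):
    """n times the Riemann-sum integral of (sample mean - mu0)^2."""
    diff = profiles.mean(axis=0) - mu0
    return len(profiles) * np.sum(diff**2) * dt

# hypothetical setup: 30 profiles observed on a uniform grid over [0, 1]
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
mu0 = np.sin(2 * np.pi * t)                      # hypothesized mean profile (H0)
profiles = mu0 + 0.1 * rng.standard_normal((30, t.size))

obs = l2_stat(profiles, mu0, dt)

# null distribution by Monte Carlo under H0 (iid Gaussian noise assumed)
sigma = profiles.std(axis=0, ddof=1).mean()
null = np.array([l2_stat(mu0 + sigma * rng.standard_normal(profiles.shape),
                         mu0, dt) for _ in range(2000)])
print(f"T = {obs:.4f}, p = {np.mean(null >= obs):.3f}")
```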

Development of an Item Selection Method for Test-Construction by using a Relationship Structure among Abilities

  • Kim, Sung-Ho;Jeong, Mi-Sook;Kim, Jung-Ran
    • Communications for Statistical Applications and Methods / Vol.8 No.1 / pp.193-207 / 2001
  • When designing a test set, we need to consider constraints on items that are deemed important by item developers or test specialists. The constraints essentially concern the components of the test domain, or the abilities relevant to a given test set, so if the test domain can be represented in a more refined form, test construction can be carried out more efficiently. We assume that the relationships among task abilities are representable by a causal model and that item response theory (IRT) is not fully available for them. In such a case we cannot apply traditional item selection methods, which are based on the IRT. In this paper, we use entropy as an uncertainty measure for making inferences on task abilities and develop an optimal item selection algorithm that maximally reduces the entropy of the task abilities as items are selected from an item pool.
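
The greedy step of such an algorithm can be sketched as follows, assuming a single binary "mastery" ability and a hypothetical three-item pool (the paper's causal-model structure over several abilities is not reproduced): each candidate item is scored by the expected posterior entropy of the ability given its response, and the item with the smallest score, i.e. the largest expected entropy reduction, is selected.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log2(p))

def expected_posterior_entropy(prior, item):
    """Expected entropy of a binary ability after observing one item.
    prior: P(mastered); item: (P(correct|mastered), P(correct|not mastered))."""
    ent = 0.0
    for correct in (True, False):
        like = np.array([p if correct else 1 - p for p in item])
        joint = like * np.array([prior, 1 - prior])   # joint over ability states
        px = joint.sum()                              # P(this response)
        ent += px * entropy(joint / px)               # weight posterior entropy
    return ent

pool = [(0.90, 0.30), (0.75, 0.50), (0.95, 0.60)]     # hypothetical item pool
scores = [expected_posterior_entropy(0.5, item) for item in pool]
print(scores, "-> pick item", int(np.argmin(scores)))
```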


Designing an Assessment to Measure Students' Inferential Reasoning in Statistics: The First Study, Development of a Test Blueprint

  • Park, Jiyoon
    • 한국수학교육학회지시리즈D:수학교육연구 / Vol.17 No.4 / pp.243-266 / 2013
  • Accompanied by ongoing calls for reform in the statistics curriculum, mathematics and statistics teachers have been purposefully reconsidering the curriculum and the content taught in statistics classes. The changes made center on statistical inference, since teachers recognize that students struggle to understand the ideas and concepts used in statistical reasoning. Despite the efforts to change the curriculum, studies are sparse on characterizing student learning and understanding of statistical inference. Moreover, there are no tools to evaluate students' statistical reasoning in a coherent way. In response to the need for a research instrument, in a series of research studies the researcher developed a reliable and valid measure to assess students' inferential reasoning in statistics (IRS). This paper describes the process of test blueprint development, which was conducted through a review of the literature and expert reviews.

진단검사의 특성 추정을 위한 표본크기 (Sample Size Requirements in Diagnostic Test Performance Studies)

  • 박선일;오태호
    • 한국임상수의학회지 / Vol.32 No.1 / pp.73-77 / 2015
  • There has been increasing attention to sample size requirements in the peer-reviewed medical literature. Accordingly, statistically valid sample size determinations have been described for a variety of medical situations, including diagnostic test accuracy studies. If the sample is too small, the estimate is too inaccurate to be useful. On the other hand, a very large sample would yield an estimate more accurate than required but may be costly and inefficient. Choosing the optimal sample size depends on statistical considerations, such as the desired precision, statistical power, confidence level, and prevalence of disease, and on non-statistical considerations, such as resources, cost, and sample availability. In a previous paper (J Vet Clin 2012; 29: 68-77) we briefly described the statistical theory behind sample size calculations and provided practical methods of calculating sample size in different situations for different research purposes. This review describes how to calculate sample sizes when assessing diagnostic test performance measures such as sensitivity and specificity alone. Also included are tables and formulae to help researchers design diagnostic test studies and calculate sample size in studies evaluating test performance. For complex studies, clinicians are encouraged to consult a statistician for help with design and analysis and an accurate determination of the sample size.
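
For the sensitivity case alone, a widely used normal-approximation formula (e.g., Buderer, 1996) sizes the diseased group for a desired precision and then scales by prevalence; the sketch below is illustrative and may differ in detail from the tables and formulae in the paper.

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    """Total subjects needed to estimate sensitivity within +/- d."""
    z = norm.ppf(1 - alpha / 2)
    n_diseased = z**2 * se * (1 - se) / d**2   # diseased subjects required
    return ceil(n_diseased / prevalence)       # scale up for disease prevalence

# illustrative inputs: expected Se = 0.90, precision 0.05, 10% prevalence
print(n_for_sensitivity(0.90, 0.05, 0.10))     # -> 1383 subjects
```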

계기 검교정간의 보증시험 절차의 개발 (Development of Measurement Assurance Test Procedures between Calibrations)

  • 염봉진;조재균;이동화
    • 산업공학 / Vol.6 No.1 / pp.55-65 / 1993
  • A nonstandard instrument used in the field frequently goes out of calibration due to environmental noise, misuse, aging, etc. A substantial loss may result if such an instrument is used to check product quality and performance. Traditional periodic calibration at the calibration center cannot detect out-of-calibration status while the instrument is in use; therefore, statistical methods need to be developed to check the status of a nonstandard instrument in the field between calibrations. Developed in this paper is a unified measurement assurance model in which statistical calibration at the calibration center and measurement assurance testing in the field are combined. We develop statistical procedures to detect changes in precision and in the coefficients of the calibration equation. Further, computational experiments are conducted to evaluate how the power of the test varies with respect to the parameters involved. Based upon the computational results, we suggest procedures for designing effective measurement assurance tests.
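
As a hedged stand-in for the paper's procedures, one simple field check on precision is a chi-square test comparing the variance of repeated measurements of a check standard against the variance established at the last calibration:

```python
import numpy as np
from scipy.stats import chi2

def precision_changed(x, sigma0, alpha=0.05):
    """Two-sided chi-square test: has measurement precision drifted from
    the standard deviation sigma0 established at the last calibration?"""
    n = len(x)
    stat = (n - 1) * np.var(x, ddof=1) / sigma0**2
    lo, hi = chi2.ppf([alpha / 2, 1 - alpha / 2], df=n - 1)
    return stat < lo or stat > hi

rng = np.random.default_rng(1)
field = 10.0 + 0.08 * rng.standard_normal(20)  # repeated checks of one standard
print(precision_changed(field, sigma0=0.05))   # spread ~doubled, so likely True
```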


반응적응 시험설계법을 이용하는 통계적 해석모델 검증 기법 연구 (A Study on the Statistical Model Validation using Response-adaptive Experimental Design)

  • 정병창;허영철;문석준;김영중
    • 한국소음진동공학회:학술대회논문집 / 한국소음진동공학회 2014년도 추계학술대회 논문집 / pp.347-349 / 2014
  • Model verification and validation (V&V) is a current research topic aimed at building computational models with high predictive capability by addressing the general concepts, processes, and statistical techniques involved. The hypothesis test for validity checking is one model validation technique and gives a guideline for evaluating the validity of a computational model when only limited experimental data exist due to restricted test resources (e.g., time and budget). The hypothesis test for validity checking mainly employs the Type I error, the risk of rejecting a valid computational model, since quantification of the Type II error is not feasible for model validation. However, the Type II error, the risk of accepting an invalid computational model, should be considered for engineered products whose predicted results carry high risk. This paper proposes a technique, named response-adaptive experimental design, to reduce the Type II error by adaptively designing the experimental conditions for the validation experiment. A tire tread block problem and a numerical example are employed to show the effectiveness of the response-adaptive experimental design for validity evaluation.
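
To make the trade-off concrete, here is a small sketch (not the paper's response-adaptive procedure) computing the Type II error of a two-sided z-test on the model-experiment discrepancy; it shows how the risk of accepting a biased model shrinks as validation experiments are added. The bias delta, noise sigma, and sample sizes are illustrative assumptions.

```python
from math import sqrt
from scipy.stats import norm

def type2_error(delta, sigma, n, alpha=0.05):
    """Probability of accepting a model whose true bias is delta,
    under a two-sided z-test with n validation experiments."""
    zc = norm.ppf(1 - alpha / 2)
    shift = delta * sqrt(n) / sigma
    return norm.cdf(zc - shift) - norm.cdf(-zc - shift)

for n in (3, 10, 30):                 # Type II error falls as n grows
    print(n, round(type2_error(delta=0.5, sigma=1.0, n=n), 3))
```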


A New Similarity Measure Based on Intraclass Statistics for Biometric Systems

  • Lee, Kwan-Yong;Park, Hye-Young
    • ETRI Journal / Vol.25 No.5 / pp.401-406 / 2003
  • A biometric system determines the identity of a person by measuring physical features that can distinguish that person from others. Since biometric features have many variations and can easily be corrupted by noise and deformation, it is necessary to apply machine learning techniques to treat the data. When applying conventional machine learning methods to the design of a specific biometric system, however, one first runs into the difficulty of collecting sufficient data for each person to be registered in the system. In addition, there can be an almost infinite number of variations of non-registered data. Therefore, it is difficult to analyze and predict the distributional properties of the real data that the system must deal with in practical applications. These difficulties call for a new framework of identification and verification that is appropriate and efficient for the specific situations of biometric systems. As a preliminary solution, this paper proposes a simple but theoretically well-defined method based on statistical test theory. Our computational experiments on real-world data show that the proposed method has potential for coping with the actual difficulties in biometrics.
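
The general idea of a score standardized by intraclass statistics can be sketched as follows, assuming Euclidean feature distances and a small per-person template set; the specific score and the data are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def intraclass_score(query, templates):
    """Mean query-to-template distance, standardized by the within-class
    distance statistics of the enrolled template set."""
    d = np.linalg.norm(templates - query, axis=1)
    pair = np.linalg.norm(templates[:, None] - templates[None, :], axis=2)
    intra = pair[np.triu_indices(len(templates), k=1)]  # within-class distances
    return (d.mean() - intra.mean()) / (intra.std(ddof=1) + 1e-12)

rng = np.random.default_rng(2)
enrolled = rng.normal(0.0, 1.0, (8, 16))       # 8 templates, 16-D features
genuine = enrolled.mean(0) + rng.normal(0.0, 1.0, 16)
impostor = rng.normal(3.0, 1.0, 16)
# a small score suggests the query behaves like a genuine sample of this class
print(intraclass_score(genuine, enrolled), intraclass_score(impostor, enrolled))
```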


회전기계 고장 진단에 적용한 인공 신경회로망과 통계적 패턴 인식 기법의 비교 연구 (A Comparison of Artificial Neural Networks and Statistical Pattern Recognition Methods for Rotation Machine Condition Classification)

  • 김창구;박광호;기창두
    • 한국정밀공학회지 / Vol.16 No.12 / pp.119-125 / 1999
  • This paper gives an overview of various approaches to designing statistical pattern recognition schemes based on the Bayes discrimination rule, and of artificial neural networks, for rotating machine condition classification. Concerning the Bayes discrimination rule, the paper covers the linear discrimination rule, applied to classification into several multivariate normal distributions with a common covariance matrix, and the quadratic discrimination rule under different covariance matrices. We also describe the k-nearest neighbor method for directly estimating the posterior probability of each class. Five features are extracted from time-domain vibration signals. Employing these five features, a statistical pattern classifier and neural networks were built to detect defects in rotating machines. Four different rotating machine conditions were observed. The effects of the number k and of the neural network structure on monitoring performance were also investigated. To compare the diagnostic performance of the two approaches, their recognition success rates were calculated from the test data. The experiment classifying rotating machine conditions with each method shows that the neural networks achieve the highest recognition rate.
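
A condensed modern analogue of this comparison, using scikit-learn in place of the paper's implementations and synthetic data in place of the five vibration features (the dataset and model parameters are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# stand-in for 5 time-domain features over 4 machine conditions
X, y = make_classification(n_samples=400, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

models = {"LDA (common covariance)": LinearDiscriminantAnalysis(),
          "QDA (per-class covariance)": QuadraticDiscriminantAnalysis(),
          "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
          "neural network": MLPClassifier(hidden_layer_sizes=(20,),
                                          max_iter=2000, random_state=0)}
for name, model in models.items():   # recognition rate on held-out data
    print(name, round(model.fit(Xtr, ytr).score(Xte, yte), 3))
```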


The Useful Techniques to Determine the Prior Odds and the Likelihood Ratios Bayesian Processor in Built-In-Test System

  • Yoo, Wang-Jin;Kim, Kyeong Taek
    • 품질경영학회지 / Vol.24 No.1 / pp.61-72 / 1996
  • It is very important to determine the likelihood ratios and the prior odds when designing a Bayesian processor in a Built-In-Test system. Using traditional statistics, it is not difficult to determine the initial prior odds from field data. For a newly designed system, development testing data or laboratory testing data can be used in place of field data. The likelihood ratios, which play a key role in the Bayesian processor, must be carefully determined based on laboratory testing and statistical techniques. In this paper, methods for expressing and determining the likelihood ratios by geometric areas, tests, and analytical methods are presented.
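
The arithmetic at the heart of such a Bayesian processor is the odds form of Bayes' rule: posterior odds = prior odds × the product of the likelihood ratios of the observed indications. A minimal sketch with hypothetical numbers:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Sequential Bayesian update: each Built-In-Test indication multiplies
    the running odds of a true fault by its likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# hypothetical: prior odds of a fault 1:99; two indications with LR 20 and 5
odds = posterior_odds(1 / 99, [20.0, 5.0])
print(odds, odds / (1 + odds))   # posterior odds ~1.01, P(true fault) ~0.50
```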


Sample Size and Statistical Power Calculation in Genetic Association Studies

  • Hong, Eun-Pyo;Park, Ji-Wan
    • Genomics & Informatics / Vol.10 No.2 / pp.117-122 / 2012
  • A sample size with sufficient statistical power is critical to the success of genetic association studies that aim to detect causal genes of human complex diseases. Genome-wide association studies require much larger sample sizes to achieve adequate statistical power. We estimated the statistical power for increasing numbers of markers analyzed and compared the sample sizes required in case-control studies and case-parent studies. We computed the effective sample size and statistical power using the Genetic Power Calculator. An analysis using a larger number of markers requires a larger sample size. Testing a single-nucleotide polymorphism (SNP) marker requires 248 cases, while testing 500,000 SNPs and 1 million markers requires 1,206 and 1,255 cases, respectively, under the assumptions of an odds ratio of 2, 5% disease prevalence, 5% minor allele frequency, complete linkage disequilibrium (LD), a 1:1 case/control ratio, and a 5% error rate in an allelic test. Under a dominant model, a smaller sample size is required to achieve 80% power than under other genetic models. We found that a much smaller sample size is required with a strong effect size, a common SNP, and increased LD. In addition, studying a common disease with a 1:4 case-control ratio is one way to achieve higher statistical power. We also found that case-parent studies require more samples than case-control studies. Although we have not covered all plausible study designs, the estimates of sample size and statistical power computed under the various assumptions in this study may be useful for determining the sample size when designing a population-based genetic association study.
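
As a rough cross-check of this logic, the sketch below computes the cases needed for a 1:1 case-control allelic test at a Bonferroni-corrected significance level, using the standard two-proportion normal approximation; it will not reproduce the Genetic Power Calculator figures exactly, and all inputs are illustrative.

```python
from math import ceil, sqrt
from scipy.stats import norm

def cases_needed(p0, odds_ratio, n_tests, power=0.8, alpha=0.05):
    """Cases for a 1:1 case-control allelic test, Bonferroni-corrected."""
    p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)  # case allele frequency
    a = alpha / n_tests                                # corrected alpha
    za, zb = norm.ppf(1 - a / 2), norm.ppf(power)
    pbar = (p0 + p1) / 2
    n_alleles = ((za * sqrt(2 * pbar * (1 - pbar))
                  + zb * sqrt(p0 * (1 - p0) + p1 * (1 - p1)))**2
                 / (p1 - p0)**2)
    return ceil(n_alleles / 2)                         # alleles -> individuals

for m in (1, 500_000, 1_000_000):  # more markers -> stricter alpha -> more cases
    print(m, cases_needed(p0=0.05, odds_ratio=2, n_tests=m))
```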