• Title/Summary/Keyword: statistical confidence

Simultaneous Tests with Combining Functions under Normality

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods / v.22 no.6 / pp.639-646 / 2015
  • We propose simultaneous tests for the mean and variance under the normality assumption. After formulating the null hypothesis and its alternative, we construct test statistics based on the individual p-values of the partial tests with combining functions and derive the null distributions of the combining functions. We then illustrate our procedure with industrial data and compare the efficiency of the combining functions against the individual partial tests by obtaining empirical powers through a simulation study. A discussion then follows on the intersection-union test with a combining function and on the simultaneous confidence region as a simultaneous inference; in addition, we discuss weighted functions and applications to statistical quality control. Finally, we comment on nonparametric simultaneous tests.
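
The abstract does not reproduce the paper's specific combining functions; as a sketch of the general idea, Fisher's classical product method combines independent partial-test p-values into one statistic with a known null distribution (the function name and the p-values below are illustrative only):

```python
import math

def fisher_combine(p_values):
    """Combine partial-test p-values with Fisher's product statistic.

    T = -2 * sum(log p_i) follows a chi-square distribution with
    2k degrees of freedom under the null (independent partial tests).
    """
    k = len(p_values)
    t = -2.0 * sum(math.log(p) for p in p_values)
    # The chi-square survival function has a closed form for even df = 2k:
    # P(X > t) = exp(-t/2) * sum_{j=0}^{k-1} (t/2)^j / j!
    half = t / 2.0
    tail = math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))
    return t, tail

# Two hypothetical partial-test p-values (e.g. mean test and variance test)
stat, p_combined = fisher_combine([0.04, 0.10])
```

The closed-form tail avoids any dependence on a statistics library, since 2k degrees of freedom is always even here.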

Note on Properties of Noninformative Priors in the One-Way Random Effect Model

  • Kang, Sang Gil;Kim, Dal Ho;Cho, Jang Sik
    • Communications for Statistical Applications and Methods / v.9 no.3 / pp.835-844 / 2002
  • For the one-way random effect model, when the ratio of the variance components is of interest, Bayesian analysis is often appropriate. In this paper, we develop noninformative priors for the ratio of the variance components under the balanced one-way random effect model. We reveal that the second-order matching prior matches the alternative coverage probabilities up to the second order (Mukerjee and Reid, 1999) and is an HPD (highest posterior density) matching prior. It turns out that among all the reference priors, only the one-at-a-time reference prior satisfies the second-order matching criterion. Finally, we show that the one-at-a-time reference prior produces confidence sets with shorter expected length than the other reference priors and the Cox and Reid (1987) adjustment.

A Study on Methods of Quality Check for Digital Basemaps using Statistical Methods for the Quality Control (통계적 품질관리기법을 도입한 수치지도의 검수방법에 관한 연구)

  • 김병국;서현덕
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.17 no.1 / pp.79-86 / 1999
  • In this study, we investigated methods of quality check for digital basemaps and proposed effective checking methods. We applied new statistical quality control methods to the quality check of digital basemaps, proposing 2-stage complete sampling and 2-stage cluster sampling methods to improve the present statistical method of quality check (the 1-stage complete sampling method). We estimated the error rate and the number of omitted objects using simulated data on all delivered digital basemaps, and estimated their variances. We could then determine confidence intervals for the error rate and the number of omitted objects.
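
An interval estimate of the kind described can be sketched with a generic cluster-sampling estimator (an illustrative formulation, not the paper's exact estimator; the per-cluster error proportions below are hypothetical):

```python
import math

def cluster_error_rate_ci(cluster_props, z=1.96):
    """Estimate an overall error rate from per-cluster error proportions
    (e.g. one proportion per sampled group of map sheets) and build a
    normal-approximation confidence interval from the between-cluster
    variance."""
    n = len(cluster_props)
    mean = sum(cluster_props) / n
    # Between-cluster sample variance of the proportions
    var = sum((p - mean) ** 2 for p in cluster_props) / (n - 1)
    se = math.sqrt(var / n)
    return mean, (mean - z * se, mean + z * se)

# Hypothetical error proportions for four sampled clusters of map sheets
rate, (lo, hi) = cluster_error_rate_ci([0.02, 0.04, 0.03, 0.05])
```

With more clusters, the between-cluster variance term stabilizes and the interval narrows accordingly.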

A Study on statistical inference on IL-2 titer (IL-2 역가의 통계적 추정에 관한 연구)

  • 박래현;박석영;이석훈
    • The Korean Journal of Applied Statistics / v.2 no.2 / pp.27-35 / 1989
  • This article deals with statistical inference on the titer of Interleukin-2 (IL-2), whose clinical applications to cancer immunotherapy and some immunodeficiency diseases have been widely tried. A linear model and a Bayesian approach are used to explain the bioassay that measures IL-2 activity from a patient; an inference procedure, including confidence intervals for the patient's IL-2 titer obtained through comparison with the standard IL-2, is suggested and illustrated with a real example.

A Logistic Regression Analysis of Two-Way Binary Attribute Data (이원 이항 계수치 자료의 로지스틱 회귀 분석)

  • Ahn, Hae-Il
    • Journal of Korean Society of Industrial and Systems Engineering / v.35 no.3 / pp.118-128 / 2012
  • We address the problem of analyzing two-way binary attribute data using the logistic regression model in order to find a sound statistical methodology. It is demonstrated that the analysis of variance (ANOVA) may not be good enough, especially when the proportion is very low or very high. The logistic transformation of proportion data can help, but is not sound in the statistical sense. Meanwhile, adopting the generalized least squares (GLS) method requires considerable effort to estimate the variance-covariance matrix. The logistic regression methodology, on the other hand, provides sound statistical means of estimating the related confidence intervals and testing the significance of model parameters. Based on simulated data, the efficiency of the estimates is examined to demonstrate the usefulness of the methodology.
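
As a small illustration of why the logistic scale behaves better for very low or very high proportions, a delta-method (Wald) interval built on the logit scale always back-transforms into (0, 1). This is a generic sketch, not the paper's model-based intervals; the counts are hypothetical:

```python
import math

def logit_ci(successes, n, z=1.96):
    """Wald confidence interval for a proportion built on the logit scale.

    Var(logit(p_hat)) ~= 1 / (n * p_hat * (1 - p_hat)) by the delta method;
    back-transforming keeps the interval inside (0, 1) even for extreme p.
    """
    p = successes / n
    logit = math.log(p / (1 - p))
    se = math.sqrt(1.0 / (n * p * (1 - p)))
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))  # inverse logit
    return inv(logit - z * se), inv(logit + z * se)

# A low observed proportion: 10 defectives out of 200
lo, hi = logit_ci(10, 200)
```

A naive Wald interval on the raw proportion scale could dip below zero here; the logit-scale interval cannot.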

Exploiting a statistical threshold for efficiently identifying correlated pairs

  • Kim, Myoung-Ju;Park, Hee-Chang
    • Proceedings of the Korean Data and Information Science Society Conference / 2006.11a / pp.197-203 / 2006
  • Association rule mining searches for interesting relationships among items in a given database. Association rules are frequently used by retail stores to assist in marketing, advertising, floor placement, and inventory control. There are three primary quality measures for association rules: support, confidence, and lift. When an association rule involves many items, much computation time is required. Xiong (2004) studied a new method that computes an upper bound of the support, which is compared with a threshold $\theta$; however, the choice of $\theta$ is subjective. In this paper, we present a statistical, objective criterion for efficiently identifying correlated pairs.
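
The three quality measures named above can be computed directly from a transaction list; a minimal sketch with illustrative basket data:

```python
def rule_measures(transactions, a, b):
    """Compute support, confidence, and lift for the rule a -> b
    over a list of transactions (each transaction is a set of items)."""
    n = len(transactions)
    supp_a = sum(1 for t in transactions if a in t) / n
    supp_b = sum(1 for t in transactions if b in t) / n
    supp_ab = sum(1 for t in transactions if a in t and b in t) / n
    confidence = supp_ab / supp_a          # P(b | a)
    lift = confidence / supp_b             # > 1 suggests positive association
    return supp_ab, confidence, lift

# Illustrative baskets
baskets = [{"milk", "bread"}, {"milk"}, {"bread"}, {"milk", "bread"}]
support, confidence, lift = rule_measures(baskets, "milk", "bread")
```

Here lift below 1 would indicate the two items co-occur less often than independence predicts.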

A Study for Statistical Criterion in Negative Association Rules Using Boolean Analyzer

  • Shin, Sang-Jin;Lee, Keun-Woo
    • Proceedings of the Korean Data and Information Science Society Conference / 2006.11a / pp.145-151 / 2006
  • Association rule mining searches for interesting relationships among items in a given database. Association rules are frequently used by retail stores to assist in marketing, advertising, floor placement, and inventory control. There are three primary quality measures for association rules: support, confidence, and lift. An association rule is an interesting rule among items purchased in a transaction, whereas a negative association rule is an interesting rule involving items that are not purchased. Boolean Analyzer is a method for producing negative association rules using the PIM, but the PIM is subjective. In this paper, we present a statistical, objective criterion for negative association rules using Boolean Analyzer.
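
A negative rule a -> (not b) can be scored with the same support/confidence machinery applied to the *absence* of an item. This is a generic formulation for illustration; the paper's PIM-based criterion is not reproduced here:

```python
def negative_rule_measures(transactions, a, b):
    """Support, confidence, and lift for the negative rule a -> (not b):
    how often item a appears without item b."""
    n = len(transactions)
    supp_a = sum(1 for t in transactions if a in t) / n
    supp_not_b = sum(1 for t in transactions if b not in t) / n
    supp_a_not_b = sum(1 for t in transactions if a in t and b not in t) / n
    confidence = supp_a_not_b / supp_a     # P(not b | a)
    lift = confidence / supp_not_b         # > 1: a discourages buying b
    return supp_a_not_b, confidence, lift

# Illustrative baskets
baskets = [{"milk", "bread"}, {"milk"}, {"bread"}, {"milk", "bread"}]
support, confidence, lift = negative_rule_measures(baskets, "milk", "bread")
```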

Constraining Cosmological Parameters with Gravitational Lensed Quasars in the Sloan Digital Sky Survey

  • Han, Du-Hwan;Park, Myeong-Gu
    • The Bulletin of The Korean Astronomical Society / v.39 no.1 / pp.34-34 / 2014
  • We investigate the constraints on the matter density $\Omega_m$ and the cosmological constant $\Omega_\Lambda$ using the gravitationally lensed QSO (Quasi-Stellar Object) systems from the Sloan Digital Sky Survey (SDSS) by analyzing the distribution of image separations. The main sample consists of 16 QSO lens systems with measured source and lens redshifts. We use a lensing probability that is simply defined by the Gaussian distribution. As statistical tests, we perform a curvature test and constrain the cosmological parameters; the tests account for well-defined selection effects and adopt the parameters of the velocity dispersion function. We also applied the same analysis to Monte Carlo generated mock gravitational lens samples to assess the accuracy and limits of our approach. As a result of these statistical tests, we find that only excessively positively curved universes ($\Omega_m + \Omega_\Lambda > 1$) are rejected at the 95% confidence level. However, if the properties of the galaxies acting as lenses are measured accurately, we confirm that gravitational lensing statistics would be a most powerful tool.

Statistical Inferences on the Lognormal Hazard Function under Type I Censored Data

  • Kil Ho Cho;In Suk Lee;Jeen Kap Choi
    • Communications for Statistical Applications and Methods / v.1 no.1 / pp.20-26 / 1994
  • The hazard function is a non-negative function that measures the propensity of failure in the immediate future, and is frequently used as a decision criterion, especially in replacement decisions. In this paper, we compute approximate confidence intervals for the lognormal hazard function under Type I censored data, and show how to choose the sample size needed to estimate a point on the hazard function with a specified degree of precision. We also provide a table that can be used to compute the sample size.
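
The lognormal hazard $h(t) = f(t)/S(t)$ itself is straightforward to evaluate; a minimal sketch using only the standard library (the paper's confidence intervals and sample-size table are not reproduced):

```python
import math

def lognormal_hazard(t, mu, sigma):
    """Hazard function h(t) = f(t) / S(t) of the lognormal distribution,
    with the standard normal pdf and survival function built from erfc."""
    z = (math.log(t) - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # phi(z)
    survival = 0.5 * math.erfc(z / math.sqrt(2.0))           # 1 - Phi(z)
    return pdf / (sigma * t * survival)

# At t = 1 with mu = 0, sigma = 1: z = 0, so h = phi(0) / 0.5
h = lognormal_hazard(1.0, mu=0.0, sigma=1.0)
```

Note the lognormal hazard is not monotone: it rises to a maximum and then decreases, which is one reason interval estimates at a chosen point t are of practical interest.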

Statistical analysis of the employment future for Korea

  • Lee, SangHyuk;Park, Sang-Gue;Lee, Chan Kyu;Lim, Yaeji
    • Communications for Statistical Applications and Methods / v.27 no.4 / pp.459-468 / 2020
  • We examine the rate of substitution of jobs by artificial intelligence using a score called the "weighted ability rate of substitution" (WARS). WARS is an indicator that represents each job's potential for substitution by automation and digitalization. Since the conventional WARS is sensitive to particular responses from the employees, we consider a robust version of the indicator. In this paper, we propose the individualized WARS, a modification of the conventional WARS, and compute robust averages and confidence intervals for inference. In addition, we use a clustering method to statistically classify jobs according to the proposed individualized WARS. The proposed method is applied to Korean job data, and the proposed WARS is computed for five future years. We also observe that the 747 jobs are well clustered according to their substitution levels.
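
The individualized WARS and its robust averaging are specific to the paper; as a generic illustration of robust averaging, a trimmed mean damps the influence of extreme individual responses (the scores below are hypothetical):

```python
def trimmed_mean(scores, trim=0.2):
    """Robust average: drop the lowest and highest `trim` fraction of the
    values before averaging, so outlying responses barely move the result."""
    xs = sorted(scores)
    k = int(len(xs) * trim)
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

# Hypothetical per-respondent substitution scores; one extreme response
scores = [0.31, 0.35, 0.33, 0.36, 0.98]
robust = trimmed_mean(scores)        # drops 0.31 and 0.98 before averaging
plain = sum(scores) / len(scores)    # pulled upward by the 0.98 outlier
```

The gap between the two averages shows the sensitivity to particular responses that motivates the robust version.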