• Title/Summary/Keyword: jackknife method

Analysis of Repeated Measurement Problem in SP data (SP 데이터의 Repeated Measurement Problem 분석)

  • CHO, Hye-Jin
    • Journal of Korean Society of Transportation / v.20 no.1 / pp.111-119 / 2002
  • One of the advantages of SP methods is the possibility of obtaining a number of responses from each respondent. However, when the repeated observations from each respondent are analysed with a simple modelling method, a potential problem arises: significance is inflated because the repeated observations are treated as independent. This study uses a variety of approaches to explore this issue and to test the robustness of the simple model estimates. Among several approaches, the Jackknife method and Kocur's method were applied; the Jackknife method was implemented using the program JACKKNIFE. The model estimates from the Jackknife method and Kocur's method were compared with the uncorrected estimates to test whether a repeated measurement problem existed and the extent to which it affected the model estimates. The standard errors of the uncorrected model estimates and the Jackknife estimates were also compared. The results reveal that the t-ratios from Kocur's method are much lower than those of the uncorrected method and the Jackknife estimates, indicating that Kocur's method underestimates the significance of the coefficients. The Jackknife method produced almost the same coefficients as the uncorrected model but lower t-ratios. These results indicate that the coefficients of the uncorrected method are accurate but that their significance is somewhat overestimated. This study concludes that the repeated measurement problem did exist in the data but did not affect the model estimation results significantly. It is recommended that such a test become a standard procedure: if an analysis based on the simple uncorrected method turns out to be influenced by the repeated measurement problem, it should be corrected.
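
As a companion to the abstract above, here is a minimal sketch of a delete-one-respondent jackknife for repeated SP observations; the synthetic data, the linear model, and all names are illustrative assumptions, not the paper's JACKKNIFE program.

```python
# Delete-one-respondent jackknife: each replicate drops a whole
# respondent (all of his or her repeated answers), not a single row.
import numpy as np

rng = np.random.default_rng(0)
n_resp, n_reps = 50, 8                        # respondents x repeated choices
resp = np.repeat(np.arange(n_resp), n_reps)   # respondent id for each row
x = rng.normal(size=n_resp * n_reps)
u = rng.normal(size=n_resp)                   # respondent effect inflating naive t-ratios
y = 1.0 + 0.5 * x + u[resp] + rng.normal(size=n_resp * n_reps)

def ols_slope(xv, yv):
    X = np.column_stack([np.ones_like(xv), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

full = ols_slope(x, y)
loo = np.array([ols_slope(x[resp != i], y[resp != i]) for i in range(n_resp)])
pseudo = n_resp * full - (n_resp - 1) * loo   # jackknife pseudo-values
jack_se = pseudo.std(ddof=1) / np.sqrt(n_resp)
print(f"slope={full:.3f}, jackknife SE={jack_se:.3f}, t={full / jack_se:.2f}")
```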

Bootstrap and Delete-d Jackknife Confidence Intervals for Parameters of an Exponential Distribution

  • Kang, Suk-Bok;Cho, Young-Suk
    • Journal of the Korean Data and Information Science Society / v.8 no.1 / pp.59-70 / 1997
  • We introduce several estimators of the location and scale parameters of the two-parameter exponential distribution and compare them by mean square error (MSE). Using parametric bootstrap estimators and the delete-d jackknife, we obtain bootstrap and delete-d jackknife confidence intervals for the location and scale parameters, and we compare the two types of interval by length and coverage probability through the Monte Carlo method.
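
A minimal sketch of a delete-d jackknife confidence interval for the exponential scale parameter, under the assumptions that the sample is small enough to enumerate all size-(n-d) subsamples and that a normal critical value is adequate; the paper's exact estimators are not reproduced.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, d, scale = 20, 3, 2.0
x = rng.exponential(scale, size=n)
theta_hat = x.mean()                          # MLE of the scale parameter

# Evaluate the estimator on every subsample of size n - d.
subs = np.array([x[list(idx)].mean() for idx in combinations(range(n), n - d)])
v = (n - d) / d * ((subs - subs.mean()) ** 2).mean()   # delete-d jackknife variance
half = 1.96 * np.sqrt(v)                      # normal critical value (assumption)
print(f"scale MLE {theta_hat:.3f}, 95% CI ({theta_hat - half:.3f}, {theta_hat + half:.3f})")
```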

Jackknife Variance Estimation under Imputation for Nonrandom Nonresponse with Follow-ups

  • Park, Jinwoo
    • Journal of the Korean Statistical Society / v.29 no.4 / pp.385-394 / 2000
  • Jackknife variance estimation based on adjusted imputed values is provided for the case where nonresponse is nonrandom and follow-up data are available for a subsample of nonrespondents. Both hot-deck and ratio imputation are considered as imputation methods. The performance of the proposed variance estimator under a nonrandom response mechanism is investigated through numerical simulation.
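
A minimal Rao-Shao-style sketch of jackknife variance estimation under imputation, in which the imputed values are re-adjusted in every delete-one replicate, shown here with ratio imputation; the paper's adjustment using follow-up data from nonrespondents is not reproduced, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
x = rng.uniform(1, 3, n)
y = 2 * x + rng.normal(0, 0.5, n)
r = rng.random(n) < 0.7                       # response indicator

def ratio_imputed_mean(keep):
    # Recompute the imputation ratio from the retained respondents,
    # re-impute the nonrespondents, and return the mean.
    xr, yr = x[keep & r], y[keep & r]
    ratio = yr.sum() / xr.sum()
    yi = np.where(r, y, ratio * x)
    return yi[keep].mean()

full_idx = np.ones(n, bool)
theta = ratio_imputed_mean(full_idx)
reps = []
for j in range(n):
    keep = full_idx.copy()
    keep[j] = False                           # delete unit j, then re-impute
    reps.append(ratio_imputed_mean(keep))
reps = np.array(reps)
v_jack = (n - 1) / n * ((reps - reps.mean()) ** 2).sum()
print(f"imputed mean {theta:.3f}, jackknife variance {v_jack:.5f}")
```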

Application of Jackknife Method for Determination of Representative Probability Distribution of Annual Maximum Rainfall (연최대강우량의 대표확률분포형 결정을 위한 Jackknife기법의 적용)

  • Lee, Jae-Joon;Lee, Sang-Won;Kwak, Chang-Jae
    • Journal of Korea Water Resources Association / v.42 no.10 / pp.857-866 / 2009
  • In this study, the basic data consist of annual maximum rainfall records at 56 stations in Korea with more than 30 years of observations. The 14 probability distributions that have been widely used in hydrologic frequency analysis are applied to these data. The method of moments, the method of maximum likelihood, and the probability weighted moments method are used to estimate the parameters, and four tests (the chi-square test, the Kolmogorov-Smirnov test, the Cramer-von Mises test, and the probability plot correlation coefficient (PPCC) test) are used to assess the goodness of fit of the probability distributions. The study emphasizes the necessity of considering the variability of the T-year event estimate in hydrologic frequency analysis and proposes a framework for evaluating probability distribution models. The variability (or estimation error) of the T-year event is used as a model evaluation criterion alongside three goodness-of-fit criteria (SLSC, MLL, and AIC), and the Jackknife method plays an important role in estimating that variability. For the annual maximum rainfall at the 56 stations, the Gumbel distribution is regarded as the best among the two- and three-parameter probability distribution models.
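
A minimal sketch of the role the jackknife plays here: gauging the variability of a T-year rainfall quantile from a Gumbel fit by the method of moments. The synthetic 35-year record and T = 100 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
EULER = 0.5772156649                          # Euler-Mascheroni constant
annual_max = rng.gumbel(loc=150.0, scale=40.0, size=35)   # 35-year record, mm

def gumbel_quantile(sample, T=100):
    # Method-of-moments Gumbel fit, then the T-year event.
    beta = sample.std(ddof=1) * np.sqrt(6) / np.pi
    mu = sample.mean() - EULER * beta
    return mu - beta * np.log(-np.log(1 - 1 / T))

n = annual_max.size
xT = gumbel_quantile(annual_max)
loo = np.array([gumbel_quantile(np.delete(annual_max, i)) for i in range(n)])
se = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
print(f"100-year event {xT:.1f} mm, jackknife SE {se:.1f} mm")
```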

EFFICIENT REPLICATION VARIANCE ESTIMATION FOR TWO-PHASE SAMPLING

  • Kim, Jae-Kwang;Sitter, Randy
    • Proceedings of the Korean Statistical Society Conference / 2002.11a / pp.327-332 / 2002
  • Variance estimation for the regression estimator in a two-phase sample is investigated. A replication variance estimator whose number of replicates equals, or is slightly larger than, the size of the second-phase sample is developed. In these cases, the proposed method is asymptotically equivalent to the full jackknife but uses a smaller number of replicates.
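
A minimal sketch of the regression estimator for two-phase sampling with a delete-one jackknife over the second-phase units only, so that the number of replicates equals the phase-two sample size; this mimics, rather than reproduces, the replication estimator developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N1 = 500                                      # phase 1: x observed
x1 = rng.normal(10, 2, N1)
sub = rng.choice(N1, 60, replace=False)       # phase 2: y observed on a subsample
x2 = x1[sub]
y2 = 3 * x2 + rng.normal(0, 1, x2.size)

def reg_est(x2v, y2v, xbar1):
    # Two-phase regression estimator: adjust the phase-2 mean of y by
    # the gap between the phase-1 and phase-2 means of x.
    b = np.cov(x2v, y2v)[0, 1] / x2v.var(ddof=1)
    return y2v.mean() + b * (xbar1 - x2v.mean())

theta = reg_est(x2, y2, x1.mean())
n2 = x2.size
reps = np.array([reg_est(np.delete(x2, j), np.delete(y2, j), x1.mean())
                 for j in range(n2)])
v = (n2 - 1) / n2 * ((reps - reps.mean()) ** 2).sum()
print(f"regression estimate {theta:.3f}, jackknife SE {np.sqrt(v):.3f}")
```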

Comparison of EM with Jackknife Standard Errors and Multiple Imputation Standard Errors

  • Kang, Shin-Soo
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.1079-1086 / 2005
  • Most discussions of single imputation methods and the EM algorithm concern point estimation of population quantities in the presence of missing values. A second concern is how to obtain standard errors for the point estimates computed from data filled in by single imputation or by the EM algorithm. Here we focus on estimating standard errors that incorporate the additional uncertainty due to nonresponse. Two general approaches to accounting for this uncertainty are considered: the jackknife, a resampling method, and multiple imputation (MI). These two approaches are reviewed and compared through simulation studies.
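
A minimal sketch of the jackknife route to a standard error for a point estimate computed from singly imputed data, here a mean after regression imputation; the bivariate-normal data and MCAR missingness are illustrative assumptions, and wrapping an EM-based estimator instead is analogous.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
x = rng.normal(0, 1, n)
y = 1 + 2 * x + rng.normal(0, 1, n)
miss = rng.random(n) < 0.3                    # y missing completely at random

def imputed_mean(idx):
    # Refit the imputation model on the retained complete cases and
    # recompute the filled-in mean, so each replicate re-imputes.
    xs, ys, ms = x[idx], y[idx], miss[idx]
    b, a = np.polyfit(xs[~ms], ys[~ms], 1)    # slope, intercept
    yf = np.where(ms, a + b * xs, ys)
    return yf.mean()

all_idx = np.arange(n)
theta = imputed_mean(all_idx)
reps = np.array([imputed_mean(np.delete(all_idx, j)) for j in range(n)])
se = np.sqrt((n - 1) / n * ((reps - reps.mean()) ** 2).sum())
print(f"imputed mean {theta:.3f}, jackknife SE {se:.3f}")
```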

Analysis of Nested Case-Control Study Designs: Revisiting the Inverse Probability Weighting Method

  • Kim, Ryung S.
    • Communications for Statistical Applications and Methods / v.20 no.6 / pp.455-466 / 2013
  • In nested case-control studies, the most common way to make inference under a proportional hazards model is the conditional logistic approach of Thomas (1977). Inclusion probability methods are more efficient than Thomas' conditional logistic approach; however, the epidemiology research community has not accepted them as a replacement for Thomas' method. This paper promotes the inverse probability weighting method originally proposed by Samuelsen (1997), in combination with an approximate jackknife standard error that can easily be computed using existing software. Simulation studies demonstrate that this approach yields valid type I error rates and greater power than the conditional logistic approach in nested case-control designs across various sample sizes and magnitudes of the hazard ratio. The method is also generalized to incorporate additional matching and the stratified Cox model. The proposed method is illustrated with data from a cohort of children with Wilms' tumor, studying the association between histological signatures and relapse.
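
A minimal sketch of Samuelsen's (1997) inclusion probabilities and the resulting inverse probability weights for a nested case-control design; the synthetic cohort, the event rate, and one control per case (m = 1) are illustrative assumptions, and the weighted Cox fit and the approximate jackknife standard error are left to existing software.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 200, 1
t = rng.exponential(5, n)                     # exit time (event or censoring)
case = rng.random(n) < 0.1                    # event indicator

p = np.ones(n)                                # cases are included with certainty
case_times = np.sort(t[case])
for i in np.where(~case)[0]:
    surv = 1.0
    for tc in case_times[case_times <= t[i]]: # risk sets subject i belongs to
        n_risk = (t >= tc).sum()
        surv *= 1 - m / (n_risk - 1)          # chance of escaping this sampling
    p[i] = 1 - surv                           # Samuelsen inclusion probability

eligible = p > 0                              # controls ever eligible for sampling
w = np.zeros(n)
w[eligible] = 1 / p[eligible]                 # IPW weights for a weighted Cox model
print(f"mean weight among eligible controls {w[~case & eligible].mean():.2f}")
```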

Check for regression coefficient using jackknife and bootstrap methods in clinical data (잭나이프 및 붓스트랩 방법을 이용한 임상자료의 회귀계수 타당성 확인)

  • Sohn, Ki-Cheul;Shin, Im-Hee
    • Journal of the Korean Data and Information Science Society / v.23 no.4 / pp.643-648 / 2012
  • Many analyses aim to determine the relation between a dependent variable and explanatory variables. Regression analysis is often used for this purpose: it quantifies how strongly the explanatory variables are related to the dependent variable and how much of the data the regression model explains. The validity of a regression model, however, is usually judged only by the coefficient of determination, and the regression coefficients themselves should be validated by other means. This paper introduces methods for checking the validity of regression coefficients using jackknife regression and bootstrap regression on clinical data.
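
A minimal sketch of checking a regression coefficient with both the delete-one jackknife and the case-resampling bootstrap; the synthetic data stand in for the paper's clinical data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 80
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)

def slope(xv, yv):
    X = np.column_stack([np.ones_like(xv), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

b = slope(x, y)
# Jackknife: delete one observation at a time.
loo = np.array([slope(np.delete(x, i), np.delete(y, i)) for i in range(n)])
se_jack = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
# Bootstrap: resample cases with replacement.
B = 1000
boot = np.empty(B)
for k in range(B):
    idx = rng.integers(0, n, n)
    boot[k] = slope(x[idx], y[idx])
print(f"OLS slope {b:.3f}, jackknife SE {se_jack:.3f}, bootstrap SE {boot.std(ddof=1):.3f}")
```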

Jackknife Estimator of Logistic Transformation from Truncated Data

  • Lee, Won-Hyung
    • Journal of the Military Operations Research Society of Korea / v.6 no.2 / pp.129-149 / 1980
  • In medical follow-up, equipment life testing, various military situations, and other fields, one often wishes to estimate survival probability as a function of time, p(t). If the observer is able to record the time of occurrence of the event of interest (called a 'death'), then an empirical, non-parametric estimate may simply be obtained from the fraction of survivors after various elapsed times. The estimation is more complicated when the data are truncated, i.e., when the observer loses track of some individuals before death occurs. The product-limit method of Kaplan and Meier is one way of estimating p(t) when the mechanism causing truncation is independent of the mechanism causing death. This paper proposes jackknife estimators of the logistic transformation and compares them to the product-limit method. A computer simulation is used to generate times of death and truncation from a variety of assumed distributions.
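
A minimal sketch of the product-limit (Kaplan-Meier) estimate of p(t) together with a delete-one jackknife standard error for its logistic transformation logit p(t); the exponential death and truncation times are illustrative assumptions, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100
death = rng.exponential(10, n)
trunc = rng.exponential(25, n)                # independent loss to follow-up
t = np.minimum(death, trunc)
d = death <= trunc                            # True = death observed, False = truncated

def km_surv(times, events, t0):
    # Product-limit estimate at t0 (assumes no tied times).
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = np.arange(len(times), 0, -1)
    keep = (times <= t0) & events
    return np.prod(1 - 1 / at_risk[keep])

t0 = 8.0
p = km_surv(t, d, t0)
loo = np.array([km_surv(np.delete(t, i), np.delete(d, i), t0) for i in range(n)])
logit = lambda q: np.log(q / (1 - q))
se = np.sqrt((n - 1) / n * ((logit(loo) - logit(loo).mean()) ** 2).sum())
print(f"p({t0}) = {p:.3f}, jackknife SE of logit p = {se:.3f}")
```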

A Combined Method Compensating for Wave Nonresponse

  • Park, Jinwoo
    • Journal of the Korean Statistical Society / v.31 no.4 / pp.469-482 / 2002
  • This paper suggests a new method of compensating for wave nonresponse in panel surveys that combines weighting adjustment and imputation. By deleting the less frequent nonresponse patterns, the method gains simplicity. A new mean estimator under the combined method is provided, and a limited simulation study employing real data is conducted.
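
A minimal sketch of one way to combine weighting adjustment (for whole-wave attrition) with ratio imputation (for item nonresponse) when estimating a wave-2 mean; the nonresponse patterns, rates, and adjustment classes are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 300
y1 = rng.normal(50, 10, n)                    # wave-1 value, observed for all
y2 = y1 + rng.normal(2, 3, n)                 # wave-2 value
attrit = rng.random(n) < 0.05 + 0.3 * (y1 > np.median(y1))  # nonrandom attrition
item_mis = ~attrit & (rng.random(n) < 0.1)    # responded, but y2 item missing
present = ~attrit

# Weighting adjustment within classes formed on the wave-1 value.
cls = np.digitize(y1, np.quantile(y1, [1 / 3, 2 / 3]))
w = np.zeros(n)
for c in range(3):
    m = cls == c
    w[m & present] = m.sum() / (m & present).sum()

# Ratio imputation for the item-missing wave-2 values.
donors = present & ~item_mis
y2f = np.where(item_mis, y2[donors].mean() / y1[donors].mean() * y1, y2)

est = np.sum(w[present] * y2f[present]) / w[present].sum()
print(f"combined estimate of the wave-2 mean: {est:.2f}")
```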