• Title/Summary/Keyword: Statistical Procedures


Nonparametric Test Procedures for Change Point Problems in Scale Parameter

  • Cho, Wan-Hyun;Lee, Jae-Chang
    • Journal of the Korean Statistical Society
    • /
    • v.19 no.2
    • /
    • pp.128-138
    • /
    • 1990
  • In this paper we study the properties of nonparametric tests for testing the null hypothesis of no change against one-sided and two-sided alternatives in the scale parameter at an unknown point. We first propose two types of nonparametric tests based on linear rank statistics and rank-like statistics, respectively. For these statistics, we derive the asymptotic distributions under the null and contiguous alternatives. The main theoretical tools used for the derivation are the stochastic process representation of the test statistic and the Brownian bridge approximation. We evaluate the Pitman efficiencies of the tests for the contiguous alternatives, and also compute empirical power by Monte Carlo simulation.

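The rank-based changepoint approach the abstract describes can be sketched roughly as follows: compute scale-sensitive rank scores, form standardized partial sums (a CUSUM), and take the maximum over candidate changepoints, whose null behaviour is governed by the Brownian bridge. This is a minimal illustration under assumed choices (Mood-type scores, simple standardization, simulated Gaussian data), not the paper's exact construction.

```python
import numpy as np

def mood_scores(x):
    """Mood-type rank scores for scale: a(R_i) = (R_i - (n+1)/2)^2."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1
    return (ranks - (n + 1) / 2.0) ** 2

def max_cusum_scale_stat(x):
    """Maximum over candidate changepoints k of the standardized partial
    sum of scale scores; large values suggest a change in scale."""
    n = len(x)
    a = mood_scores(x)
    csum = np.cumsum(a - a.mean())
    k = np.arange(1, n)                      # candidate changepoints 1..n-1
    denom = a.std() * np.sqrt(k * (n - k) / n)
    return np.max(np.abs(csum[:-1]) / denom)

rng = np.random.default_rng(0)
t_null = max_cusum_scale_stat(rng.normal(0, 1, 100))
t_change = max_cusum_scale_stat(
    np.concatenate([rng.normal(0, 1, 50), rng.normal(0, 4, 50)]))
```

With a fourfold jump in standard deviation at the midpoint, `t_change` comes out much larger than `t_null`; critical values would come from the sup of a Brownian bridge or from simulation.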

Bayesian Changepoints Detection for the Power Law Process with Binary Segmentation Procedures

  • Kim Hyunsoo;Kim Seong W.;Jang Hakjin
    • Communications for Statistical Applications and Methods
    • /
    • v.12 no.2
    • /
    • pp.483-496
    • /
    • 2005
  • We consider the power law process which is assumed to have multiple changepoints. We propose a binary segmentation procedure for locating all existing changepoints. At each stage we select between the no-changepoint model and the single-changepoint model by the Bayes factor, and repeat this procedure until no more changepoints are found. Then we carry out a multiple test based on the Bayes factor, through the intrinsic priors of Berger and Pericchi (1996), to investigate the system behaviour of failure times. We demonstrate our procedure with a real dataset and some simulated datasets.
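The binary segmentation recursion itself is generic: test for a single changepoint, split at the best location if the changepoint model wins, and recurse on both halves. The sketch below is an assumed simplification — it compares models with a BIC-type penalty on a Gaussian mean-shift model rather than the paper's Bayes factors with intrinsic priors for the power law process.

```python
import numpy as np

def gauss_nll(x):
    """Gaussian negative log-likelihood with plugged-in mean (unit variance)."""
    return 0.5 * np.sum((x - x.mean()) ** 2)

def best_split(x, min_seg=5):
    """Best single changepoint, accepted only if a BIC-type penalty is
    beaten; a rough stand-in for the paper's Bayes-factor comparison."""
    n = len(x)
    if n < 2 * min_seg:
        return None
    cands = list(range(min_seg, n - min_seg + 1))
    scores = [gauss_nll(x[:k]) + gauss_nll(x[k:]) for k in cands]
    k = cands[int(np.argmin(scores))]
    if 2 * (gauss_nll(x) - min(scores)) > 3 * np.log(n):
        return k
    return None

def binary_segmentation(x, offset=0):
    """Split at the best changepoint, then recurse on both halves."""
    k = best_split(x)
    if k is None:
        return []
    return (binary_segmentation(x[:k], offset)
            + [offset + k]
            + binary_segmentation(x[k:], offset + k))

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 60),
                       rng.normal(3, 1, 60),
                       rng.normal(-2, 1, 60)])
cps = binary_segmentation(data)              # true changepoints at 60 and 120
```

Swapping `best_split` for a Bayes-factor comparison recovers the procedure's structure as described in the abstract.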

Identification of Regression Outliers Based on Clustering of LMS-residual Plots

  • Kim, Bu-Yong;Oh, Mi-Hyun
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.3
    • /
    • pp.485-494
    • /
    • 2004
  • An algorithm is proposed to identify multiple outliers in linear regression. It is based on the clustering of residuals from the least median of squares (LMS) estimation. A cut-height criterion for the hierarchical cluster tree is suggested, which yields the optimal clustering of the regression outliers. Comparisons of the effectiveness of the procedures are performed on classic and artificial data sets, and it is shown that the proposed algorithm is superior to the one based on least squares estimation. In particular, the proposed algorithm deals very well with the masking and swamping effects while the other does not.
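The idea of clustering LMS residuals can be sketched as follows. For one-dimensional residuals, a single-linkage tree cut at a fixed height is equivalent to splitting the sorted values at gaps larger than that height, so the sketch uses that shortcut; the resampling-based LMS approximation, the cut height, and the data are all illustrative assumptions rather than the paper's exact criterion.

```python
import numpy as np

def lms_fit(X, y, n_trials=500, rng=None):
    """Approximate least-median-of-squares fit via random p-point
    elemental subsets (exact LMS is combinatorial)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue
        med = np.median((y - X @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return best_beta

def flag_outliers_by_gap(residuals, cut=2.0):
    """For 1-D residuals, single-linkage clustering with a cut height is
    equivalent to splitting the sorted values at gaps larger than the
    cut: everything beyond the first such gap is declared an outlier."""
    z = np.abs(residuals) / (1.4826 * np.median(np.abs(residuals)))  # MAD-type scale
    order = np.argsort(z)
    gaps = np.diff(z[order])
    flags = np.zeros(len(z), dtype=bool)
    big = np.where(gaps > cut)[0]
    if big.size:
        flags[order[big[0] + 1:]] = True
    return flags

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[:5] += 15.0                                # plant 5 vertical outliers
flags = flag_outliers_by_gap(y - X @ lms_fit(X, y, rng=rng))
```

Because the LMS fit is not dragged toward the planted outliers, their residuals stand far from the clean cluster, which is exactly what defeats masking and swamping.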

Resampling-based Test of Hypothesis in L1-Regression

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.3
    • /
    • pp.643-655
    • /
    • 2004
  • The L1-estimator in the linear regression model is widely recognized to have superior robustness in the presence of vertical outliers. While L1-estimation procedures and algorithms have been developed quite well, less progress has been made with hypothesis testing in multiple L1-regression. This article suggests computer-intensive resampling approaches, the jackknife and bootstrap methods, to estimate the variance of the L1-estimator and the scale parameter that are required to compute the test statistics. Monte Carlo simulation studies are performed to measure the power of the tests in small samples. The simulation results indicate that the bootstrap estimation method is the most powerful when employed in the likelihood ratio test.
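A bootstrap route to the variance of an L1 (least absolute deviations) estimator can be sketched like this. The LAD fit via iteratively reweighted least squares, the pairs bootstrap, and the heavy-tailed test data are assumed choices for illustration; the paper's jackknife variant and its specific test statistics are not reproduced here.

```python
import numpy as np

def lad_fit(X, y, n_iter=50, eps=1e-4):
    """L1 (least absolute deviations) fit via iteratively reweighted
    least squares; a simple stand-in for specialized LAD algorithms."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        # weight 1/|r| clipped at eps; rows scaled by sqrt(weight)
        sw = 1.0 / np.sqrt(np.maximum(np.abs(y - X @ beta), eps))
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

def bootstrap_cov(X, y, n_boot=200, rng=None):
    """Pairs-bootstrap estimate of the covariance of the LAD estimator."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    betas = []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)            # resample (x, y) pairs
        betas.append(lad_fit(X[i], y[i]))
    return np.cov(np.array(betas), rowvar=False)

rng = np.random.default_rng(3)
n = 80
x = rng.uniform(0, 5, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 1.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors
beta_hat = lad_fit(X, y)
se = np.sqrt(np.diag(bootstrap_cov(X, y, rng=rng)))
t_slope = beta_hat[1] / se[1]                # Wald-type statistic for the slope
```

The bootstrap standard error slots directly into a Wald-type or likelihood-ratio-type statistic, which is where the abstract's power comparison takes place.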

An Evaluation of the Accuracy of Maximum Likelihood Procedure for Estimating HIV Infectivity

  • Um, Yonghwan;Haber, Michael-J
    • Communications for Statistical Applications and Methods
    • /
    • v.6 no.3
    • /
    • pp.957-966
    • /
    • 1999
  • We evaluate the accuracy and precision of maximum likelihood estimation procedures for the infectivity of HIV in partner studies. This is achieved by applying the procedure to hypothetical samples generated by computer. One hundred samples were generated with various combinations of parameters. The estimation procedure was found to be quite accurate. In addition, it was found that the power of the test for equality of infectivities for two types of contact depends on sample size and length of observation period, but not on the number of observations made on each subject. Tests based on a model for the infectivity had higher power than standard methods for comparing proportions.


A Comparison Study of the Test for Right Censored and Grouped Data

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.4
    • /
    • pp.313-320
    • /
    • 2015
  • In this research, we compare the efficiency of two test procedures proposed by Prentice and Gloeckler (1978) and Park and Hong (2009) for grouped data with possibly right-censored observations. Both test statistics were derived using the likelihood ratio principle, but under different semi-parametric models. We review the asymptotic normality of the two statistics and obtain empirical powers through a simulation study. The simulation study considers two types of models: the location translation model and the scale model. We discuss some interesting features related to grouped data and obtain the null distribution functions with a re-sampling method. Finally, we indicate topics for future research.

Test procedures for the mean and variance simultaneously under normality

  • Park, Hyo-Il
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.6
    • /
    • pp.563-574
    • /
    • 2016
  • In this study, we propose several simultaneous tests to detect differences between means and variances in the two-sample problem when the underlying distribution is normal. For this, we apply the likelihood ratio principle and propose a likelihood ratio test. We then consider a union-intersection test, after identifying the likelihood statistic as a product of two individual likelihood statistics, to test the individual sub-null hypotheses. Noting that the union-intersection test can be considered a simultaneous test with a combination function, we also propose simultaneous tests with combination functions that combine the individual tests for each sub-null hypothesis. We apply the permutation principle to obtain the null distributions. We then provide an example to illustrate the proposed procedures and compare their efficiency through a simulation study. We discuss some interesting features related to simultaneous testing as concluding remarks. Finally, we express the likelihood ratio statistic as a product of two individual likelihood ratio statistics.
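A permutation-based simultaneous test of the kind the abstract describes can be sketched as follows. The sub-test statistics (absolute mean difference, absolute log variance ratio) and the Bonferroni-adjusted minimum-p combination are illustrative assumptions; the paper considers several combination functions and a likelihood-ratio formulation not reproduced here.

```python
import numpy as np

def perm_simultaneous_test(x, y, n_perm=2000, rng=None):
    """Permutation test for equality of means and variances at once.
    The two sub-test p-values are combined with Tippett's minimum,
    Bonferroni-adjusted so the combined test keeps its level."""
    if rng is None:
        rng = np.random.default_rng(0)
    def stats(a, b):
        return (abs(a.mean() - b.mean()),
                abs(np.log(a.var(ddof=1) / b.var(ddof=1))))
    pooled, n = np.concatenate([x, y]), len(x)
    t_mean, t_var = stats(x, y)
    ge_mean = ge_var = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)       # relabel under the full null
        s_mean, s_var = stats(perm[:n], perm[n:])
        ge_mean += s_mean >= t_mean
        ge_var += s_var >= t_var
    p_mean = (ge_mean + 1) / (n_perm + 1)
    p_var = (ge_var + 1) / (n_perm + 1)
    return p_mean, p_var, min(1.0, 2.0 * min(p_mean, p_var))

rng = np.random.default_rng(5)
p_mean, p_var, p_comb = perm_simultaneous_test(
    rng.normal(0, 1, 40), rng.normal(1.5, 1, 40))
```

With a pure mean shift, the mean sub-test drives the combined rejection while the variance sub-test stays unremarkable, which is the behaviour a union-intersection construction is meant to capture.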

Pre-service Teachers' Conceptualization of Arithmetic Mean (산술 평균에 대한 예비교사들의 개념화 분석)

  • Joo, Hong-Yun;Kim, Kyung-Mi;Whang, Woo-Hyung
    • The Mathematical Education
    • /
    • v.49 no.2
    • /
    • pp.199-221
    • /
    • 2010
  • The purposes of the study were to investigate how secondary pre-service teachers conceptualize the arithmetic mean and how their conceptualization is formed when solving problems involving the arithmetic mean. As a result, pre-service teachers' conceptualization of the arithmetic mean was categorized into conceptualization by "mathematical knowledge (mathematical procedural knowledge, mathematical conceptual knowledge)", "analog knowledge (fair-share, center-of-balance)", and "statistical knowledge". Most pre-service teachers conceptualized the arithmetic mean using mathematical procedural knowledge, which involves the rules, algorithms, and procedures for calculating the mean. A few pre-service teachers used analog or statistical knowledge, respectively, to conceptualize the arithmetic mean. Finally, we identified the relationship between problem types and the conceptualization of the arithmetic mean.

Statistical analysis of metagenomics data

  • Calle, M. Luz
    • Genomics & Informatics
    • /
    • v.17 no.1
    • /
    • pp.6.1-6.9
    • /
    • 2019
  • Understanding the role of the microbiome in human health, and how it can be modulated, is becoming increasingly relevant for preventive medicine and for the medical management of chronic diseases. The development of high-throughput sequencing technologies has boosted microbiome research by enabling the study of microbial genomes and allowing more precise quantification of microbiome abundances and function. Microbiome data analysis is challenging because it involves high-dimensional structured multivariate sparse data and because of its compositional nature. In this review we outline some of the procedures that are most commonly used for microbiome analysis and that are implemented in R packages. We place particular emphasis on the compositional structure of microbiome data. We describe the principles of compositional data analysis and distinguish between standard methods and those that fit into compositional data analysis.
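The compositional viewpoint the review emphasizes is often operationalized with log-ratio transforms; a minimal sketch of the centered log-ratio (CLR) transform, with an assumed pseudocount to handle sparsity and made-up counts, is:

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centered log-ratio transform for compositional data; a pseudocount
    handles the zero counts common in taxa abundance tables."""
    x = np.asarray(counts, dtype=float) + pseudocount
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

sample = np.array([120, 30, 0, 850])   # one sample's taxa counts (made up)
z = clr(sample)                        # CLR coordinates sum to zero
```

CLR coordinates sum to zero within each sample, so they carry only relative information, which is the point of treating sequencing counts as compositions rather than absolute abundances.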

Least quantile squares method for the detection of outliers

  • Seo, Han Son;Yoon, Min
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.1
    • /
    • pp.81-88
    • /
    • 2021
  • k-least quantile of squares (k-LQS) estimates are a generalization of least median of squares (LMS) estimates. They have not been used as much as LMS because their breakdown points become small as k increases. However, if the number of outliers is assumed to be fixed, LQS estimates yield a good fit to the majority of the data, and residuals calculated from LQS estimates can be a reliable tool for detecting outliers. We propose to use LQS estimates for separating a clean set from the data in the context of the outlyingness of the cases. Three procedures are suggested for the identification of outliers using LQS estimates. Examples are provided to illustrate the methods. A Monte Carlo study shows that the proposed methods are effective.
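A k-LQS fit and the residual-based flagging it enables can be sketched as follows. The elemental-subset search, the choice k = n minus an assumed outlier budget, and the 2.5-MAD flagging rule are illustrative assumptions; the paper's three identification procedures are not reproduced.

```python
import numpy as np

def lqs_fit(X, y, k, n_trials=500, rng=None):
    """Approximate k-least-quantile-of-squares fit: over random p-point
    elemental fits, minimize the k-th smallest squared residual."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, p = X.shape
    best_beta, best_crit = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])
        except np.linalg.LinAlgError:
            continue
        crit = np.sort((y - X @ beta) ** 2)[k - 1]   # k-th smallest
        if crit < best_crit:
            best_crit, best_beta = crit, beta
    return best_beta

rng = np.random.default_rng(4)
n, n_out = 60, 8                       # assume at most 8 outliers
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])
y = 0.5 + 1.0 * x + rng.normal(0, 0.3, n)
y[:n_out] -= 10.0                      # plant vertical outliers
r = y - X @ lqs_fit(X, y, k=n - n_out, rng=rng)
outlier = np.abs(r) > 2.5 * 1.4826 * np.median(np.abs(r))
```

Setting k to the assumed clean-set size is what lets the fit ignore up to n - k outliers, trading the high breakdown point of LMS for a tighter fit to the majority.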