• Title/Summary/Keyword: Test Data

Search Results: 36,174

Bayesian test for the differences of survival functions in multiple groups

  • Kim, Gwangsu
    • Communications for Statistical Applications and Methods / v.24 no.2 / pp.115-127 / 2017
  • This paper proposes a Bayesian test for the equivalence of survival functions in multiple groups. The proposed test uses Cox's regression model with time-varying coefficients, where the time-varying coefficients are represented by B-spline expansions and only the partial likelihood is used, which simplifies computation. Simulations comparing the proposed test with typical tests such as the log-rank and Fleming-Harrington tests show that the proposed test is consistent as the data size increases; in particular, its power remains high even in the presence of crossing hazards. Because the test is based on a Bayesian approach, it is more flexible for multiple testing and can therefore perform various tests simultaneously. A real data analysis of the Larynx Cancer Data was conducted to assess applicability.
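  • The log-rank test mentioned as a baseline above can be written directly from risk-set counts. A minimal NumPy sketch of the classical two-sample version (not the paper's Bayesian procedure; the layout of times and event indicators is an illustrative assumption):

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time_a, event_a, time_b, event_b):
    """Classical two-sample log-rank test; returns the chi-squared statistic
    (1 degree of freedom) and its p-value.  event_* is 1 for an observed
    event and 0 for a right-censored observation."""
    time_a, event_a = np.asarray(time_a, float), np.asarray(event_a, int)
    time_b, event_b = np.asarray(time_b, float), np.asarray(event_b, int)
    # Distinct event times pooled over both groups.
    event_times = np.unique(np.concatenate([time_a[event_a == 1],
                                            time_b[event_b == 1]]))
    o_minus_e, variance = 0.0, 0.0
    for t in event_times:
        n_a, n_b = np.sum(time_a >= t), np.sum(time_b >= t)   # at risk
        d_a = np.sum((time_a == t) & (event_a == 1))          # events in A
        d_b = np.sum((time_b == t) & (event_b == 1))          # events in B
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        o_minus_e += d_a - d * n_a / n                        # observed - expected
        variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    stat = o_minus_e ** 2 / variance
    return stat, chi2.sf(stat, df=1)
```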

Efficient Test Data Compression and Low Power Scan Testing in SoCs

  • Jung, Jun-Mo;Chong, Jong-Wha
    • ETRI Journal / v.25 no.5 / pp.321-327 / 2003
  • Testing time and power consumption during SoC testing are becoming increasingly important as the volume of test data for intellectual property cores grows. This paper presents a new algorithm that reduces scan-in power and test data volume using a modified scan latch reordering algorithm. The scan latch reordering technique minimizes the column Hamming distance in the scan vectors, and during reordering the don't-care inputs in the scan vectors are assigned for low power and high compression. Experimental results for the ISCAS '89 benchmark circuits show that reduced test data volume and low-power scan testing are achieved in all cases.
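  • The two ingredients named in the abstract, reordering scan cells to reduce the column Hamming distance and filling don't-care bits, can be sketched with a simple greedy heuristic. The matrix layout, the greedy nearest-column rule, and the left-neighbour fill below are illustrative assumptions rather than the paper's exact algorithm:

```python
import numpy as np

X = -1  # don't-care bit in a test cube

def column_distance(a, b):
    """Hamming distance between two scan-cell columns, ignoring positions
    where either column holds a don't-care."""
    specified = (a != X) & (b != X)
    return int(np.sum(a[specified] != b[specified]))

def greedy_reorder(cubes):
    """Order scan cells so each next column is the remaining one closest to
    the previously placed column (rows = test patterns, columns = cells)."""
    remaining = set(range(cubes.shape[1]))
    order = [0]                     # arbitrary starting cell
    remaining.remove(0)
    while remaining:
        last = cubes[:, order[-1]]
        nxt = min(remaining, key=lambda c: column_distance(last, cubes[:, c]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

def fill_dont_cares(cubes, order):
    """Assign each don't-care the value of its left neighbour, lengthening
    runs of identical bits (fewer shift transitions, easier compression)."""
    filled = cubes[:, order].copy()
    filled[filled[:, 0] == X, 0] = 0          # leftmost column defaults to 0
    for j in range(1, filled.shape[1]):
        mask = filled[:, j] == X
        filled[mask, j] = filled[mask, j - 1]
    return filled
```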

Low Power Scan Chain Reordering Method with Limited Routing Congestion for Code-based Test Data Compression

  • Kim, Dooyoung;Ansari, M. Adil;Jung, Jihun;Park, Sungju
    • JSTS:Journal of Semiconductor Technology and Science / v.16 no.5 / pp.582-594 / 2016
  • Various test data compression techniques have been developed to reduce the test costs of systems-on-a-chip. This paper proposes a scan chain reordering algorithm for code-based test data compression techniques. Scan cells within an acceptable relocation distance are ranked to reduce the number of conflicts across all test patterns and are rearranged by a positioning algorithm to minimize routing overhead. The proposed method is demonstrated on the ISCAS '89 benchmark circuits with their physical layouts using a 180 nm CMOS process library. Significant improvements in compression ratio and test power consumption are observed with only minor routing overhead.

Low Power Scan Test Methodology Using Hybrid Adaptive Compression Algorithm (하이브리드 적응적 부호화 알고리즘을 이용한 저전력 스캔 테스트 방식)

  • Kim Yun-Hong;Jung Jun-Mo
    • The Journal of the Korea Contents Association / v.5 no.4 / pp.188-196 / 2005
  • This paper presents a new test data compression and low-power scan test method that reduces both test time and power consumption. The proposed method reduces scan-in power and test data volume using a modified scan cell reordering algorithm and a hybrid adaptive encoding method. The hybrid compression method adaptively applies Golomb codes and run-length codes according to the length of the runs in the test data, which reduces the test data volume more efficiently than the previous method. A scan cell reordering technique minimizes the column Hamming distance in the scan vectors, which reduces both scan-in power consumption and test data volume. Experimental results for the ISCAS '89 benchmark circuits show that reduced test data and low-power scan testing are achieved in all cases. The proposed method showed about 17%-26% better compression ratio, 8%-22% lower average power consumption, and 13%-60% lower peak power consumption than the previous method.
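  • Of the two codes the hybrid scheme switches between, the Golomb coding of run lengths is the less familiar one. A minimal sketch of Golomb-coding runs of 0s (the group size m, the power-of-two restriction, and the run convention are illustrative assumptions; the adaptive switching to run-length codes is omitted):

```python
def golomb_code(run_length, m=4):
    """Golomb codeword for one run length with group size m (kept a power of
    two so the remainder field has a fixed width of log2(m) bits)."""
    q, r = divmod(run_length, m)
    prefix = '1' * q + '0'                              # unary group index
    tail = format(r, '0{}b'.format(m.bit_length() - 1)) # fixed-width remainder
    return prefix + tail

def encode_runs_of_zeros(bits, m=4):
    """Treat the test data as runs of 0s, each terminated by a 1, and
    Golomb-code every run length."""
    codewords, run = [], 0
    for b in bits:
        if b == '0':
            run += 1
        else:                                           # a 1 ends the run
            codewords.append(golomb_code(run, m))
            run = 0
    if run:                                             # trailing run of 0s
        codewords.append(golomb_code(run, m))
    return codewords
```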

Character Recognition Algorithm using Accumulation Mask

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology / v.6 no.2 / pp.123-128 / 2018
  • The learning data consist of 100 characters in 10 different fonts, and the test data consist of 10 characters in a new font that is not used in the learning data. To capture the variety of the learning data across fonts, 10 learning masks are constructed by accumulating the pixel values of the same character over the 10 fonts, which smooths out minor differences between fonts. After the maximum value of each learning mask is found, the test data are scaled up by multiplying them by these maximum values. The algorithm then computes, for each mask, the sum of the differences between corresponding pixel values of the scaled test data and the learning mask, and the mask with the smallest sum is selected as the recognition result. The proposed algorithm recognizes various fonts, the learning data can easily be extended by adding a new font, the recognition process is easy to understand, and the algorithm produces satisfactory character recognition results.
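  • The recognition step described above reduces to building one accumulation mask per character and scoring a test image against each mask. A minimal NumPy sketch (using absolute differences for the "sum of differences" is an assumption):

```python
import numpy as np

def build_masks(training_images):
    """training_images maps a character label to a list of same-sized
    grayscale images of that character in different fonts; the accumulation
    mask is the pixel-wise sum over the fonts."""
    return {label: np.sum(np.stack(imgs, axis=0), axis=0).astype(float)
            for label, imgs in training_images.items()}

def recognize(test_image, masks):
    """Scale the test image by each mask's maximum value and pick the mask
    with the smallest sum of absolute pixel differences."""
    best_label, best_score = None, float('inf')
    for label, mask in masks.items():
        expanded = test_image.astype(float) * mask.max()
        score = float(np.abs(expanded - mask).sum())
        if score < best_score:
            best_label, best_score = label, score
    return best_label
```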

Single Sample Grouping Methodology using Combining Data (Combining data를 적용한 단일 표본화 방법론 연구)

  • Back, Seungjun;Son, Youngkap;Lee, Seungyoung;Ahn, Mahnki;Kim, Cheongsig
    • Journal of the Korea Institute of Military Science and Technology / v.17 no.5 / pp.611-619 / 2014
  • Combining similar data yields larger data sets by testing the homogeneity of several samples drawn from different production processes or from different lots. Tests for homogeneity have been applied to either variable or attribute data, and for variable data physical homogeneity has been tested without considering the specification of the data set. This paper proposes a quality-level-based test for homogeneity that uses both the variable data and the specification. In the proposed method, the quality-based homogeneity test used for combining data is implemented as a test on the coefficient of variation. The method was verified by applying it to a data set from the open literature, and the possibility of combining performance data for various types of thermal battery was discussed in order to estimate operational reliability.
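  • A coefficient-of-variation comparison of the kind mentioned above can be approximated with a simple permutation scheme. The mean-scaling and the permutation rule below are illustrative assumptions, not the paper's quality-level procedure:

```python
import numpy as np

def coefficient_of_variation(x):
    """Sample coefficient of variation: standard deviation over the mean."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

def cv_equality_pvalue(sample_a, sample_b, n_perm=10_000, seed=None):
    """Permutation p-value for equal coefficients of variation: scale each
    sample to unit mean so only relative dispersion matters, then reshuffle
    the pooled values and count gaps at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    observed = abs(coefficient_of_variation(sample_a)
                   - coefficient_of_variation(sample_b))
    pooled = np.concatenate([np.asarray(sample_a, float) / np.mean(sample_a),
                             np.asarray(sample_b, float) / np.mean(sample_b)])
    n_a, count = len(sample_a), 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        gap = abs(coefficient_of_variation(perm[:n_a])
                  - coefficient_of_variation(perm[n_a:]))
        count += gap >= observed
    return count / n_perm
```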

Hypothesis Testing: Means and Proportions (평균과 비율 비교)

  • Pak, Son-Il;Lee, Young-Won
    • Journal of Veterinary Clinics / v.26 no.5 / pp.401-407 / 2009
  • In the previous article in this series we introduced the basic concepts of statistical analysis. The present review introduces hypothesis testing for continuous and categorical data for readers of the veterinary science literature. For continuous data, we explain the t-test for comparing a single mean with a hypothesized value and for comparing two means from independent or paired samples. For categorical variables, the $\chi^2$ test for association and homogeneity, Fisher's exact test and Yates' continuity correction for small samples, and the test for trend, in which at least one of the variables is ordinal, are described together with worked examples. The McNemar test for correlated proportions is also discussed. The topics covered may provide a basic understanding of the different approaches to analyzing clinical data.
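  • The tests listed above all have standard SciPy/statsmodels counterparts; a short sketch with made-up counts and measurements (the numbers are purely illustrative, not the article's worked examples):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Illustrative measurements from two independent groups.
group_a = np.array([5.1, 4.8, 5.5, 5.0, 4.9, 5.3])
group_b = np.array([4.4, 4.7, 4.5, 4.9, 4.3, 4.6])

print(stats.ttest_1samp(group_a, popmean=5.0))   # one mean vs. hypothesized value
print(stats.ttest_ind(group_a, group_b))         # two independent means

# 2x2 contingency table of illustrative counts (rows: exposure, columns: outcome).
table = np.array([[20, 15],
                  [10, 30]])
print(stats.chi2_contingency(table))             # chi-squared test of association
print(stats.fisher_exact(table))                 # exact test for small samples

# Paired (correlated) proportions: agreement/disagreement counts before vs. after.
paired = np.array([[30, 5],
                   [12, 20]])
print(mcnemar(paired, exact=True))               # McNemar test
```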

Extension of the Mantel-Haenszel test to bivariate interval censored data

  • Lee, Dong-Hyun;Kim, Yang-Jin
    • Communications for Statistical Applications and Methods / v.29 no.4 / pp.403-411 / 2022
  • This article presents an independence test between pairs of interval-censored failure times. The Mantel-Haenszel test is commonly applied to test the independence between two categorical variables accompanied by a stratification variable, and Hsu and Prentice (1996) applied it to the sequence of 2 × 2 tables formed at grid points composed of the failure times. Because the failure times here are interval censored and therefore unknown, suitable grid points must be determined and the failure and at-risk statuses estimated at those grid points. We also consider a weighted test statistic to obtain a more powerful test. Simulation studies are performed to evaluate the power of the test statistics in finite samples. The method is applied to analyze two real data sets: mastitis data from milk cows and an age-related eye disease study.
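  • The classical Mantel-Haenszel test that the article extends combines a series of stratified 2 × 2 tables; a minimal NumPy version (without continuity correction, and with an assumed [[a, b], [c, d]] table layout) is:

```python
import numpy as np
from scipy.stats import chi2

def mantel_haenszel(tables):
    """Mantel-Haenszel chi-squared statistic for a list of 2x2 tables, each
    given as [[a, b], [c, d]] with rows = group and columns = event / no event."""
    num, var = 0.0, 0.0
    for t in tables:
        a, b, c, d = np.asarray(t, float).ravel()
        n = a + b + c + d
        row1, col1 = a + b, a + c            # margins containing cell a
        num += a - row1 * col1 / n           # observed minus expected for cell a
        var += row1 * (c + d) * col1 * (b + d) / (n * n * (n - 1))
    stat = num ** 2 / var
    return stat, chi2.sf(stat, df=1)
```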

Bootstrap Median Tests for Right Censored Data

  • Park, Hyo-Il;Na, Jong-Hwa
    • Journal of the Korean Statistical Society / v.29 no.4 / pp.423-433 / 2000
  • In this paper, we consider applying the bootstrap method to median test procedures for right-censored data. To do this, we show that the median test statistics can be represented as the difference of two sample medians. We then review resampling methods for censored data and propose test procedures under the location-translation assumption and for the Behrens-Fisher problem. We also compare our procedures with another resampling method, the so-called permutation test, through an example. Finally, the validity of the bootstrap median test procedure is shown in the appendix.
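  • Stripped of the censoring machinery, the core idea is a bootstrap comparison of two sample medians. A plain uncensored sketch (resampling the (time, status) pairs would be needed to handle right censoring as in the paper):

```python
import numpy as np

def bootstrap_median_test(x, y, n_boot=10_000, seed=None):
    """Bootstrap two-sided p-value for the difference of two sample medians,
    resampling each group after centering both on a common median so the
    null hypothesis of equal medians holds."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = np.median(x) - np.median(y)
    pooled_median = np.median(np.concatenate([x, y]))
    x0 = x - np.median(x) + pooled_median
    y0 = y - np.median(y) + pooled_median
    count = 0
    for _ in range(n_boot):
        bx = rng.choice(x0, size=len(x0), replace=True)
        by = rng.choice(y0, size=len(y0), replace=True)
        count += abs(np.median(bx) - np.median(by)) >= abs(observed)
    return observed, count / n_boot
```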

The Rao-Robson Chi-Squared Test for Multivariate Structure

  • Park, Cheol-Yong
    • Journal of the Korean Data and Information Science Society / v.14 no.4 / pp.1013-1021 / 2003
  • Huffer and Park (2002) proposed a chi-squared test for multivariate structure that detects departures of the data from mutual independence or multivariate normality. We compute the Rao-Robson chi-squared version of the test, which is easy to apply in practice because it has a limiting chi-squared distribution, and we provide a self-contained argument for this limiting distribution. We study the finite-sample accuracy of the limiting distribution and finally compare the power of our test with that of other popular normality tests in an application to real data.
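  • A plain Pearson chi-squared check of multivariate normality, binning squared Mahalanobis distances into equiprobable chi-squared cells, conveys the flavour of such a test; the Rao-Robson version adds a correction term for the estimated mean and covariance, which this sketch (with an assumed cell count) omits:

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_chi2_gof(data, n_cells=8):
    """Pearson chi-squared goodness-of-fit check of multivariate normality:
    squared Mahalanobis distances of multivariate normal data are roughly
    chi2(p), so bin them into equiprobable chi2(p) cells and compare observed
    counts with the expected count n / n_cells.  The degrees of freedom here
    ignore the parameter-estimation correction that Rao-Robson supplies."""
    data = np.asarray(data, float)
    n, p = data.shape
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    edges = chi2.ppf(np.linspace(0.0, 1.0, n_cells + 1), df=p)  # equiprobable cells
    observed, _ = np.histogram(d2, bins=edges)
    expected = n / n_cells
    stat = np.sum((observed - expected) ** 2 / expected)
    return stat, chi2.sf(stat, df=n_cells - 1)
```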
