• Title/Summary/Keyword: Data normality


Studies on the Computer Programming of Statistical Methods (II) (품질관리기법(品質管理技法)의 전산화(電算化)에 관(關)한 연구(硏究)(II))

  • Jeong, Su-Il
    • Journal of Korean Society for Quality Management
    • /
    • v.14 no.1
    • /
    • pp.19-25
    • /
    • 1986
  • This paper studies the computer programming of statistical methods. A few computer programs are developed for (i) computing the basic statistics and the coefficients of process capability for raw and grouped data, (ii) drawing the frequency table and histogram, and (iii) goodness-of-fit testing for normality, with analyses for stratification where necessary. Special emphasis is laid on significant digits and rounding-off in the output. A running result for a hypothetical example appears in the Appendix.

  • PDF
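The goodness-of-fit testing for normality described above can be sketched in modern terms. The following is a minimal illustration, not the paper's original program: a moment-based Jarque-Bera-style check, which rejects normality when sample skewness and kurtosis jointly deviate from their normal-theory values.

```python
import math
import random

def jarque_bera(data):
    """Moment-based normality statistic JB = n/6 * (S^2 + (K - 3)^2 / 4).
    Under normality JB is approximately chi-square with 2 df, so
    JB > 5.99 rejects normality at the 5% level."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

random.seed(0)
normal_sample = [random.gauss(0, 1) for _ in range(500)]
skewed_sample = [random.expovariate(1.0) for _ in range(500)]
print(jarque_bera(normal_sample))  # typically below the 5% cutoff of 5.99
print(jarque_bera(skewed_sample))  # far above the cutoff
```

The stratification analyses mentioned in the abstract would simply apply the same test separately within each stratum.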

A generalized Hollander-Proschan test for NBUE alternative based on U-statistics approach

  • Hassan, M.KH.
    • International Journal of Reliability and Applications
    • /
    • v.16 no.2
    • /
    • pp.113-122
    • /
    • 2015
  • In this paper, we introduce a U-statistics approach to a generalized Hollander-Proschan test for the new better than used in expectation (NBUE) alternative. We prove that the proposed test is equivalent to the test introduced by Anis and Mitra (2011) and includes the test introduced by Hollander and Proschan (1975). The asymptotic properties are also studied, and the powers of the test are estimated. The Pitman asymptotic efficiencies of the proposed test are also calculated. Finally, the test is applied to some real data.

New Kernel-Based Normality Recovery Method and Applications (새로운 커널 기반 정상 상태 복구 기법과 응용)

  • Gang Dae-Seong;Park Ju-Yeong
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.306-309
    • /
    • 2006
  • SVDD (support vector data description) is a one-class support vector learning method that uses a sphere defined in feature space to separate normal data from abnormal objects. In this paper, we discuss a method that uses SVDD to restore noisy abnormal data to denoised normal data. We also show how the method applies in practice by reconstructing high-resolution images from low-resolution ones.

  • PDF

Asymptotic Relative Efficiency of t-test Following Transformations

  • Yeo, In-Kwon
    • Journal of the Korean Statistical Society
    • /
    • v.26 no.4
    • /
    • pp.467-476
    • /
    • 1997
  • The two-sample t-test is not expected to be optimal when the two samples are not drawn from normal populations. Following Box and Cox (1964), a transformation is estimated to enhance the normality of the transformed data. We investigate the asymptotic relative efficiency of the ordinary t-test versus the t-test applied to data transformed by the transformation introduced by Yeo and Johnson (1997), under Pitman local alternatives. Theoretical and simulation studies show that the two-sample t-test using transformed data gives higher power than the ordinary t-test for location-shift models.

  • PDF
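The idea above can be illustrated directly: apply the Yeo-Johnson transformation to both samples, then run an ordinary two-sample t-test on the transformed data. In this sketch the transformation parameter lambda is fixed at 0 for simplicity; in the paper it is estimated from the data.

```python
import math
import random

def yeo_johnson(x, lam):
    """Yeo-Johnson (1997) power transformation, defined for all real x."""
    if x >= 0:
        return math.log1p(x) if lam == 0 else ((x + 1) ** lam - 1) / lam
    return -math.log1p(-x) if lam == 2 else -(((-x + 1) ** (2 - lam) - 1) / (2 - lam))

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa2 = sum((x - ma) ** 2 for x in a) / (na - 1)
    sb2 = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Skewed samples with a location shift on the log scale; lam = 0 (log-type)
# is fixed here for illustration rather than estimated.
random.seed(1)
a = [random.lognormvariate(0.0, 1.0) for _ in range(50)]
b = [random.lognormvariate(0.5, 1.0) for _ in range(50)]
t_raw = two_sample_t(a, b)
t_tr = two_sample_t([yeo_johnson(x, 0) for x in a], [yeo_johnson(x, 0) for x in b])
print(t_raw, t_tr)
```

For heavily skewed data like this, the transformed-data t statistic is typically larger in magnitude, reflecting the power gain the paper quantifies through asymptotic relative efficiency.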

A Study on Distribution Based on the Normalized Sample Lorenz Curve

  • Kang, Suk-Bok;Cho, Young-Suk
    • Communications for Statistical Applications and Methods
    • /
    • v.8 no.1
    • /
    • pp.185-192
    • /
    • 2001
  • Using the Lorenz curve, which has proved to be a powerful tool for measuring income inequality within a population of income receivers, we propose the normalized sample Lorenz curve for the goodness-of-fit test, a very important test in statistical analysis. For two Hodgkin's disease data sets, we compare the Q-Q plot and the proposed normalized sample Lorenz curve.

  • PDF
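The basic object behind the proposal can be sketched as follows. This computes the ordinary sample Lorenz curve (the cumulative share of the i smallest observations against i/n); the paper's normalized version applies a further scaling on top of this, which is not reproduced here.

```python
def lorenz_curve(data):
    """Sample Lorenz curve: points (i/n, cumulative share of the i smallest
    values), including the origin (0, 0)."""
    xs = sorted(data)
    n = len(xs)
    total = sum(xs)
    points = [(0.0, 0.0)]
    running = 0.0
    for i, x in enumerate(xs, start=1):
        running += x
        points.append((i / n, running / total))
    return points

# Perfect equality gives L(p) = p; inequality bows the curve below the diagonal.
equal = lorenz_curve([1, 1, 1, 1])
unequal = lorenz_curve([1, 1, 1, 7])
print(equal[2])    # (0.5, 0.5)
print(unequal[2])  # (0.5, 0.2)
```

As a goodness-of-fit device, the sample curve is compared against the theoretical Lorenz curve of the hypothesized distribution, much as a Q-Q plot compares sample and theoretical quantiles.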

BQUE, AOV and MINQUE procedure in Estimating Variance Components

  • Huh, Moon-Yul
    • Journal of the Korean Statistical Society
    • /
    • v.9 no.1
    • /
    • pp.97-108
    • /
    • 1980
  • Variance components models appear often in designed experiments, including time series data analysis. This paper investigates the properties of various procedures for estimating variance components in the two-way random model without interaction under normality. In this age of computer-oriented computation, MINQUE is found to be quite practical because of its robustness with respect to design configurations and parameters. The adjusted AOV-type estimation procedure is also found to yield superior results over the unadjusted one.

  • PDF
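For the balanced case, the (unadjusted) AOV procedure compared in the paper amounts to equating observed mean squares to their expectations. The sketch below shows this for the two-way random model without interaction, y[i][j] = mu + a_i + b_j + e_ij; the adjusted AOV and MINQUE variants studied in the paper are not reproduced.

```python
import random

def aov_variance_components(y):
    """ANOVA (AOV) estimators for the balanced two-way random model
    without interaction, obtained by equating mean squares to their
    expectations: E[MS_A] = s2 + b*s2_a, E[MS_B] = s2 + a*s2_b, E[MSE] = s2."""
    a, b = len(y), len(y[0])
    grand = sum(sum(row) for row in y) / (a * b)
    row_means = [sum(row) / b for row in y]
    col_means = [sum(y[i][j] for i in range(a)) / a for j in range(b)]
    ms_a = b * sum((m - grand) ** 2 for m in row_means) / (a - 1)
    ms_b = a * sum((m - grand) ** 2 for m in col_means) / (b - 1)
    sse = sum((y[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(a) for j in range(b))
    mse = sse / ((a - 1) * (b - 1))
    return {"sigma2": mse,
            "sigma_a2": (ms_a - mse) / b,
            "sigma_b2": (ms_b - mse) / a}

# Simulated data with known components: sigma_a^2 = 4, sigma_b^2 = 1,
# sigma^2 = 0.25 (values chosen purely for illustration).
random.seed(2)
rows = [random.gauss(0, 2) for _ in range(30)]
cols = [random.gauss(0, 1) for _ in range(20)]
y = [[5 + rows[i] + cols[j] + random.gauss(0, 0.5) for j in range(20)]
     for i in range(30)]
print(aov_variance_components(y))
```

A known drawback of the unadjusted AOV estimators, which motivates the paper's comparison, is that the component estimates can come out negative when a mean square falls below the error mean square.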

ROENTGENO-CEPHALOMETRIC ANALYSIS ON THE TWIN (쌍생아의 X-선 두개계측학적 연구)

  • Choi Hi Sup
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.2 no.1
    • /
    • pp.29-35
    • /
    • 1972
  • The purpose of this investigation is to study variants between twins by cephalometric roentgenographic techniques. The author applied Down's, Bjork's, and Sakamoto's techniques and measured various angulations and lengths of cephalometric points. The results are as follows: 1. No significantly different data were found between the twins. 2. There were no differences between the twins and normality.

  • PDF

Testing NBUCA Class of Life Distribution Using U-Test

  • Al-Nachawati, H.
    • International Journal of Reliability and Applications
    • /
    • v.8 no.2
    • /
    • pp.125-135
    • /
    • 2007
  • In this paper, testing exponentiality against the new better than used in convex average (NBUCA) class, or its dual (NWUCA), is investigated through the U-test. The percentiles of these tests are tabulated for sample sizes n = 5(1)40. The power estimates of the test are simulated for some distributions commonly used in reliability. Pitman's asymptotic efficiency of the test is calculated and compared. Data on 40 patients suffering from blood cancer (leukemia) are considered as a practical application of the proposed test in the medical sciences.

  • PDF

EVAPORATION DATA STOCHASTIC GENERATION FOR KING FAHAD DAM LAKE IN BISHAH, SAUDI ARABIA

  • Al-Shaikh, Abdulmohsen A.
    • Water Engineering Research
    • /
    • v.2 no.4
    • /
    • pp.209-218
    • /
    • 2001
  • Generation of evaporation data generally assists in the planning, operation, and management of reservoirs and other water works. Annual and monthly evaporation series were generated for King Fahad Dam Lake in Bishah, Saudi Arabia. Data were gathered for a period of 22 years. Tests of homogeneity and normality were conducted, and the results showed that the data were homogeneous and normally distributed. For generating the annual series, a first-order autoregressive model AR(1) was used, and for the monthly evaporation series the method of fragments was used. Fifty replicates of the annual series, and fifty replicates for each month's series, each 22 values long, were generated. Performance of the models was evaluated by comparing the statistical parameters of the generated series with those of the historical data. The annual and monthly models were found to be satisfactory in preserving the statistical parameters of the historical series. About 89% of the tested values of the considered parameters were within the assigned confidence limits.

  • PDF
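The annual-series generation step can be sketched as below. The parameter values (mean, standard deviation, lag-one correlation) are illustrative placeholders, not the study's fitted values; the method-of-fragments disaggregation for monthly series is not reproduced.

```python
import math
import random

def generate_ar1(mean, std, phi, n, seed=None):
    """Generate an AR(1) series x_t = mean + phi*(x_{t-1} - mean) + e_t,
    with innovation standard deviation std*sqrt(1 - phi^2) so that the
    series has marginal standard deviation std."""
    rng = random.Random(seed)
    sigma_e = std * math.sqrt(1 - phi ** 2)
    x = mean + rng.gauss(0, std)   # start from the stationary distribution
    out = []
    for _ in range(n):
        x = mean + phi * (x - mean) + rng.gauss(0, sigma_e)
        out.append(x)
    return out

# Fifty replicates of 22 annual values, matching the study's setup.
replicates = [generate_ar1(2500.0, 300.0, 0.4, 22, seed=k) for k in range(50)]
all_values = [v for rep in replicates for v in rep]
print(len(replicates), len(replicates[0]))
print(sum(all_values) / len(all_values))  # close to the specified mean
```

Model adequacy would then be checked, as in the paper, by comparing the mean, standard deviation, and lag-one correlation of the generated series against the historical values.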

Kernel Estimation of Hazard Ratio Based on Censored Data

  • Choi, Myong-Hui;Lee, In-Suk;Song, Jae-Kee
    • Journal of the Korean Data and Information Science Society
    • /
    • v.12 no.2
    • /
    • pp.125-143
    • /
    • 2001
  • In this paper, we propose a kernel estimator of the hazard ratio with censored survival data. The uniform consistency and asymptotic normality of the proposed estimator are proved using a counting process approach. To assess the performance of the proposed estimator, we compare the kernel estimator with the Cox estimator and the generalized rank estimators of the hazard ratio in terms of MSE by Monte Carlo simulation. Two examples illustrate our results.

  • PDF
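The kernel-smoothing ingredient of such estimators can be sketched for a single sample. This is a kernel-smoothed Nelson-Aalen hazard rate estimate, not the paper's hazard-ratio estimator; the ratio estimator would apply the same smoothing to two samples and divide.

```python
import random

def kernel_hazard(times, events, t, bandwidth):
    """Kernel-smoothed hazard estimate at time t from right-censored data:
    the Epanechnikov kernel applied to Nelson-Aalen increments d_i / Y(t_i),
    where Y(t_i) is the number still at risk at t_i."""
    def epanechnikov(u):
        return 0.75 * (1 - u * u) if abs(u) < 1 else 0.0
    total = 0.0
    for ti, di in zip(times, events):
        if not di:                 # censored observations contribute
            continue               # only through the risk sets
        at_risk = sum(1 for s in times if s >= ti)
        total += epanechnikov((t - ti) / bandwidth) / at_risk
    return total / bandwidth

# Exponential(rate 1) lifetimes with random censoring: the true hazard is 1.
random.seed(3)
lifetimes = [random.expovariate(1.0) for _ in range(400)]
censors = [random.expovariate(0.3) for _ in range(400)]
times = [min(x, c) for x, c in zip(lifetimes, censors)]
events = [1 if x <= c else 0 for x, c in zip(lifetimes, censors)]
print(kernel_hazard(times, events, 1.0, 0.5))  # roughly 1
```

The counting process formulation in the paper is what justifies the consistency and asymptotic normality of estimators built from exactly these risk-set increments.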