Title/Summary/Keyword: statistical check

Search Results: 374

A Study on Methods of Quality Check for Digital Basemaps using Statistical Methods for the Quality Control (통계적 품질관리기법을 도입한 수치지도의 검수방법에 관한 연구)

  • 김병국;서현덕
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.17 no.1 / pp.79-86 / 1999
  • In this study, we investigated methods of quality check for digital basemaps and proposed more effective ones based on statistical quality-control techniques. We proposed 2-stage complete sampling and 2-stage cluster sampling methods to improve on the present statistical method of quality check (1-stage complete sampling). Using simulated data for all delivered digital basemaps, we estimated the error rate and the number of omitted objects together with their variances, and derived confidence intervals for both quantities.
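
As an illustration of the sampling idea in this abstract, here is a minimal sketch that estimates an overall error rate and a normal-approximation confidence interval from per-sheet inspection counts. It is a simplified one-stage stand-in for the paper's 2-stage designs, and all names and numbers are hypothetical.

```python
import math
import random

def cluster_error_rate_ci(clusters, z=1.96):
    """Estimate the overall error rate and a normal-approximation CI from
    cluster samples, where each cluster is (objects_checked, errors_found).
    A simplified stand-in for the paper's 2-stage designs."""
    n = len(clusters)
    p_hat = sum(e for _, e in clusters) / sum(c for c, _ in clusters)
    m_bar = sum(c for c, _ in clusters) / n
    # Between-cluster variance of the ratio estimator (no fpc, for brevity).
    s2 = sum((e - p_hat * c) ** 2 for c, e in clusters) / (n - 1)
    var = s2 / (n * m_bar ** 2)
    half = z * math.sqrt(var)
    return p_hat, (p_hat - half, p_hat + half)

# Simulated inspection of 12 delivered map sheets: (checked, errors).
random.seed(1)
sheets = [(m := random.randint(150, 250),
           sum(random.random() < 0.04 for _ in range(m)))
          for _ in range(12)]
rate, (lo, hi) = cluster_error_rate_ci(sheets)
print(f"estimated error rate {rate:.3%}, 95% CI ({lo:.3%}, {hi:.3%})")
```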

A Study on the Validity of the Statistical Collection and Analysis in Gwangju and Chonnam (통계자료의 수집 및 분석의 타당성에 관한 연구- 광주,전남지역을 중심으로 -)

  • 이화영
    • The Korean Journal of Applied Statistics / v.6 no.2 / pp.443-452 / 1993
  • A checklist is proposed of the items to be considered when statistical data are collected and analyzed by non-scientific organizations. Based on the suggested checklist, the outputs of statistical surveys conducted by private organizations, banks, media outlets, and enterprises in Gwangju and Chonnam are examined for the validity of their data collection and statistical analysis.

A Study on the Statistical Model Validation using Response-adaptive Experimental Design (반응적응 시험설계법을 이용하는 통계적 해석모델 검증 기법 연구)

  • Jung, Byung Chang;Huh, Young-Chul;Moon, Seok-Jun;Kim, Young Joong
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2014.10a / pp.347-349 / 2014
  • Model verification and validation (V&V) is a current research topic on building computational models with high predictive capability, addressing the general concepts, processes, and statistical techniques involved. The hypothesis test for validity checking is one model-validation technique; it gives a guideline for evaluating the validity of a computational model when only limited experimental data exist due to restricted test resources (e.g., time and budget). Such tests mainly control Type I error, the risk of rejecting a valid computational model, since quantifying Type II error is not feasible in model validation. However, Type II error, the risk of accepting an invalid computational model, should be an important consideration for engineered products whose predicted results carry high risk. This paper proposes a technique, called response-adaptive experimental design, that reduces Type II error by adaptively designing the experimental conditions of the validation experiment. A tire tread block problem and a numerical example demonstrate the effectiveness of the response-adaptive experimental design for validity evaluation.
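
To make the Type I/Type II trade-off concrete, here is a generic sketch that uses a simple one-sample t-test as the validity check and estimates the Type II error by Monte Carlo. It is not the paper's response-adaptive design; all parameters are hypothetical.

```python
import numpy as np
from scipy import stats

def model_is_accepted(experiments, model_pred, alpha=0.05):
    """One-sample t-test of H0: experiments are centred on the model
    prediction. Rejecting a valid model is the Type I error (rate alpha)."""
    return stats.ttest_1samp(experiments, popmean=model_pred).pvalue >= alpha

def type2_error(bias, sigma, n, model_pred=10.0, alpha=0.05, reps=5000):
    """Monte-Carlo estimate of Type II error: the probability of accepting
    the model when the true response is shifted from it by `bias`."""
    rng = np.random.default_rng(0)
    accepted = sum(
        model_is_accepted(rng.normal(model_pred + bias, sigma, n),
                          model_pred, alpha)
        for _ in range(reps)
    )
    return accepted / reps

# With scarce test resources (n = 5), an invalid model is often accepted;
# adding experiments shrinks the Type II error.
for n in (5, 10, 30):
    print(f"n = {n:2d}: Type II error ~ {type2_error(1.0, 2.0, n):.3f}")
```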

Development of Measurement Assurance Test Procedures between Calibrations (계기 검교정간의 보증시험 절차의 개발)

  • Yum, Bong-Jin;Cho, Jae-Gyeun;Lee, Dong-Wha
    • IE interfaces / v.6 no.1 / pp.55-65 / 1993
  • A nonstandard instrument used in the field frequently goes out of calibration due to environmental noise, misuse, aging, etc. A substantial loss may result if such an instrument is used to check product quality and performance. Traditional periodic calibration at the calibration center cannot detect out-of-calibration status while the instrument is in use; therefore, statistical methods are needed to check the status of a nonstandard instrument in the field between calibrations. This paper develops a unified measurement assurance model in which statistical calibration at the calibration center is combined with measurement assurance testing in the field. We develop statistical procedures to detect changes in precision and in the coefficients of the calibration equation. Further, computational experiments are conducted to evaluate how the power of the test varies with the parameters involved. Based on the computational results, we suggest procedures for designing effective measurement assurance tests.
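
As a rough illustration of checking an instrument between calibrations, the sketch below tests whether repeated field readings of a reference standard of known value have drifted from the fitted calibration line. The model and all numbers are hypothetical, not the paper's unified procedure.

```python
import numpy as np
from scipy import stats

def between_calibration_check(readings, ref_value, b0, b1, alpha=0.05):
    """Test whether repeated field readings of a reference standard of
    known value have drifted from the calibration line y = b0 + b1*x.
    A significant mean residual suggests out-of-calibration status."""
    resid = np.asarray(readings) - (b0 + b1 * ref_value)
    p = stats.ttest_1samp(resid, popmean=0.0).pvalue
    return {"mean_shift": float(resid.mean()), "p_value": float(p),
            "recalibrate": bool(p < alpha)}

# Hypothetical instrument calibrated as y = 0.2 + 1.01*x, now drifted +0.8.
rng = np.random.default_rng(7)
readings = 0.2 + 1.01 * 50.0 + 0.8 + rng.normal(0.0, 0.5, 8)
print(between_calibration_check(readings, ref_value=50.0, b0=0.2, b1=1.01))
```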

Development of a Quality Check Algorithm for the WISE Pulsed Doppler Wind Lidar (WISE 펄스 도플러 윈드라이다 품질관리 알고리즘 개발)

  • Park, Moon-Soo;Choi, Min-Hyeok
    • Atmosphere / v.26 no.3 / pp.461-471 / 2016
  • A quality check algorithm for the Weather Information Service Engine (WISE) pulsed Doppler wind lidar is developed from the viewpoint of the spatial and temporal consistency of observed wind speed. Threshold values for the quality check are determined by statistical analysis of the standard deviations of the three wind-speed components obtained by the wind lidar and of the vertical gradient of horizontal wind speed obtained by a radiosonde system. The algorithm includes a carrier-to-noise ratio (CNR) check, a data-availability check, and a check on the vertical gradient of horizontal wind speed. That is, data whose CNR is below -29 dB, whose data availability is below 90%, or whose vertical gradient of horizontal wind speed is below $-0.028\,s^{-1}$ or above $0.032\,s^{-1}$ are classified as 'doubtful' and flagged. The algorithm is applied to data obtained at Bucheon station for the period from 1 to 30 September 2015. The number of 'doubtful' data points peaks around 2000 m, but the ratio of 'doubtful' to total data at each height increases with height owing to the atmospheric boundary layer height, cloud, rainfall, etc. The data-availability check is also found to be more effective at removing erroneous noise data than the CNR or vertical-gradient checks.
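
The thresholds quoted in this abstract translate directly into a flagging routine. The sketch below applies the three checks (CNR, availability, vertical gradient) to a single range gate; the function and argument names are illustrative, not from the paper.

```python
def flag_wind_lidar(cnr_db, availability_pct, dudz_per_s):
    """Apply the three checks from the abstract to one range gate and
    return its quality flag; argument names are illustrative."""
    reasons = []
    if cnr_db < -29.0:                        # carrier-to-noise ratio check
        reasons.append("low CNR")
    if availability_pct < 90.0:               # data-availability check
        reasons.append("low availability")
    if not -0.028 <= dudz_per_s <= 0.032:     # vertical-gradient check (s^-1)
        reasons.append("gradient out of range")
    return ("doubtful", reasons) if reasons else ("good", [])

print(flag_wind_lidar(cnr_db=-31.2, availability_pct=95.0, dudz_per_s=0.010))
# -> ('doubtful', ['low CNR'])
```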

Semiparametric Inference for a Multistate Stochastic Survival Model

  • Sung Chil Yeo
    • Communications for Statistical Applications and Methods / v.5 no.1 / pp.239-263 / 1998
  • In this paper, we consider a multistate survival model that incorporates covariates and contains two illness states and two death states. The underlying stochastic process is assumed to follow a nonhomogeneous Markov process. Estimates of survival, transition, and competing-risks probabilities are obtained via partial likelihood and nonparametric maximum likelihood. Our discussion is based on the statistical theory of counting processes. An illustration is given using data on patients in a heart transplant program. Goodness-of-fit procedures are also discussed to check the adequacy of the model.
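
For a feel of the nonparametric, counting-process-based estimators the abstract mentions, here is a sketch of the product-limit (Kaplan-Meier) survival estimate, the one-state special case of such multistate estimators. The data are toy values, not the heart-transplant data.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate; `events[i]` is 1
    for an observed event and 0 for censoring. Returns (time, S(t)) steps."""
    data = sorted(zip(times, events))
    n_at_risk, s, curve, i = len(data), 1.0, [], 0
    while i < len(data):
        t, deaths, n_here = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            deaths += data[i][1]
            n_here += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk          # product-limit step
            curve.append((t, round(s, 4)))
        n_at_risk -= n_here
    return curve

# Toy follow-up data (times, event indicators), not the transplant data.
print(kaplan_meier([5, 8, 8, 12, 15, 20], [1, 1, 0, 1, 0, 1]))
```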

Exploratory Data Analysis for microarray experiments with replicates

  • Lee, Eun-Kyung;Yi, Sung-Gon;Park, Tae-Sung
    • Proceedings of the Korean Statistical Society Conference / 2005.11a / pp.37-41 / 2005
  • Exploratory data analysis (EDA) is the initial stage of data analysis and provides a useful overview of the whole microarray experiment. If the experiments are replicated, the analyst should check the quality and reliability of the microarray data within the same experimental condition before deeper statistical analysis. We present an EDA method focusing on the quality and reproducibility of replicates.
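
A minimal version of such a replicate-quality screen is sketched below: pairwise correlations of log-intensities across replicated arrays, with simulated data standing in for a real microarray experiment.

```python
import numpy as np

def replicate_quality(expr, labels):
    """Pairwise Pearson correlations of log2 intensities across replicated
    arrays; an unusually low value singles out a replicate to inspect."""
    logged = np.log2(np.asarray(expr, dtype=float) + 1.0)  # genes x reps
    r = np.corrcoef(logged, rowvar=False)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            print(f"{labels[i]} vs {labels[j]}: r = {r[i, j]:.3f}")

# Simulated 3 replicates over 500 genes; replicate C carries extra noise.
rng = np.random.default_rng(3)
base = rng.lognormal(6.0, 1.0, 500)
expr = np.column_stack([base * rng.lognormal(0.0, s, 500)
                        for s in (0.1, 0.1, 0.8)])
replicate_quality(expr, ["rep A", "rep B", "rep C"])
```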

Release of Microdata and Statistical Disclosure Control Techniques (마이크로데이터 제공과 통계적 노출조절기법)

  • Kim, Kyu-Seong
    • Communications for Statistical Applications and Methods / v.16 no.1 / pp.1-11 / 2009
  • When microdata are released to users, data are disclosed record by record, and some risk of disclosing respondents' information is inevitable. Statistical disclosure control techniques are statistical tools for reducing the risk of disclosure while preserving data utility when data are released. In this paper, we review the concepts of disclosure and disclosure risk as well as statistical disclosure control techniques, and then investigate strategies for selecting a statistical disclosure control technique in relation to data utility. The risk-utility frontier map method is illustrated as an example. Finally, we list check points for each step of a microdata release.
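
As one simple ingredient of the risk side of such a risk-utility trade-off, the sketch below counts records whose quasi-identifier combination is unique in the file. It is a crude illustration, not the paper's method, and the records are invented.

```python
from collections import Counter

def unique_record_risk(records, quasi_identifiers):
    """Count records whose quasi-identifier combination is unique in the
    file, a crude screen for re-identification risk before release."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    sizes = Counter(keys)
    risky = sum(1 for k in keys if sizes[k] == 1)
    return risky, risky / len(records)

data = [
    {"age": 34, "region": "Seoul", "job": "nurse"},
    {"age": 34, "region": "Seoul", "job": "nurse"},
    {"age": 71, "region": "Jeju",  "job": "pilot"},   # unique -> risky
    {"age": 29, "region": "Busan", "job": "clerk"},
    {"age": 29, "region": "Busan", "job": "clerk"},
]
print(unique_record_risk(data, ["age", "region", "job"]))  # (1, 0.2)
```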

A summary-concept based analysis on the representative values and the measures of spread with the 9th grade Korean mathematics textbook (중학교 3학년 수학교과서 통계단원에 나타난 요약개념 분석)

  • Lee, Young-Ha;Lee, Eun-Hee
    • The Mathematical Education / v.50 no.4 / pp.489-505 / 2011
  • This study analyzes the focus of textbooks in the statistical chapters on representative values (central tendency) and measures of spread. Applying the summary-concept criteria of Juhyeon Nam (2007), four aspects of the chapters were examined: (1) the definition and teleological validity of the representative values, (2) the definition and practical value of the measures of spread, (3) the distributional form underlying the representative values and measures of spread, and (4) location and scale preservation or invariance of those measures. For the representative values, some definitions were insufficient to check the teleological validity of the measure. Most definitions of the measures of spread were based on practical viewpoints, and no preparation for future statistical inference was found, even by implication. Some books mention representative values and measures of spread for distributions, but we could not find any comment on the correspondence between the sample mean and the expectation of a distribution or the population mean. It is encouraging, however, that some books check the validity of the corresponding measures through the location and scale preservation or invariance property, which was not found in the previous curriculum.
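
The location-scale preservation/invariance property mentioned here is easy to demonstrate numerically: under x → a·x + b, representative values follow the full transform while measures of spread ignore b and scale by |a|, as the sketch below shows with hypothetical scores.

```python
import statistics as st

def summarize(xs):
    return {"mean": st.mean(xs), "median": st.median(xs),
            "stdev": st.pstdev(xs), "range": max(xs) - min(xs)}

scores = [52, 60, 61, 65, 72, 80]
a, b = 2.0, 10.0                               # scale and location change
moved = [a * x + b for x in scores]

before, after = summarize(scores), summarize(moved)
for key in before:
    print(f"{key:6s}: {before[key]:7.3f} -> {after[key]:7.3f}")
# Representative values follow the transform (value -> a*value + b);
# measures of spread ignore b and scale by |a| (value -> a*value).
```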

Developing the Quality Assessment Indicators for the National Processing Statistics of Korea

  • Kim, Soo-Taek;Jeong, Ki-Ho;Kim, Seol-Hee
    • Communications for Statistical Applications and Methods / v.14 no.3 / pp.649-665 / 2007
  • The improvement of quality is a continuous process, and one of the main objectives of the Statistical Strategy launched by the Korea National Statistical Office (KNSO) is enhancing the quality of Korea's national statistics. In this paper, we define processing statistics, classify Korea's national processing statistics, and develop quality indicators and a checklist for assessing them. During development, the indicators were discussed with the processing-statistics managers at KNSO, and the checklist was tested in a pilot study covering a variety of processing-statistics areas.