• Title/Summary/Keyword: Sampling and analysis error

Search Results: 197

ERROR ANALYSIS ASSOCIATED WITH UNIFORM HERMITE INTERPOLATIONS OF BANDLIMITED FUNCTIONS

  • Annaby, Mahmoud H.;Asharabi, Rashad M.
    • Journal of the Korean Mathematical Society
    • /
    • v.47 no.6
    • /
    • pp.1299-1316
    • /
    • 2010
  • We derive estimates for the truncation, amplitude and jitter-type errors associated with Hermite-type interpolations at equidistant nodes of functions in Paley-Wiener spaces. We give pointwise and uniform estimates, and present examples and comparisons indicating that applying Hermite interpolations improves on methods that use the classical sampling theorem.
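A minimal numerical sketch of the comparison described above, under assumptions of my own (the test function sinc(t − 0.3) in the Paley-Wiener space PW_π and the node spacings are illustrative choices, not the paper's examples): reconstruct the function from equidistant samples with the classical Shannon series, and with the Hermite (derivative) sampling series at twice the node spacing, then look at the truncation error at a point.

```python
import numpy as np

# Test function in PW_pi; np.sinc(x) = sin(pi x)/(pi x).
f = lambda t: np.sinc(t - 0.3)

def df(t):
    """Derivative of np.sinc at t - 0.3 (our nodes never hit the singularity)."""
    x = t - 0.3
    return (np.cos(np.pi * x) - np.sinc(x)) / x

def shannon(t, N):
    """Truncated classical sampling series, nodes at the integers -N..N."""
    n = np.arange(-N, N + 1)
    return np.sum(f(n) * np.sinc(t - n))

def hermite(t, N):
    """Truncated Hermite (derivative) sampling series, nodes at even integers."""
    n = 2 * np.arange(-N, N + 1)
    return np.sum((f(n) + (t - n) * df(n)) * np.sinc((t - n) / 2) ** 2)

t0 = 0.37
err_shannon = abs(shannon(t0, 40) - f(t0))
err_hermite = abs(hermite(t0, 40) - f(t0))
print(err_shannon, err_hermite)   # only truncation error remains; both small
```

The Hermite series trades derivative information for coarser nodes; the sinc² kernel decays faster, which is the mechanism behind the improved truncation estimates the paper derives.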

Economic-Statistical Design of Double Sampling T2 Control Chart under Weibull Failure Model (와이블 고장모형 하에서의 이중샘플링 T2 관리도의 경제적-통계적 설계)

  • Hong, Seong-Ok;Lee, Min-Koo;Lee, Jooho
    • Journal of Korean Society for Quality Management
    • /
    • v.43 no.4
    • /
    • pp.471-488
    • /
    • 2015
  • Purpose: The double sampling $T^2$ chart is a useful tool for detecting a relatively small shift in the process mean when the process is controlled by multiple variables. This paper finds the optimal design of the double sampling $T^2$ chart, in both an economic and a statistical sense, under a Weibull failure model. Methods: The expected cost function is mathematically derived using a recursive equation approach. The optimal designs are found using a genetic algorithm for numerical examples and compared to those of the single sampling $T^2$ chart. Sensitivity analysis is performed to examine the parameter effects. Results: The proposed design outperforms the optimal design of the single sampling $T^2$ chart in terms of the expected cost per unit time and the Type-I error rate for all the numerical examples considered. Conclusion: The double sampling $T^2$ chart can be designed to satisfy both economic and statistical requirements under a Weibull failure model, and the resulting design is better than its single sampling counterpart.
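For orientation, here is a hypothetical sketch of the double-sampling $T^2$ decision rule itself (the warning limit `w` and control limits `L1`, `L2` below are placeholder chi-square quantiles with known in-control mean and covariance, not the economic-statistical optimum the paper finds with a genetic algorithm):

```python
import numpy as np
from scipy.stats import chi2

def t2(xbar, mu0, Sigma_inv, n):
    """Hotelling T^2 statistic for a sample mean of size n."""
    d = xbar - mu0
    return n * d @ Sigma_inv @ d

def double_sampling_t2(x1, x2_fn, mu0, Sigma, w, L1, L2):
    """x1: first-stage sample (n1 x p); x2_fn: callable drawing the stage-2 sample."""
    Sigma_inv = np.linalg.inv(Sigma)
    T1 = t2(x1.mean(axis=0), mu0, Sigma_inv, len(x1))
    if T1 <= w:
        return "in-control"          # decide immediately on the small sample
    if T1 > L1:
        return "signal"              # clear out-of-control signal
    x2 = x2_fn()                     # otherwise take the second sample
    x = np.vstack([x1, x2])          # pool both stages
    T = t2(x.mean(axis=0), mu0, Sigma_inv, len(x))
    return "signal" if T > L2 else "in-control"

p = 2
mu0, Sigma = np.zeros(p), np.eye(p)
w, L1, L2 = chi2.ppf(0.95, p), chi2.ppf(0.999, p), chi2.ppf(0.995, p)

x1 = np.zeros((5, p))                # first-stage sample exactly at target mean
print(double_sampling_t2(x1, lambda: np.zeros((10, p)), mu0, Sigma, w, L1, L2))
# -> "in-control" (T1 = 0 <= w)
```

The economic-statistical design problem is then to choose the sample sizes, sampling interval, and the three limits to minimize expected cost per unit time subject to constraints on the error rates.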

Error Analysis Caused by Using the DFT in Numerical Evaluation of Rayleigh's Integral (레일리 인테그랄의 수치해석상 오차에 대한 이론적 고찰)

  • Kim, Sun-I.
    • Journal of Biomedical Engineering Research
    • /
    • v.10 no.3
    • /
    • pp.323-330
    • /
    • 1989
  • Large bias errors that occur during a numerical evaluation of Rayleigh's integral are due not to the replicated-source problem but to the coincidence of the singularities of the Green's function with the sampling points in the Fourier domain. We found that there is no replicated-source problem in evaluating Rayleigh's integral numerically, by reason of the periodic assumption on the input sequence in the DFT, or equivalently of the periodic sampling of the Green's function in the Fourier domain. The wrap-around error is due not to an overlap of individual adjacent sources but to undersampling of the Green's function in the frequency domain. What is replicated and overlapped is the inverse Fourier transformed Green's function rather than the source function.
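An illustrative one-dimensional sketch of the wrap-around mechanism described above (not the paper's Rayleigh-integral code): multiplying DFT spectra performs a *circular* convolution, so sampling the Green's function spectrum at too few frequency points folds the tail of the response back onto its head, while sampling it at N1 + N2 − 1 or more points (zero-padding) removes the overlap.

```python
import numpy as np

src = np.ones(32)                       # simple constant "source" sequence
g = np.exp(-0.2 * np.arange(32))        # decaying "Green's function" kernel

# Spectrum sampled at only 32 points -> circular convolution, wrap-around.
circ = np.fft.ifft(np.fft.fft(src, 32) * np.fft.fft(g, 32)).real

# Spectrum sampled at 63 points -> true linear convolution, no wrap-around.
n = len(src) + len(g) - 1
lin = np.fft.ifft(np.fft.fft(src, n) * np.fft.fft(g, n)).real

exact = np.convolve(src, g)
wrap = np.abs(circ - exact[:32]).max()  # aliased tail folded onto the head
print(np.abs(lin - exact).max(), wrap)  # padded version exact; unpadded is not
```

Note that what aliases here is the convolution with the kernel, i.e. the (inverse-transformed) Green's function, not the source itself, in line with the abstract's conclusion.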


Anomaly Detection In Real Power Plant Vibration Data by MSCRED Base Model Improved By Subset Sampling Validation (Subset 샘플링 검증 기법을 활용한 MSCRED 모델 기반 발전소 진동 데이터의 이상 진단)

  • Hong, Su-Woong;Kwon, Jang-Woo
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.1
    • /
    • pp.31-38
    • /
    • 2022
  • This paper applies MSCRED (Multi-Scale Convolutional Recurrent Encoder-Decoder), an expert-independent, unsupervised neural-network model for multivariate time-series analysis. Because MSCRED is based on an autoencoder, its training data must not be contaminated; to overcome this limitation, a training-data sampling technique called Subset Sampling Validation is used. Using labeled vibration data from power plant equipment, the classification performance of MSCRED is evaluated with the anomaly score in two cases: 1) abnormal data mixed into the training data, and 2) the abnormal data of case 1 removed from the training data. Through this, the paper presents an expert-independent anomaly-diagnosis framework that is robust to erroneous data, and presents a concise and accurate solution for various multivariate time-series domains.
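A hypothetical toy version of the subset-sampling idea (a rank-1 PCA reconstruction stands in for the MSCRED encoder-decoder, and all data are synthetic): fit the model on several random subsets of the training data and aggregate each point's reconstruction error across subsets, so that a few contaminated training points cannot dominate any single fit.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Normal" data lie near a 1-D subspace; row 0 is a planted contaminated point.
X = rng.standard_normal((200, 1)) @ np.array([[1.0, 2.0, 3.0]])
X += 0.01 * rng.standard_normal(X.shape)
X[0] = [10.0, -10.0, 10.0]

def recon_error(train, X):
    """Reconstruction error of every row of X under a rank-1 PCA fit on `train`."""
    mu = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mu, full_matrices=False)
    v = vt[0]                                  # leading principal direction
    proj = (X - mu) @ np.outer(v, v) + mu      # project onto the fitted subspace
    return np.linalg.norm(X - proj, axis=1)

# Fit on 9 random subsets of 50 rows and take the median score per point.
scores = np.median(
    [recon_error(X[rng.choice(len(X), 50, replace=False)], X) for _ in range(9)],
    axis=0,
)
print(scores.argmax())   # -> 0: the contaminated row gets the top anomaly score
```

The aggregation across subsets is what makes the score robust when the training set itself contains anomalies, which is the contamination problem the abstract targets.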

Correspondence between Error Probabilities and Bayesian Wrong Decision Losses in Flexible Two-stage Plans

  • Ko, Seoung-gon
    • Journal of the Korean Statistical Society
    • /
    • v.29 no.4
    • /
    • pp.435-441
    • /
    • 2000
  • Ko (1998, 1999) proposed certain flexible two-stage plans that can serve as a one-step interim analysis in on-going clinical trials. The proposed plans are optimal simultaneously in both a Bayes and a Neyman-Pearson sense. The Neyman-Pearson interpretation is that the average expected sample size is minimized, subject only to the two overall error rates $\alpha$ and $\beta$, of the first and second kind respectively. The Bayes interpretation is that the Bayes risk, involving both sampling cost and wrong decision losses, is minimized. An example of this correspondence is given using a binomial setting.
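In the binomial setting, the overall error rates and average sample size of a two-stage plan are finite binomial sums. The sketch below uses illustrative plan constants of my own (n1, a1, r1, n2, a2 are not Ko's plans): take n1 observations, accept H0 if the success count x1 ≤ a1, reject if x1 ≥ r1, and otherwise take n2 more and accept iff the total is ≤ a2.

```python
from scipy.stats import binom

def two_stage(p, n1, a1, r1, n2, a2):
    """Acceptance/rejection probabilities and average sample size at success rate p."""
    accept = binom.cdf(a1, n1, p)          # decided "accept" at stage 1
    reject = binom.sf(r1 - 1, n1, p)       # decided "reject" at stage 1
    for x1 in range(a1 + 1, r1):           # continuation region
        cont = binom.pmf(x1, n1, p)
        accept += cont * binom.cdf(a2 - x1, n2, p)
        reject += cont * binom.sf(a2 - x1, n2, p)
    cont_prob = 1 - binom.cdf(a1, n1, p) - binom.sf(r1 - 1, n1, p)
    asn = n1 + n2 * cont_prob              # average sample number
    return accept, reject, asn

acc0, rej0, asn0 = two_stage(0.05, n1=20, a1=1, r1=5, n2=20, a2=4)
acc1, rej1, asn1 = two_stage(0.30, n1=20, a1=1, r1=5, n2=20, a2=4)
alpha, beta = rej0, acc1                   # first- and second-kind error rates
print(round(alpha, 3), round(beta, 3), round(asn0, 1))
```

Minimizing `asn` subject to bounds on `alpha` and `beta` is the Neyman-Pearson reading; weighting the same quantities by sampling cost and wrong-decision losses gives the Bayes reading of the correspondence.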


Quality Assurance and Quality Control method for Volatile Organic Compounds measured in the Photochemical Assessment Monitoring Station (광화학측정망에서 측정한 휘발성유기화합물의 정도관리 방법)

  • Shin, Hye-Jung;Kim, Jong-Choon;Kim, Yong-Pyo
    • Particle and aerosol research
    • /
    • v.7 no.1
    • /
    • pp.31-44
    • /
    • 2011
  • The hourly volatile organic compound (VOC) concentrations measured between 2005 and 2008 at the Bulgwang photochemical assessment monitoring station were investigated to establish a quality assurance and quality control (QA/QC) procedure. Systematic errors, erratic errors, and random errors, the last manifested as outliers and highly fluctuating data, were checked and removed. About 17.3% of the raw data were excluded according to the proposed QA/QC procedure. After QA/QC, the relative standard deviations of the concentrations of 15 representative species decreased from 94.7-548.0% to 63.4-125.8%, implying that the QA/QC procedure is appropriate. To further evaluate the adequacy of the QA/QC procedure, principal components analysis (PCA) was carried out. When the data after the QA/QC procedure were used for PCA, the extracted principal components differed from those obtained from the raw data and could logically explain the major emission sources (gasoline vapor, vehicle exhaust, and solvent usage). The QA/QC procedure based on this concept of errors is thus inferred to be appropriate for VOCs. However, an additional QA/QC step considering the relationships between species in the atmosphere needs further consideration.
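A minimal sketch of the error-screening step, with thresholds and rules that are illustrative assumptions rather than the station's actual QA/QC criteria: random-error outliers are flagged with a robust z-score, and erratic isolated spikes are flagged as values far from both of their hourly neighbours.

```python
import numpy as np

def qaqc(series, z_max=4.0, jump_max=5.0):
    """Return the cleaned series and the fraction of data rejected."""
    x = np.asarray(series, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0   # robust spread (guard against 0)
    z = 0.6745 * (x - med) / mad              # robust z-score
    fwd = np.abs(np.diff(x, append=x[-1]))    # distance to next neighbour
    bwd = np.abs(np.diff(x, prepend=x[0]))    # distance to previous neighbour
    spike = np.minimum(fwd, bwd)              # flags only isolated spikes
    keep = (np.abs(z) <= z_max) & (spike <= jump_max)
    return x[keep], 1 - keep.mean()

data = [2.0, 2.1, 1.9, 2.2, 50.0, 2.0, 2.1, 1.8, 2.0, 2.2]
clean, frac = qaqc(data)
print(clean, round(frac, 2))   # the 50.0 spike is removed; 10% rejected
```

Screening like this only handles single-species errors; as the abstract notes, a further step would use the relationships between species.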

Structural health monitoring for pinching structures via hysteretic mechanics models

  • Rabiepour, Mohammad;Zhou, Cong;Chase, James G.;Rodgers, Geoffrey W.;Xu, Chao
    • Structural Engineering and Mechanics
    • /
    • v.82 no.2
    • /
    • pp.245-258
    • /
    • 2022
  • Many Structural Health Monitoring (SHM) methods have been proposed for structural damage diagnosis and prognosis. However, SHM for pinched hysteretic structures can be problematic due to the high level of nonlinearity. The model-free hysteresis loop analysis (HLA) has displayed notable robustness and accuracy in identifying damage for full-scale and scaled test buildings. In this paper, the performance of HLA is compared with seven other SHM methods in identifying lateral elastic stiffness for a six-story numerical building with highly nonlinear pinching behavior. Two successive earthquakes are employed to compare the accuracy and consistency of the methods within and between events. Robustness is assessed across sampling rates of 50-1000 Hz in noise-free conditions, and then with 10% root mean square (RMS) noise added to the responses at a 250 Hz sampling rate. Results confirm HLA is the method most robust to sampling rate and noise. HLA preserves high accuracy even when the sampling rate drops to 50 Hz, where the performance of the other methods deteriorates considerably. In noisy conditions, the maximum absolute estimation error is less than 4% for HLA. The overall results show HLA has high robustness and accuracy for an extremely nonlinear, but realistic, case compared to a range of leading and recent model-based and model-free methods.
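A greatly simplified sketch of the idea behind hysteresis-loop-based stiffness identification (not the HLA algorithm itself; the single-degree-of-freedom response and stiffness value are synthetic): segment the measured force-displacement response into half-cycles at the velocity sign changes, then estimate the elastic stiffness of a loading branch by a least-squares slope.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
x = np.sin(t)                        # displacement response (synthetic)
k_true = 12.0
F = k_true * x + 0.05 * np.cos(t)    # restoring force with a small viscous part

v = np.gradient(x, t)                                # velocity
segments = np.flatnonzero(np.diff(np.sign(v)) != 0)  # half-cycle boundaries
i0, i1 = 0, segments[0]                              # first loading branch
k_hat = np.polyfit(x[i0:i1], F[i0:i1], 1)[0]         # fitted slope = stiffness
print(round(k_hat, 2))   # close to k_true; small bias from the viscous term
```

Tracking how this fitted stiffness degrades from one half-cycle (or one earthquake) to the next is what turns the loop-by-loop slopes into a damage indicator.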

Power Analysis for Tests Adjusted for Measurement Error

  • Heo, Sun-Yeong;Eltinge, John L.
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2003.05a
    • /
    • pp.1-14
    • /
    • 2003
  • In many cases, the measurement error variances may be functions of the unknown true values or of related covariates. In some cases, the measurement error variances increase in proportion to the value of the predictor. This paper develops estimators of the parameters of a linear measurement error variance function under a stratified multistage random sampling design and additional conditions. It also evaluates and compares the power of an asymptotically unbiased test with that of an asymptotically biased test. The proposed methods are applied to blood sample measurements from the U.S. Third National Health and Nutrition Examination Survey (NHANES III).
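A back-of-envelope illustration of why measurement error matters for power (not the paper's survey-weighted estimators; the one-sided z-test and the numbers are assumptions of mine): measurement error with variance σ²ₘ inflates the sampling variance, so power = Φ(δ / √((σ² + σ²ₘ)/n) − z₁₋α) drops as σ²ₘ grows.

```python
from scipy.stats import norm

def power(delta, sigma2, sigma2_m, n, alpha=0.05):
    """Power of a one-sided z-test for a mean shift delta with measurement error."""
    se = ((sigma2 + sigma2_m) / n) ** 0.5        # inflated standard error
    return norm.cdf(delta / se - norm.ppf(1 - alpha))

p_clean = power(delta=0.5, sigma2=1.0, sigma2_m=0.0, n=50)
p_noisy = power(delta=0.5, sigma2=1.0, sigma2_m=0.5, n=50)
print(round(p_clean, 3), round(p_noisy, 3))      # power shrinks with error variance
```

When σ²ₘ is itself a function of the predictor, as in the paper, estimating that variance function is the prerequisite for any such adjusted power calculation.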


Uncertainty Analysis on Wind Speed Profile Measurements of LIDAR by Applying SODAR Measurements as a Virtual True Value (가상적 참값으로써 소다 측정자료를 적용한 라이다에 의한 풍속연직분포 측정의 불확도 분석)

  • Kim, Hyun-Goo;Choi, Ji-Hwi
    • Journal of the Korean Solar Energy Society
    • /
    • v.30 no.4
    • /
    • pp.79-85
    • /
    • 2010
  • The uncertainty in WindCube LIDAR measurements, which are specific to wind profiling at less than 200 m above ground level in wind resource assessments, was analyzed focusing on the error caused by its volume-sampling principle. A two-month SODAR measurement campaign conducted in an urban environment was adopted as the reference wind profile, assuming that various atmospheric boundary layer shapes had been captured. The measurement error of LIDAR at a height z was defined as the difference between the SODAR reference data, assumed to be a virtually true value, and the wind speed numerically averaged over the sampling volume height interval $z{\pm}12.5m$. The uncertainty in the measurement was found to have a maximum in the lower part of the atmospheric boundary layer and to decrease with increasing height. It was also found that the relative standard deviations of the wind speed error ratios were 6.98, 2.70 and 1.12% at heights of 50, 100 and 150 m above ground level, respectively.
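A small sketch of the volume-sampling effect itself, where a power-law profile is an assumed stand-in for the measured SODAR profiles (the shear exponent and reference speed are illustrative): compare the wind speed at a height z with its average over the z ± 12.5 m sampling volume.

```python
import numpy as np

alpha = 0.25                                   # shear exponent (assumed)
u = lambda z: 8.0 * (z / 100.0) ** alpha       # power-law wind profile, m/s

def volume_error(z, half=12.5, m=501):
    """Relative error between the volume-averaged and point wind speed at z."""
    zz = np.linspace(z - half, z + half, m)
    u_avg = u(zz).mean()                       # average over the sampling volume
    return abs(u_avg - u(z)) / u(z)

for z in (50.0, 100.0, 150.0):
    print(z, round(100 * volume_error(z), 3))  # error shrinks with height
```

Because the profile's curvature relative to the 25 m window shrinks as z grows, the volume-averaging error is largest low in the boundary layer, matching the height trend the abstract reports.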

A Study on Measurement Method of Optical Path Error in Arrayed Waveguide Grating Router (광도파로열 격자 라우터의 경로오차 측정 방법에 관한 연구)

  • Park, Jae-Sung;Chung, Young-Chul;Mun, Seong-Uk
    • Proceedings of the KIEE Conference
    • /
    • 1999.07e
    • /
    • pp.2431-2433
    • /
    • 1999
  • Phase errors in the arrayed waveguides degrade the performance of an AWG router, especially for dense WDM systems, so it is necessary to measure the phase errors and compensate for them. A method for analyzing the interference signal from a low-coherence interferometer to measure the path-length-difference phase error is studied. Interference signals generated assuming intentional path length difference errors of 0.1$\sim$0.4${\mu}m$ are analyzed, and the results show that the path length difference phase error of ${\Delta}L$ can be accurately measured to within ${\pm}14^{\circ}$ of sampling phase error.
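A toy illustration of why sub-micrometre path-length errors matter (the wavelength and effective index below are assumed typical values, not those of the measured AWG): a path-length error ΔL maps to a phase error φ = 2π·n_eff·ΔL/λ, so even 0.1-0.4 µm is a large fraction of a cycle at 1.55 µm.

```python
import numpy as np

lam = 1.55e-6        # operating wavelength, m (assumed)
n_eff = 1.45         # waveguide effective index (assumed)

for dL_um in (0.1, 0.2, 0.4):
    phi = 2 * np.pi * n_eff * dL_um * 1e-6 / lam   # phase error, rad
    print(dL_um, round(np.degrees(phi) % 360.0, 1))  # phase error in degrees
```

This is why phase errors in this range must be measured interferometrically and compensated before the AWG can resolve dense WDM channels.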
