• Title/Summary/Keyword: random errors

Search Results: 444

Discrete-Time Analysis of Throughput and Response Time for LAP Derivative Protocols under Markovian Block-Error Pattern (마르코프 오류모델 하에서의 LAP 계열 프로토콜들의 전송성능과 반응시간에 대한 이산-시간 해석)

  • Cho, Young-Jong;Choi, Dug-Kyoo
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2786-2800
    • /
    • 1997
  • In this paper, we investigate how well channel memory (statistical dependence in the occurrence of transmission errors) can be used in the evaluation of widely used error control schemes. For this we assume a special case, called the simplest Markovian block-error pattern, with two states, in which each block is classified according to whether or not its transmission is in error. We apply the derived pattern to the performance evaluation of the practical link-level procedures LAPB/D/M with multi-reject options, and investigate both throughput and user-perceived response time behaviors in the discrete-time domain to determine how much the performance of error recovery is improved under burst-error conditions. Through numerical examples, we show that the simplest Markovian block-error pattern tends to be superior in throughput and delay characteristics to the random-error case. Also, instead of the mean alone, we propose a new measure of the response time, specified as the mean plus two standard deviations so as to consider user-perceived worst cases, and show that it results in much greater sensitivity to parameter variations than does the mean alone.

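The two-state block-error pattern the abstract describes can be sketched as a short simulation. This is a minimal illustration, not the paper's model: function names, transition probabilities, and the Good/Bad state labels are assumptions.

```python
import random

def simulate_block_errors(n_blocks, p_gb, p_bg, seed=0):
    """Two-state Markovian block-error pattern: each transmitted block is
    classified as correct (Good state) or in error (Bad state).
    p_gb = P(Good -> Bad), p_bg = P(Bad -> Good)."""
    rng = random.Random(seed)
    state = 0  # 0 = Good, 1 = Bad; start in Good
    blocks = []
    for _ in range(n_blocks):
        if state == 0 and rng.random() < p_gb:
            state = 1
        elif state == 1 and rng.random() < p_bg:
            state = 0
        blocks.append(state)
    return blocks

# Stationary block-error rate is p_gb / (p_gb + p_bg); errors arrive in
# bursts of mean length 1 / p_bg, unlike the independent random-error case.
blocks = simulate_block_errors(200_000, p_gb=0.01, p_bg=0.19)
error_rate = sum(blocks) / len(blocks)
```

The paper's worst-case response-time measure (mean plus two standard deviations) could be computed from per-frame delay samples of such a simulation in the same way.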

Detection of unexploded ordnance (UXO) using marine magnetic gradiometer data (해양 자력구배 탐사자료를 이용한 UXO 탐지)

  • Salem Ahmed;Hamada Toshio;Asahina Joseph Kiyoshi;Ushijima Keisuke
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.1
    • /
    • pp.97-103
    • /
    • 2005
  • Recent development of marine magnetic gradient systems, using arrays of sensors, has made it possible to survey large contaminated areas very quickly. However, underwater unexploded ordnance (UXO) can be moved by water currents. Because of this mobility, the cleanup process in such situations becomes dynamic rather than static. This implies that detection should occur in near real-time for successful remediation. Therefore, there is a need for a fast interpretation method to rapidly detect signatures of underwater objects in marine magnetic data. In this paper, we present a fast method for location and characterization of underwater UXO. The approach utilises gradient interpretation techniques (analytic signal and Euler methods) to locate the objects precisely. Then, using an iterative linear least-squares technique, we obtain the magnetization characteristics of the sources. The approach was applied to a theoretical marine magnetic anomaly, with random errors, over a known source. We demonstrate the practical utility of the method using marine magnetic gradient data from Japan.
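The final step the abstract mentions, estimating source magnetization by linear least squares from a noisy anomaly over a known source, can be sketched on a toy 2-D profile. The dipole kernel, geometry, and noise level below are illustrative assumptions, not the authors' 3-D gradiometer formulation.

```python
import random

def dipole_kernel(x, depth):
    """Vertical-field geometry factor of a vertical dipole (unit moment)
    at horizontal offset x along a 2-D profile (illustrative kernel)."""
    return (2*depth**2 - x**2) / (x**2 + depth**2)**2.5

# Synthetic anomaly over a known source with random errors added:
rng = random.Random(42)
depth, moment = 3.0, 50.0
xs = [0.5*i - 20.0 for i in range(81)]
data = [moment*dipole_kernel(x, depth) + rng.gauss(0.0, 0.05) for x in xs]

# Linear least-squares estimate of the source magnetization (its moment):
# with a known kernel g, the model is data = m*g + noise, so
# m_hat = (g . data) / (g . g).
g = [dipole_kernel(x, depth) for x in xs]
m_hat = sum(gi*di for gi, di in zip(g, data)) / sum(gi*gi for gi in g)
```

With the source position fixed (as the analytic-signal/Euler step would provide), the magnetization estimate reduces to this one-parameter fit; the paper iterates a multi-parameter version.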

Extending the calibration between empirical influence function and sample influence function to t-statistic (경험적 영향함수와 표본영향함수 간 차이 보정의 t통계량으로의 확장)

  • Kang, Hyunseok;Kim, Honggie
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.889-904
    • /
    • 2021
  • This study is a follow-up study of Kang and Kim (2020). In this study, we derive the sample influence functions of the t-statistic, which were not directly derived in previous research. Through these results, we both mathematically examine the relationship between the empirical influence function and the sample influence function, and consider a method to approximate the sample influence function by the empirical influence function. Also, the validity of the relationship between an approximated sample influence function and the empirical influence function is verified by a simulation of a random sample of size 300 from a normal distribution. As a result of the simulation, the relationship between the sample influence function derived from the t-statistic and the empirical influence function, and the method of approximating the sample influence function through the empirical influence function, were verified. This research is significant in that it proposes both a method that reduces errors in approximating the empirical influence function and an effective, practical method, building on previous research, that approximates the sample influence function directly through the empirical influence function with a constant correction.
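The relationship between the two influence functions can be illustrated for the sample mean, where it holds in closed form (the paper works with the t-statistic, which is harder). This sketch uses an n = 300 normal sample as in the simulation described; variable names are assumptions.

```python
import random
import statistics

rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(300)]  # n = 300, as in the simulation
n = len(x)
xbar = statistics.fmean(x)

# Case-deletion (sample) influence of observation i on the mean:
#   xbar - mean(x without x_i) == (x_i - xbar) / (n - 1) exactly,
# i.e., the empirical influence value (x_i - xbar) rescaled by 1/(n - 1).
i = 0
loo_mean = statistics.fmean(x[:i] + x[i+1:])
sample_influence = xbar - loo_mean
scaled_empirical = (x[i] - xbar) / (n - 1)
```

For nonlinear statistics such as t, the two quantities no longer agree exactly, which is why a calibration/correction between them is needed.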

UC Model with ARIMA Trend and Forecasting U.S. GDP (ARIMA 추세의 비관측요인 모형과 미국 GDP에 대한 예측력)

  • Lee, Young Soo
    • International Area Studies Review
    • /
    • v.21 no.4
    • /
    • pp.159-172
    • /
    • 2017
  • In a typical trend-cycle decomposition of GDP, the trend component is usually assumed to follow a random walk process. This paper considers an ARIMA trend and assesses the validity of the ARIMA trend model. I construct univariate and bivariate unobserved-components (UC) models allowing for the ARIMA trend. Estimation results using U.S. data are favorable to the ARIMA trend models. I also compare the forecasting performance of the UC models. Dynamic pseudo-out-of-sample forecasting exercises are implemented with recursive estimations. I find that the bivariate model outperforms the univariate model, that the smoothed estimates of trend and cycle components deliver smaller forecasting errors than the filtered estimates, and, most importantly, that allowing for the ARIMA trend can lead to statistically significant gains in forecast accuracy, providing support for the ARIMA trend model. It is worth noting that trend shocks are the main source of output fluctuations when the ARIMA trend is allowed in the UC model.
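The random-walk-trend baseline the paper starts from is the local-level model, whose filtered trend can be sketched with a scalar Kalman filter. This is the textbook special case only, not the paper's ARIMA-trend or bivariate UC specification; the variance values are assumptions.

```python
import random

def kalman_local_level(y, q, r, mu0=0.0, p0=1e6):
    """Filtered trend for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
    with Var(eta) = q and Var(eps) = r (random-walk trend plus noise)."""
    mu, p, filtered = mu0, p0, []
    for obs in y:
        p = p + q                     # predict
        k = p / (p + r)               # Kalman gain
        mu = mu + k * (obs - mu)      # update
        p = (1.0 - k) * p
        filtered.append(mu)
    return filtered

# Simulate a random-walk trend observed with noise, then filter it out:
rng = random.Random(7)
mu, trend_true, y = 0.0, [], []
for _ in range(500):
    mu += rng.gauss(0.0, 0.1)            # trend shock
    trend_true.append(mu)
    y.append(mu + rng.gauss(0.0, 1.0))   # cycle/noise component
trend_hat = kalman_local_level(y, q=0.01, r=1.0)
```

The ARIMA-trend extension replaces the pure random walk for mu_t with an ARIMA process, which requires a larger state vector in the same filtering framework.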

ESTIMATION OF NITROGEN-TO-IRON ABUNDANCE RATIOS FROM LOW-RESOLUTION SPECTRA

  • Kim, Changmin;Lee, Young Sun;Beers, Timothy C.;Masseron, Thomas
    • Journal of The Korean Astronomical Society
    • /
    • v.55 no.2
    • /
    • pp.23-36
    • /
    • 2022
  • We present a method to determine nitrogen abundance ratios with respect to iron ([N/Fe]) from molecular CN-band features observed in low-resolution (R ~ 2000) stellar spectra obtained by the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). Various tests are carried out to check the systematic and random errors of our technique, and the impact of signal-to-noise (S/N) ratios of stellar spectra on the determined [N/Fe]. We find that the uncertainty of our derived [N/Fe] is less than 0.3 dex for S/N ratios larger than 10 in the ranges Teff = [4000, 6000] K, log g = [0.0, 3.5], [Fe/H] = [-3.0, 0.0], [C/Fe] = [-1.0, +4.5], and [N/Fe] = [-1.0, +4.5], the parameter space that we are interested in to identify N-enhanced stars in the Galactic halo. A star-by-star comparison with a sample of stars with [N/Fe] estimates available from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) also suggests a similar level of uncertainty in our measured [N/Fe], after removing its systematic error. Based on these results, we conclude that our method is able to reproduce [N/Fe] from low-resolution spectroscopic data, with an uncertainty sufficiently small to discover N-rich stars that presumably originated from disrupted Galactic globular clusters.
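The core of a template-matching abundance estimate like the one described, fitting a band strength to a noisy low-resolution spectrum by chi-square minimization, can be sketched on a toy model. The Gaussian "band", its depth scaling, and all numbers below are illustrative assumptions, not real CN-band physics or the SDSS/LAMOST pipeline.

```python
import math
import random

def toy_spectrum(nfe, wavelengths):
    """Toy absorption band whose depth scales linearly with an
    abundance-like parameter (stand-in for a CN-band feature)."""
    return [1.0 - 0.1*nfe*math.exp(-0.5*((w - 4150.0)/30.0)**2)
            for w in wavelengths]

rng = random.Random(5)
wl = [4050.0 + 5.0*i for i in range(41)]
truth = 1.5
observed = [f + rng.gauss(0.0, 0.02) for f in toy_spectrum(truth, wl)]

# Grid search minimizing chi-square against synthetic templates:
def chi2(nfe):
    return sum((o - m)**2 for o, m in zip(observed, toy_spectrum(nfe, wl)))

grid = [0.05*i for i in range(61)]     # candidate values 0.0 .. 3.0
best = min(grid, key=chi2)
```

Lowering the S/N (raising the noise sigma) widens the spread of recovered values, which is the kind of uncertainty-versus-S/N behavior the paper quantifies.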

Cloud Removal Using Gaussian Process Regression for Optical Image Reconstruction

  • Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.4
    • /
    • pp.327-341
    • /
    • 2022
  • Cloud removal is often required to construct time-series sets of optical images for environmental monitoring. In regression-based cloud removal, the selection of an appropriate regression model and the impact analysis of the input images significantly affect the prediction performance. This study evaluates the potential of Gaussian process (GP) regression for cloud removal and also analyzes the effects of cloud-free optical images and spectral bands on prediction performance. Unlike other machine learning-based regression models, GP regression provides uncertainty information and automatically optimizes hyperparameters. An experiment using Sentinel-2 multi-spectral images was conducted for cloud removal in two agricultural regions. The prediction performance of GP regression was compared with that of random forest (RF) regression. Various combinations of input images and multi-spectral bands were considered for quantitative evaluations. The experimental results showed that using multi-temporal images with multi-spectral bands as inputs achieved the best prediction accuracy. Highly correlated adjacent multi-spectral bands and temporally correlated multi-temporal images resulted in improved prediction accuracy. The prediction performance of GP regression was significantly better than that of RF regression in predicting the near-infrared band. Estimating the distribution function of the input data in GP regression could reflect the variations in the considered spectral band over a broader range. In particular, GP regression was superior to RF regression in reproducing structural patterns at both sites in terms of structural similarity. In addition, the uncertainty information provided by GP regression showed a reasonable similarity to prediction errors for some sub-areas, indicating that uncertainty estimates may be used to measure the quality of prediction results. These findings suggest that GP regression could be beneficial for cloud removal and optical image reconstruction. In addition, the impact analysis results of the input images provide guidelines for selecting optimal images for regression-based cloud removal.
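The two GP properties the study relies on, a predictive mean and a per-prediction uncertainty, can be shown with a tiny one-dimensional GP regression. This is a minimal plain-Python sketch with a fixed RBF kernel (no hyperparameter optimization) and toy data, not the study's multi-band image setup.

```python
import math

def rbf(a, b, length_scale=1.0):
    return math.exp(-0.5*((a - b)/length_scale)**2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f*M[c][k]
    x = [0.0]*n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k]*x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GP regression: predictive mean and variance at x_star."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)               # K^{-1} y
    v = solve(K, k_star)               # K^{-1} k*
    mean = sum(ks*a for ks, a in zip(k_star, alpha))
    var = rbf(x_star, x_star) - sum(ks*vi for ks, vi in zip(k_star, v))
    return mean, var

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
mean_mid, var_mid = gp_predict(xs, ys, 2.0)    # at a training point
mean_far, var_far = gp_predict(xs, ys, 10.0)   # far from the data
```

Note how the predictive variance is near zero at a training point and near the prior variance far from the data; this is the uncertainty signal the study compares against prediction errors.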

Water level forecasting for extended lead times using preprocessed data with variational mode decomposition: A case study in Bangladesh

  • Shabbir Ahmed Osmani;Roya Narimani;Hoyoung Cha;Changhyun Jun;Md Asaduzzaman Sayef
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.179-179
    • /
    • 2023
  • This study suggests a new approach to water level forecasting at extended lead times using data preprocessing with variational mode decomposition (VMD). Here, two machine learning algorithms, light gradient boosting machine (LGBM) and random forest (RF), were considered for forecasting water levels at extended lead times (i.e., 5, 10, 15, 20, 25, 30, 40, and 50 days). First, the original data at two water level stations (i.e., SW173 and SW269 in Bangladesh) and their decomposed data from VMD were prepared with antecedent lag times for analysis in the datasets of different lead times. Mean absolute error (MAE), root mean squared error (RMSE), and mean squared error (MSE) were used to evaluate the performance of the machine learning models in water level forecasting. The results show that errors were minimized when the decomposed datasets were used to predict water levels, rather than the original data alone. It was also noted that LGBM produced lower MAE, RMSE, and MSE values than RF, indicating better performance. For instance, at the SW173 station, LGBM outperformed RF on both decomposed and original data with MAE values of 0.511 and 1.566, compared to RF's MAE values of 0.719 and 1.644, respectively, at a 30-day lead time. Model performance decreased with increasing lead time. In summary, preprocessing the original data and applying machine learning models to the decomposed series has shown promising results for water level forecasting at longer lead times. It is expected that the approach of this study can assist water management authorities in taking precautionary measures based on forecasted water levels, which is crucial for sustainable water resource utilization.

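Two reusable pieces of the pipeline described above can be sketched without external libraries: building lagged features for a given lead time, and the three error metrics. VMD and LGBM themselves are external libraries and are out of scope here; function names are assumptions.

```python
import math

def make_lagged(series, n_lags, lead):
    """Build (features, target) pairs: the last n_lags values predict the
    value `lead` steps ahead, as in extended lead-time forecasting."""
    X, y = [], []
    for t in range(n_lags - 1, len(series) - lead):
        X.append(series[t - n_lags + 1 : t + 1])
        y.append(series[t + lead])
    return X, y

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - b)**2 for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(mse(y, yhat))

# Toy series: 3 antecedent values predict the level 2 steps ahead.
X, y = make_lagged(list(range(10)), n_lags=3, lead=2)
```

In the decomposed variant, each VMD mode would be lagged the same way and the per-mode forecasts summed back into a water-level prediction.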

Neoadjuvant chemoradiotherapy versus immediate surgery for resectable and borderline resectable pancreatic cancer: Meta-analysis and trial sequential analysis of randomized controlled trials

  • Shahab Hajibandeh;Shahin Hajibandeh;Christina Intrator;Karim Hassan;Mantej Sehmbhi;Jigar Shah;Eshan Mazumdar;Ambareen Kausar;Thomas Satyadas
    • Annals of Hepato-Biliary-Pancreatic Surgery
    • /
    • v.27 no.1
    • /
    • pp.28-39
    • /
    • 2023
  • We aimed to compare resection and survival outcomes of neoadjuvant chemoradiotherapy (CRT) and immediate surgery in patients with resectable pancreatic cancer (RPC) or borderline resectable pancreatic cancer (BRPC). In compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement standards, a systematic review of randomized controlled trials (RCTs) was conducted. Random effects modeling was applied to calculate pooled outcome data. Likelihood of type 1 or 2 errors in the meta-analysis model was assessed by trial sequential analysis. A total of 400 patients from four RCTs were included. When RPC and BRPC were analyzed together, neoadjuvant CRT resulted in a higher R0 resection rate (risk ratio [RR]: 1.55, p = 0.004), longer overall survival (mean difference [MD]: 3.75 years, p = 0.009) but lower overall resection rate (RR: 0.83, p = 0.008) compared with immediate surgery. When RPC and BRPC were analyzed separately, neoadjuvant CRT improved R0 resection rate (RR: 3.72, p = 0.004) and overall survival (MD: 6.64, p = 0.004) of patients with BRPC. However, it did not improve R0 resection rate (RR: 1.18, p = 0.13) or overall survival (MD: 0.94, p = 0.57) of patients with RPC. Neoadjuvant CRT might be beneficial for patients with BRPC, but not for patients with RPC. Nevertheless, the best available evidence does not include contemporary chemotherapy regimens. Patients with RPC and those with BRPC should not be combined in the same cohort in future studies.
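The pooling step behind "random effects modeling" can be sketched with the DerSimonian-Laird estimator, a common choice for this kind of meta-analysis (the review does not state which estimator it used). The effect sizes and variances below are made up for illustration, not the trial data.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes
    (e.g., log risk ratios). Returns pooled effect, its SE, and tau^2."""
    w = [1.0/v for v in variances]
    sw = sum(w)
    fixed = sum(wi*ei for wi, ei in zip(w, effects)) / sw
    q = sum(wi*(ei - fixed)**2 for wi, ei in zip(w, effects))
    c = sw - sum(wi*wi for wi in w)/sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
    wr = [1.0/(v + tau2) for v in variances]
    pooled = sum(wi*ei for wi, ei in zip(wr, effects)) / sum(wr)
    return pooled, math.sqrt(1.0/sum(wr)), tau2

# Illustrative log risk ratios and variances from four hypothetical trials:
pooled, se, tau2 = pool_random_effects([0.44, 0.30, 0.50, 0.35],
                                       [0.04, 0.06, 0.05, 0.08])
```

When the studies are perfectly homogeneous, tau^2 collapses to zero and the result equals the fixed-effect inverse-variance pool.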

Utilization of Skewness for Statistical Quality Control (통계적 품질관리를 위한 왜도의 활용)

  • Kim, Hoontae;Lim, Sunguk
    • Journal of Korean Society for Quality Management
    • /
    • v.51 no.4
    • /
    • pp.663-675
    • /
    • 2023
  • Purpose: Skewness is an indicator used to measure the asymmetry of a data distribution. In the past, product quality was judged only by mean and variance, but in modern management and manufacturing environments, various factors and volatility must be considered. Skewness helps accurately characterize the shape of a data distribution and identify outliers or problems, and can be utilized from this new perspective. We therefore propose a statistical quality control method using skewness. Methods: To generate data with the same mean and variance but different skewness, data were generated using the normal distribution and the gamma distribution. Using Minitab 18, we created 20 sets of 1,000 random data points from each of the normal and gamma distributions. With these data, it was shown that the process state can be identified sensitively by using skewness. Results: If the skewness is within ±0.2, judgments do not differ from conventional control decisions, given the error probabilities accepted for an in-control state in quality control. However, if the skewness exceeds ±0.2, a control chart based only on the standard deviation judges the process to be in control, even though the data show it is out of control. Conclusion: Using skewness in process management improves the ability to evaluate data quality and is excellent at detecting abnormal signals. With this, process improvement and process non-substitutability issues can be quickly identified and addressed.
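The paper's comparison, processes with similar mean/variance behavior but different shape, can be sketched by computing sample skewness for a normal and a gamma sample and flagging the ±0.2 band. The distributions, parameters, and the flagging helper are illustrative assumptions.

```python
import random
import statistics

def sample_skewness(x):
    """Third standardized moment of a sample."""
    m = statistics.fmean(x)
    s = statistics.pstdev(x)
    return sum((v - m)**3 for v in x) / (len(x) * s**3)

rng = random.Random(3)
symmetric = [rng.gauss(10.0, 2.0) for _ in range(1000)]       # normal process
right_skewed = [rng.gammavariate(2.0, 1.0) for _ in range(1000)]  # skewed process

# Flag samples whose skewness leaves the +/-0.2 band discussed in the paper:
flag_sym = abs(sample_skewness(symmetric)) > 0.2
flag_skw = abs(sample_skewness(right_skewed)) > 0.2
```

A gamma(2) distribution has theoretical skewness of about 1.41, so the skewed sample is flagged while a mean/SD-only chart could miss the change in shape.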

Analysis of Automatic Rigid Image-Registration on Tomotherapy (토모테라피의 자동영상정합 분석)

  • Kim, Young-Lock;Cho, Kwang Hwan;Jung, Jae-Hong;Jung, Joo-Young;Lim, Kwang Chae;Kim, Yong Ho;Moon, Seong Kwon;Bae, Sun Hyun;Min, Chul Kee;Kim, Eun Seog;Yeo, Seung-Gu;Suh, Tae Suk;Choe, Bo-Young;Min, Jung-Whan;Ahn, Jae Ouk
    • Journal of radiological science and technology
    • /
    • v.37 no.1
    • /
    • pp.37-47
    • /
    • 2014
  • The purpose of this study was to analyze translational and rotational adjustments during automatic rigid image-registration, using different control parameters, for a total of five groups on TomoTherapy (Accuray Inc, Sunnyvale, CA, USA). We selected a total of 50 patients, classified them into five groups (brain, head-and-neck, lung, abdomen, and pelvic), and used a total of 500 megavoltage computed tomography (MVCT) image sets for the analysis. From this we calculated the overall mean value (M) of the systematic and random errors after applying the different control parameters. After randomization of the patients into the five groups, we found that the overall mean value varied according to the three techniques and resolutions. The deviations for the lung, abdomen, and pelvic groups were greater than those for the brain and head-and-neck groups in all adjustments. Overall, using the "full-image" technique produces smaller deviations in the rotational adjustments, and the rotational adjustments varied distinctly with the control parameters. We conclude that combining the "full-image" technique with "standard" resolution will be helpful in repositioning patients and in correcting set-up errors prior to radiotherapy on TomoTherapy.
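The summary statistics named in the abstract can be sketched using the convention common in radiotherapy setup-error QA (assumed here, since the abstract does not define its formulas): the overall mean M, the systematic error as the SD of per-patient mean shifts, and the random error as the RMS of per-patient SDs. The shift values are hypothetical.

```python
import math
import statistics

def setup_error_summary(shifts_by_patient):
    """Population setup-error summary from per-patient registration
    shifts (e.g., mm): overall mean M, systematic error Sigma (SD of
    patient means), random error sigma (RMS of patient SDs)."""
    means = [statistics.fmean(s) for s in shifts_by_patient]
    sds = [statistics.stdev(s) for s in shifts_by_patient]
    M = statistics.fmean(means)
    Sigma = statistics.stdev(means)
    sigma = math.sqrt(statistics.fmean(sd*sd for sd in sds))
    return M, Sigma, sigma

# Hypothetical lateral shifts (mm) for three patients over repeat MVCT scans:
M, Sigma, sigma = setup_error_summary([[1.0, 1.2, 0.8],
                                       [3.0, 2.8, 3.2],
                                       [-0.5, -0.3, -0.7]])
```

Computed per anatomical group and per control-parameter setting, these three numbers support exactly the kind of group-wise deviation comparison the study reports.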