• Title/Summary/Keyword: bias errors


Different penalty methods for assessing interval from first to successful insemination in Japanese Black heifers

  • Setiaji, Asep;Oikawa, Takuro
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.9
    • /
    • pp.1349-1354
    • /
    • 2019
  • Objective: The objective of this study was to determine the best approach for handling missing records of the interval from first to successful insemination (FS) in Japanese Black heifers. Methods: Of a total of 2,367 records of heifers born between 2003 and 2015, 206 (8.7%), from open heifers, were missing. Four penalty methods based on the number of inseminations were set as follows: C1, FS average according to the number of inseminations; C2, a constant number of days, 359; C3, maximum number of FS days for each insemination; and C4, average of the FS at the last insemination and the FS of C2. C5 was generated by adding a constant number (21 d) to the highest number of FS days in each contemporary group. The bootstrap method was used to compare the five methods in terms of bias, mean squared error (MSE), and the coefficient of correlation between estimated breeding values (EBV) from non-censored and censored data. Three percentages of missing records (5%, 10%, and 15%) were investigated using a random censoring scheme. A univariate animal model was used for the genetic analysis. Results: Heritability of FS in the non-censored data was 0.012 ± 0.016, slightly lower than the average estimate from the five penalty methods. C1, C2, and C3 showed lower standard errors of estimated heritability but inconsistent results across different percentages of missing records. C4 showed moderate but more stable standard errors for all percentages of missing records, whereas C5 showed the highest standard errors compared with the non-censored data. The MSE of C4 heritability was 0.633 × 10⁻⁴, 0.879 × 10⁻⁴, 0.876 × 10⁻⁴, and 0.866 × 10⁻⁴ for 5%, 8.7%, 10%, and 15% missing records, respectively. Thus, C4 showed the lowest and most stable MSE of heritability; its coefficients of correlation for EBV were 0.88, 0.93, and 0.90 for heifer, sire, and dam, respectively. Conclusion: C4 demonstrated the highest positive correlation with the non-censored data set and was consistent across different percentages of missing records. We conclude that C4 is the best penalty method for missing records because of its stable estimated parameters and highest coefficient of correlation.
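
A minimal Python sketch of the bootstrap bias/MSE comparison described in the abstract above. For simplicity the statistic compared here is the plain FS mean rather than a model-based heritability, and the data, names, and C2 fill values are illustrative placeholders.

```python
import numpy as np

def bootstrap_bias_mse(reference, imputed, n_boot=1000, seed=0):
    """Bootstrap bias and MSE of the mean FS, taking the non-censored
    data as truth and a penalty-filled (imputed) vector as the estimator."""
    rng = np.random.default_rng(seed)
    true_value = reference.mean()
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(imputed), size=len(imputed))
        estimates[b] = imputed[idx].mean()
    bias = estimates.mean() - true_value
    mse = np.mean((estimates - true_value) ** 2)
    return bias, mse

# Censor ~8.7% of synthetic FS records at random and fill them with a
# constant 359 d, i.e. the C2 rule from the abstract.
rng = np.random.default_rng(1)
fs_days = rng.gamma(shape=2.0, scale=40.0, size=2367)   # synthetic FS records (d)
missing = rng.random(fs_days.size) < 0.087
fs_c2 = np.where(missing, 359.0, fs_days)
print(bootstrap_bias_mse(fs_days, fs_c2))
```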

Preprocessing of Transmitted Spectrum Data for Development of a Robust Non-destructive Sugar Prediction Model of Intact Fruits (과실의 비파괴 당도 예측 모델의 성능향상을 위한 투과스펙트럼의 전처리)

  • Noh, Sang-Ha;Ryu, Dong-Soo
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.22 no.4
    • /
    • pp.361-368
    • /
    • 2002
  • The aim of this study was to investigate the effect of preprocessing the transmitted energy spectrum data on the development of a robust model to predict the sugar content of intact apples. The spectrum data were measured from 120 Fuji apple samples conveyed at a speed of 2 apples per second. Preprocessing algorithms such as MSC, SNV, first derivative, OSC, and their combinations were developed and applied to the raw spectrum data set. The results indicated that the correlation coefficients between the transmitted energy values at each wavelength and the sugar contents of the apples were significantly improved by preprocessing, in particular by MSC and SNV, compared with no preprocessing. The SEPs of the prediction models differed greatly depending on the preprocessing method applied to the raw spectrum data, the largest being 1.265 %Brix and the smallest 0.507 %Brix. This means that an appropriate preprocessing method matched to the characteristics of the spectrum data set should be found or developed to minimize the prediction errors. MSC and SNV were observed to be closely related to prediction accuracy, OSC to the number of PLS factors, and the first derivative resulted in a decrease in prediction accuracy. A robust calibration model could be developed by the combined preprocessing of MSC and OSC, which gave SEP = 0.507 %Brix, bias = 0.0327, and R² = 0.8823.
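
A minimal sketch of two of the preprocessing methods named in the abstract above, MSC and SNV, in their generic textbook form rather than the authors' exact implementation; the synthetic spectra and matrix shapes are illustrative.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum by its own
    mean and standard deviation (rows = samples, columns = wavelengths)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference (default: the mean spectrum) and remove slope and offset."""
    if reference is None:
        reference = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(reference, s, deg=1)
        corrected[i] = (s - intercept) / slope
    return corrected

# Usage with a synthetic 120 x 256 transmitted-spectrum matrix
X = np.random.default_rng(0).normal(size=(120, 256)).cumsum(axis=1)
X_msc = msc(X)
X_snv = snv(X)
```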

Application and First Evaluation of the Operational RAMS Model for the Dispersion Forecast of Hazardous Chemicals - Validation of the Operational Wind Field Generation System in CARIS (유해화학물질 대기확산 예측을 위한 RAMS 기상모델의 적용 및 평가 - CARIS의 바람장 모델 검증)

  • Kim, C.H.;Na, J.G.;Park, C.J.;Park, J.H.;Im, C.S.;Yoon, E.;Kim, M.S.;Park, C.H.;Kim, Y.J.
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.19 no.5
    • /
    • pp.595-610
    • /
    • 2003
  • Statistical indices such as RMSE (root mean square error), mean bias error, and IOA (index of agreement) are used to evaluate the three-dimensional wind and temperature fields predicted by the operational meteorological model RAMS (Regional Atmospheric Modeling System) implemented in CARIS (Chemical Accident Response Information System) for the dispersion forecast of hazardous chemicals in the event of chemical accidents in Korea. The operational atmospheric model RAMS in CARIS is designed to use GDAPS, GTS, and AWS meteorological data obtained from the KMA (Korea Meteorological Administration) to generate the three-dimensional initial meteorological fields. The predicted meteorological variables, such as wind speed, wind direction, temperature, and precipitation amount during 19-23 August 2002, are extracted at the grid point nearest each meteorological monitoring site and validated against observations over the Korean peninsula. The results show that the mean bias and root mean square error are 0.9 m/s and 1.85 m/s, respectively, for wind speed at 10 m above the ground, and 1.45°C and 2.82°C for surface temperature. Of particular interest is the distribution of the RAMS forecast error with altitude: relatively small errors are found in the near-surface atmosphere for the wind and temperature fields, while the errors grow larger with increasing altitude. Overall, some overprediction relative to the observations is detected for the wind and temperature fields, whereas relatively small errors are found in the near-surface atmosphere. These discrepancies are partly attributed to the oversimplified spacing of soil, soil contents, and initial temperature fields, suggesting that some improvement could probably be gained if the sub-grid-scale nature of the moisture and temperature fields were taken into account. However, the IOA values for the wind field (0.62) and the temperature field (0.78) are greater than the 'good' criterion (> 0.5) suggested by other studies. The good IOA values, together with the relatively small wind-field errors in the near-surface atmosphere, imply that, on the basis of the current meteorological data for the initial fields, RAMS has good potential to be used as an operational meteorological model for predicting urban- or local-scale three-dimensional wind fields for dispersion forecasts associated with hazardous chemical releases in Korea.
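
A short Python sketch of the three evaluation statistics named in the abstract above: RMSE, mean bias error, and the index of agreement (here in its common Willmott form, which is an assumption). The synthetic wind-speed series and the variable names are illustrative.

```python
import numpy as np

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

def mean_bias(pred, obs):
    return np.mean(pred - obs)

def ioa(pred, obs):
    """Index of agreement (Willmott); 1 is perfect, > 0.5 is often taken
    as 'good' in model-evaluation studies such as the one above."""
    obs_mean = obs.mean()
    denom = np.sum((np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return 1.0 - np.sum((pred - obs) ** 2) / denom

# Example with a synthetic 10 m wind-speed series (m/s)
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=500)
pred = obs + rng.normal(0.9, 1.5, size=500)   # positively biased forecast
print(rmse(pred, obs), mean_bias(pred, obs), ioa(pred, obs))
```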

A Study on HAUSAT-2 Momentum Wheel Start-up Method (초소형위성 HAUSAT-2 모멘텀 휠 Start-up 방안 연구)

  • Lee, Byung-Hoon;Kim, Soo-Jung;Chang, Young-Keun
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.33 no.9
    • /
    • pp.73-80
    • /
    • 2005
  • This paper addresses a newly proposed start-up method for the HAUSAT-2 momentum wheel. The HAUSAT-2 is a 25 kg-class nanosatellite that is stabilized to Earth pointing by a 3-axis active control method. The momentum wheel performs two functions: it provides a pitch-axis momentum bias while measuring satellite pitch and roll attitude. Pitch control is accomplished in the conventional way by driving the momentum wheel in response to pitch attitude errors. Precession control and nutation damping are provided by driving the pitch-axis magnetic torquer. The momentum wheel nominally spins at a particular bias rate and changes speed to produce control torques. This simulation study investigates the feasibility and performance of a proposed strategy for starting up the wheel. The proposed start-up strategy shows that the pitch momentum wheel can be successfully started up to its nominal speed from rest and that the satellite can be stabilized to nadir pointing.
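
A toy single-axis sketch of a torque-limited momentum-wheel spin-up, intended only to illustrate the kind of start-up behaviour the abstract discusses; the inertia, torque limit, gain, and nominal speed are placeholders, not HAUSAT-2 values, and this is not the controller proposed in the paper.

```python
import numpy as np

# The wheel is driven from rest toward its nominal bias speed by a
# torque-limited speed loop, and the equal-and-opposite angular momentum
# imparted to the body (to be dumped by the pitch-axis magnetic torquer)
# is tracked alongside.  All numbers are illustrative placeholders.
I_w = 2.0e-4          # wheel inertia, kg m^2
omega_nom = 500.0     # nominal bias speed, rad/s
tau_max = 1.0e-3      # motor torque limit, N m
k_speed = 0.01        # speed-loop gain, N m per rad/s
dt = 0.1              # time step, s

omega_w, h_body, t = 0.0, 0.0, 0.0
while omega_w < 0.99 * omega_nom:
    tau = np.clip(k_speed * (omega_nom - omega_w), -tau_max, tau_max)
    omega_w += (tau / I_w) * dt      # wheel accelerates toward nominal speed
    h_body -= tau * dt               # reaction momentum accumulated by the body
    t += dt

print(f"spin-up time {t:.0f} s, body momentum to dump {h_body:.3f} N m s")
```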

Orbit Determination of High-Earth-Orbit Satellites by Satellite Laser Ranging

  • Oh, Hyungjik;Park, Eunseo;Lim, Hyung-Chul;Lee, Sang-Ryool;Choi, Jae-Dong;Park, Chandeok
    • Journal of Astronomy and Space Sciences
    • /
    • v.34 no.4
    • /
    • pp.271-280
    • /
    • 2017
  • This study presents the application of satellite laser ranging (SLR) to the orbit determination (OD) of high-Earth-orbit (HEO) satellites. Two HEO satellites are considered: the Quasi-Zenith Satellite-1 (QZS-1), a Japanese elliptical inclined geosynchronous orbit (EIGSO) satellite, and Compass-G1, a Chinese geostationary orbit (GEO) satellite. One week of normal point (NP) data was collected for each satellite to perform the OD based on a batch least-squares process. Five SLR tracking stations obtained 374 NPs for QZS-1 in eight days, whereas only two ground tracking stations could track Compass-G1, yielding 68 NPs in ten days. Two types of station bias estimation and a station data weighting strategy were utilized for the OD of QZS-1. The post-fit root-mean-square (RMS) residuals of the two week-long arcs were 11.98 cm and 10.77 cm when estimating the biases once per arc (MBIAS). These residuals decreased significantly, to 2.40 cm and 3.60 cm, when estimating the biases every pass (PBIAS). The resulting OD precision was then evaluated by the orbit overlap method, yielding three-dimensional errors of 55.013 m with MBIAS and 1.962 m with PBIAS for an overlap period of six days. For the OD of Compass-G1, no station weighting strategy was applied, and only MBIAS was utilized because of the lack of NPs. The post-fit RMS residuals of the OD were 8.81 cm and 12.00 cm with 49 NPs and 47 NPs, respectively, and the corresponding three-dimensional orbit overlap error over four days was 160.564 m. These results indicate that the amount of SLR tracking data is critical for obtaining a precise OD of HEO satellites using SLR, because additional parameters such as station bias can only be estimated with sufficient tracking data. Furthermore, a stand-alone SLR-based orbit solution is consistently attainable for HEO satellites if a target satellite is continuously trackable over a specific period.
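
A small sketch of the orbit-overlap precision measure used in the abstract above: the RMS of the three-dimensional position differences between two OD solutions evaluated at common epochs. The ephemerides, sampling interval, and error levels below are synthetic placeholders, not the QZS-1 results.

```python
import numpy as np

def overlap_3d_rms(ephem_a, ephem_b):
    """Orbit-overlap precision: RMS of 3-D position differences between
    two OD solutions at common epochs; inputs are (N, 3) arrays in metres."""
    diff = np.linalg.norm(ephem_a - ephem_b, axis=1)
    return np.sqrt(np.mean(diff ** 2))

# Illustrative: two arcs overlapping for six days, sampled every 10 minutes.
rng = np.random.default_rng(0)
n = 6 * 24 * 6
arc1 = rng.normal(scale=1.0, size=(n, 3)) + 4.2e7   # ~GEO radius, m
arc2 = arc1 + rng.normal(scale=1.2, size=(n, 3))    # metre-level differences
print(f"overlap 3-D RMS: {overlap_3d_rms(arc1, arc2):.3f} m")
```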

An Empirical Study of the Recovery Experiment in Clinical Chemistry (임상화학검사실에서 회수율 실험의 실증적 연구)

  • Chang, Sang-Wu;Lee, Sang-Gon;Song, Eun-Young;Park, Yong-Won;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.38 no.3
    • /
    • pp.184-188
    • /
    • 2006
  • The recovery experiment in clinical chemistry is performed to estimate proportional systematic error. When assessing analytical performance, it must be recognized that all measurements carry some margin of error. Proportional systematic error is the type of error whose magnitude increases as the concentration of the analyte increases. This error is often caused by a substance in the sample matrix that reacts with the sought-for analyte and therefore competes with the analytical reagent. Recovery experiments are therefore used rather selectively and do not have a high priority when another analytical method is available for comparison purposes. They may still be useful for understanding the nature of any bias revealed in comparison-of-kits experiments. Recovery should be expressed as a percentage because the experimental objective is to estimate proportional systematic error, which is a percentage type of error. Ideal recovery is 100.0%; the difference between 100 and the observed recovery (in percent) is the proportional systematic error. We calculated the amount of analyte added by multiplying the concentration of the added analyte solution by the dilution factor (mL standard)/(mL standard + mL specimen), and took the difference between the sample with addition and the sample with dilution. When making judgments on method performance, the observed errors should be compared with the defined allowable error. The average recovery needs to be converted to proportional error (100% − recovery) and then compared with an analytical quality requirement expressed in percent. The results of the recovery experiments were total protein (101.4%), albumin (97.4%), total bilirubin (104%), alkaline phosphatase (89.1%), aspartate aminotransferase (102.8%), alanine aminotransferase (103.2%), gamma-glutamyl transpeptidase (97.6%), creatine kinase (105.4%), lactate dehydrogenase (95.9%), creatinine (103.1%), blood urea nitrogen (102.9%), uric acid (106.4%), total cholesterol (108.5%), triglycerides (89.6%), glucose (93%), amylase (109.8%), calcium (102.8%), and inorganic phosphorus (106.3%). We then compared the observed errors with the amount of error allowable for each test; no item exceeded the CLIA criteria for acceptable performance.
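
A minimal worked example of the recovery calculation described above: the amount of analyte added is the standard concentration times the dilution factor (mL standard)/(mL standard + mL specimen), recovery is the measured concentration difference expressed as a percentage of that amount, and the proportional systematic error is 100% minus the recovery. The concentrations and volumes below are illustrative, not values from the paper.

```python
def recovery_percent(conc_spiked, conc_diluted, std_conc, ml_std, ml_spec):
    """Recovery experiment: amount added = standard concentration x
    dilution factor; recovery = measured difference / amount added x 100."""
    amount_added = std_conc * ml_std / (ml_std + ml_spec)
    recovered = conc_spiked - conc_diluted
    return 100.0 * recovered / amount_added

# Illustrative: 0.1 mL of a 1000 mg/dL glucose standard added to 1.0 mL specimen.
recovery = recovery_percent(conc_spiked=185.0, conc_diluted=96.0,
                            std_conc=1000.0, ml_std=0.1, ml_spec=1.0)
proportional_error = 100.0 - recovery   # proportional systematic error, %
print(f"recovery {recovery:.1f}%, proportional systematic error {proportional_error:.1f}%")
```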


The Behavior Economics in Storytelling (이야기하기의 행동경제학)

  • Kim, Kyung-Seop;Kim, Jeong-Lae
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.4
    • /
    • pp.329-337
    • /
    • 2019
  • Many tales delivered in a storytelling auditorium or theater take distorted and deteriorated forms rather than exquisite and refined ones. Furthermore, when the false interpretations of tale performers are added to the texts of the tales, the problem becomes worse. In the case of oral folk tales, there can be discordance between the standpoint of the tale performer and the contents of the tale. This thesis points out the behavioral-economics problems involved in reading and interpreting tales by investigating the missing parts of a text. Human rationality is bounded: instead of making the best choice, bounded rationality leads consumers to a decision they consider sufficient, requiring no further consideration of the item at hand. It is heuristics that operate in this simplified decision-making process. Heuristics draw on established empirical notions and specific information, which is why cognitive biases can arise and sometimes lead to inaccurate judgment. As oral literature is fundamentally based on the guesswork and perceptual biases of the general public, it should be considered within the heuristic framework of behavioral economics. This thesis examines the thinking types and behavioral patterns of the general public from the perspective of heuristics by analyzing storytelling based on personal or public memory. In addition, it addresses significant but intangible factors such as the errors of the oral storyteller, deviations of the story, and the responses of the audience.

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1251-1259
    • /
    • 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies, but manual drawing is time-consuming and prone to subjective errors. Therefore, this study aimed to test a deep learning (DL) algorithm for the automated measurement of native T1 and extracellular volume (ECV) fraction in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained from 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years), including 36 with left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 with dilated cardiomyopathy, and 27 healthy volunteers, were included. A commercial DL algorithm based on 2D U-Net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurements and the reference standard were assessed. Interobserver agreement among the four radiologists was also analyzed. Results: The DL algorithm successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-contrast T1 map, with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between DL and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) with a bias of 9.5 msec (95% limits of agreement [LOA], -23.6 to 42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) with a bias of 0.7% (95% LOA, -2.8% to 4.2%) on a per-subject basis. Agreement between DL and each of the four radiologists was excellent (intraclass correlation coefficient [ICC] of 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC of 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
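
A brief sketch of the two agreement measures reported above: the Dice similarity coefficient for segmentation masks and the Bland-Altman bias with 95% limits of agreement for per-subject measurements. The masks and T1 values below are synthetic and illustrative.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def bland_altman(auto, reference):
    """Bias and 95% limits of agreement between automated and reference
    per-subject measurements (e.g. native T1 in msec or ECV in %)."""
    diff = np.asarray(auto) - np.asarray(reference)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative myocardium masks and per-subject native T1 values (msec)
ref_mask = np.zeros((128, 128), dtype=bool); ref_mask[40:90, 40:90] = True
auto_mask = np.zeros_like(ref_mask); auto_mask[42:90, 40:88] = True
print(dice(auto_mask, ref_mask))

rng = np.random.default_rng(0)
ref_t1 = rng.normal(1250, 60, size=95)
auto_t1 = ref_t1 + rng.normal(9.5, 17, size=95)
print(bland_altman(auto_t1, ref_t1))
```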

Retrieval and Validation of Precipitable Water Vapor using GPS Datasets of Mobile Observation Vehicle on the Eastern Coast of Korea

  • Kim, Yoo-Jun;Kim, Seon-Jeong;Kim, Geon-Tae;Choi, Byoung-Choel;Shim, Jae-Kwan;Kim, Byung-Gon
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.4
    • /
    • pp.365-382
    • /
    • 2016
  • Global Positioning System (GPS) measurements from the Mobile Observation Vehicle (MOVE) on the eastern coast of Korea were compared with REFerence (REF) values from fixed GPS sites to assess the performance of precipitable water vapor (PWV) retrievals in a kinematic environment. The MOVE-PWV retrievals showed comparatively similar trends and fairly good agreement with REF-PWV, with a root-mean-square error (RMSE) of 7.4 mm and R² of 0.61, indicating statistical significance with a p-value of 0.01. PWV retrievals for the June cases showed better agreement than those of the other months, with a mean bias of 2.1 mm and an RMSE of 3.8 mm. We further investigated the relationships between the determinant factors of GPS signal quality and the PWV retrievals for a detailed error analysis. The multipath (MP) errors of the L1 and L2 pseudo-ranges had the best indices for the June cases, 0.75-0.99 m. We also found that both the position dilution of precision (PDOP) and signal-to-noise ratio (SNR) values in the June cases were better than those in the other cases. That is, key factors such as MP errors, PDOP, and SNR that can affect GPS signals should be considered to obtain more stable performance. The MOVE data can be used to provide water vapor information with high spatial and temporal resolution during rapidly changing severe weather, such as frequently occurs on the Korean Peninsula.

DETECTION AND MASKING OF CLOUD CONTAMINATION IN HIGH-RESOLUTION SST IMAGERY: A PRACTICAL AND EFFECTIVE METHOD FOR AUTOMATION

  • Hu, Chuanmin;Muller-Karger, Frank;Murch, Brock;Myhre, Douglas;Taylor, Judd;Luerssen, Remy;Moses, Christopher;Zhang, Caiyun
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.1011-1014
    • /
    • 2006
  • Coarse-resolution (9-50 km pixels) sea surface temperature (SST) satellite data are frequently considered adequate for open-ocean research. However, coastal regions, including coral reef, estuarine, and mesoscale upwelling regions, require high-resolution (1-km pixel) SST data. AVHRR SST data often suffer from navigation errors of several kilometres and still require manual navigation adjustments. A second serious problem is the faulty and ineffective cloud-detection algorithms used operationally; many of these are based on radiance thresholds and moving-window tests, and with these methods increasing sensitivity leads to masking of valid pixels. These errors lead to significant cold-pixel biases and hamper image compositing, anomaly detection, and time-series analysis. Here, after manual navigation of over 40,000 AVHRR images, we implemented a new cloud filter that differs from other published methods. The filter first compares a pixel value with a climatological value built from the historical database, and then tests it against a time-based median value derived for that pixel from all satellite passes collected within ±3 days. If the difference is larger than a predefined threshold, the pixel is flagged as cloud. We tested the method and compared it with in situ SST from several shallow-water buoys in the Florida Keys. Cloud statistics from all satellite sensors (AVHRR, MODIS) show that a climatology filter with a 4°C threshold and a median filter with a 2°C threshold are effective and accurate for filtering clouds without masking good data. The RMS difference between concurrent in situ and satellite SST data for shallow waters (< 10 m bottom depth) is < 1°C, with only a small bias. The filter has been applied to the entire series of high-resolution SST data since 1993 (including MODIS SST data since 2003), and a climatology has been constructed to serve as the baseline for detecting anomaly events.
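
A minimal sketch of the two-stage cloud test described above, using the 4°C climatology threshold and the 2°C ±3-day median threshold. Combining the two comparisons as a logical OR and flagging only pixels colder than the reference are interpretations of the abstract, and all arrays below are synthetic.

```python
import numpy as np

def flag_clouds(sst, climatology, recent_passes,
                clim_threshold=4.0, median_threshold=2.0):
    """Flag a pixel as cloud when it is colder than the climatology by more
    than clim_threshold, or colder than the median of passes within +/-3
    days by more than median_threshold (thresholds in deg C)."""
    median_sst = np.nanmedian(recent_passes, axis=0)
    cloudy = ((climatology - sst > clim_threshold) |
              (median_sst - sst > median_threshold))
    return cloudy

# Illustrative 1-km tiles: a 100 x 100 scene, a climatology field, and a
# stack of passes from the surrounding +/-3 days.
rng = np.random.default_rng(0)
clim = 28.0 + rng.normal(0, 0.3, size=(100, 100))
passes = clim + rng.normal(0, 0.5, size=(12, 100, 100))
scene = clim + rng.normal(0, 0.5, size=(100, 100))
scene[:20, :20] -= 6.0                           # a cold, cloud-contaminated patch
print(flag_clouds(scene, clim, passes).mean())   # fraction of pixels flagged
```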
