• Title/Abstract/Keyword: Error Criteria

582 search results (processing time: 0.031 seconds)

Field Performance Evaluation of Candidate Samplers for National Reference Method for PM2.5 (PM2.5 국가기준측정장비 선정을 위한 비교 측정 연구)

  • Lee, Yong Hwan;Park, Jin Su;Oh, Jun;Choi, Jin Soo;Kim, Hyun Jae;Ahn, Joon Young;Hong, You Deog;Hong, Ji Hyung;Han, Jin Seok;Lee, Gangwoong
    • Journal of Korean Society for Atmospheric Environment / v.31 no.2 / pp.157-163 / 2015
  • To establish the National Reference Method (NRM) for $PM_{2.5}$, the operational performance of 5 different commercial gravimetric-based $PM_{2.5}$ measuring instruments was assessed at the Bulkwang monitoring station from January 23, 2014 to February 28, 2014. First, the physical properties, design, and functional performance of the instruments were assessed. An evaluation was carried out to determine whether the operating method for the instruments and the levels of QA/QC activities met the data quality objectives (DQOs). To verify whether the DQOs were satisfied, the reproducibility of QA/QC procedures and the accuracy, relative sensitivity, limit of detection, margin of error, and coefficient of determination of the instruments were also evaluated. Flow rate measurements of the 15 candidate instruments indicated that all instruments met the performance criteria, with an accuracy deviation of 4.0% and a reproducibility of 0.6%. Comparison of the final $PM_{2.5}$ mass concentrations showed coefficient of determination ($R^2$) values greater than or equal to 0.9995 and concentration slopes ranging from 0.97 to 1.03. All instruments satisfied the criteria for the NRM, with estimated precision of 1.47~2.60%, accuracy of -1.90~3.00%, and absolute accuracy of 1.02~3.12%. This study found that one particular type of measuring instrument proved to be excellent, satisfying all evaluation criteria.
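As a rough illustration of the precision and accuracy criteria above, the following Python sketch computes a coefficient-of-variation precision and a relative accuracy for hypothetical sampler data (the study's exact formulas are not given in the abstract, so the definitions below are assumptions):

```python
import math

def precision_percent(values):
    # Precision as coefficient of variation (%): sample std. dev. / mean * 100.
    # (Hypothetical definition; the study's exact formula is not stated.)
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * math.sqrt(var) / mean

def accuracy_percent(candidate_mean, reference_mean):
    # Relative accuracy (%) of a candidate sampler against a reference value.
    return 100.0 * (candidate_mean - reference_mean) / reference_mean

# Hypothetical daily PM2.5 concentrations (ug/m^3) from one candidate sampler:
obs = [24.8, 25.2, 25.0, 24.6, 25.4]
print(precision_percent(obs))                       # ~1.26% precision
print(accuracy_percent(sum(obs) / len(obs), 25.3))  # ~-1.19% vs. reference 25.3
```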

Determination of Parameters for the Clark Model based on Observed Hydrological Data (실측수문자료에 의한 Clark 모형의 매개변수 결정)

  • Ahn, Tae Jin;Jeon, Hyun Chul;Kim, Min Hyeok
    • Journal of Wetlands Research / v.18 no.2 / pp.121-131 / 2016
  • The determination of a feasible design flood is most important for controlling flood damage in river management. The concentration time and storage constant in the Clark unit hydrograph method mainly affect the magnitude of the peak flood and the shape of the hydrograph. Model parameters should be calibrated using observed discharge, but owing to the deficiency of observed data the parameters have commonly been adopted from empirical formulas. This study suggests a concentration time and storage constant based on the observed rainfall-runoff data at the GongDo stage station in the Ansung river basin. To do this, five criteria have been suggested to compute the root mean square error (RMSE) and the residual of the observed and computed values. Once the concentration time and storage constant had been determined from three rainfall-runoff events selected at the station, the five criteria based on the observed hydrograph and the hydrograph computed by the Clark model were computed to determine the values of the concentration time and storage constant. A criterion is proposed to determine the concentration time and storage constant based on the results of the observed hydrograph and the Clark model. It is also shown that the exponent value of the concentration time-cumulative area curve should be determined based on the shape of the watershed.
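The RMSE and residual criteria described above can be sketched as follows; the hydrograph ordinates are hypothetical and the function names are illustrative only, not from the paper:

```python
import math

def rmse(observed, computed):
    # Root mean square error between observed and simulated hydrograph ordinates.
    if len(observed) != len(computed):
        raise ValueError("series must have equal length")
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(observed, computed)) / len(observed))

def peak_residual(observed, computed):
    # Residual between observed and computed peak discharges.
    return max(observed) - max(computed)

# Hypothetical hydrograph ordinates (m^3/s) for one rainfall-runoff event:
obs = [5.0, 42.0, 118.0, 96.0, 54.0, 23.0]
sim = [4.0, 38.0, 110.0, 99.0, 58.0, 25.0]
print(rmse(obs, sim))           # overall goodness-of-fit criterion
print(peak_residual(obs, sim))  # peak-flow criterion, here 8.0 m^3/s
```

Candidate parameter pairs (concentration time, storage constant) would be ranked by such criteria over the selected events.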

Automated Areal Feature Matching in Different Spatial Data-sets (이종의 공간 데이터 셋의 면 객체 자동 매칭 방법)

  • Kim, Ji Young;Lee, Jae Bin
    • Journal of Korean Society for Geospatial Information Science / v.24 no.1 / pp.89-98 / 2016
  • In this paper, we propose an automated areal feature matching method that is based on geometric similarity, requires no user intervention, and handles areal features in many-to-many relations, for the conflation of spatial data-sets with different scales and updating cycles. Firstly, areal features (nodes) whose inclusion-function value exceeds 0.4 are connected as edges in an adjacency matrix, and candidate corresponding areal features, including those in many-to-many relations, are identified by multiplication of the adjacency matrix. For geometrical matching, these multiple candidate corresponding areal features are transformed into an aggregated polygon, a convex hull generated by a curve-fitting algorithm. Secondly, we define matching criteria to measure geometrical quality, and these criteria are converted into normalized values (similarities) by a similarity function. Next, shape similarity is defined as a weighted linear combination of these similarities, with weights calculated by the Criteria Importance Through Intercriteria Correlation (CRITIC) method. Finally, on training data, we identify the Equal Error Rate (EER), the trade-off value in a plot of precision versus recall over all threshold values (PR curve), as the threshold and decide whether each candidate pair is a corresponding pair. Applying the proposed method to a digital topographic map and a base map of the address system (KAIS), we confirmed in visual evaluation that some many-to-many areal features were mis-detected, while in statistical evaluation precision, recall, and F-measure were high at 0.951, 0.906, and 0.928, respectively. This means that the accuracy of automated matching between different spatial data-sets by the proposed method is high. However, further research on the inclusion function and detailed matching criteria is needed to exactly quantify many-to-many areal features.
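The weighted linear combination and EER-threshold decision step might look like this in outline; the similarity values, weights, and threshold below are all hypothetical placeholders, not values from the paper:

```python
def shape_similarity(similarities, weights):
    # Weighted linear combination of normalized matching criteria.
    # Weights would come from the CRITIC method; here they are assumed precomputed.
    return sum(s * w for s, w in zip(similarities, weights))

# Hypothetical normalized similarities (e.g. position, area, shape) and CRITIC weights:
sims = [0.92, 0.85, 0.78]
weights = [0.40, 0.35, 0.25]  # must sum to 1
threshold = 0.80              # EER-derived threshold from the PR curve (assumed)

score = shape_similarity(sims, weights)
print(score)               # ~0.8605
print(score >= threshold)  # True -> accept as a corresponding pair
```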

Quality Assurance of Leaf Speed for Dynamic Multileaf Collimator (MLC) Using Dynalog Files (Dynalog file을 이용한 동적다엽조준기의 Leaf 속도 정도관리 평가)

  • Kim, Joo Seob;Ahn, Woo Sang;Lee, Woo Suk;Park, Sung Ho;Choi, Wonsik;Shin, Seong Soo
    • The Journal of Korean Society for Radiation Therapy / v.26 no.2 / pp.305-312 / 2014
  • Purpose : The purpose of this study is to analyze the mechanical and leaf speed accuracy of the dynamic multileaf collimator (DMLC) and to determine an appropriate period for quality assurance (QA). Materials and Methods : QA of the DMLC equipped with Millennium 120 leaves was performed a total of 92 times from January 2012 to June 2014. The accuracy of leaf position and isocenter coincidence for the MLC were checked using graph paper and Gafchromic EBT film, respectively. The stability of leaf speed was verified using a test file requiring the leaves to reach maximum leaf speed during gantry rotation. At the end of every leaf speed QA, the dynalog files created by the MLC controller were analyzed using the dynalog file viewer software. These files contain the planned versus actual positions for all leaves and provide the error RMS (root mean square) for individual leaf deviations and an error histogram for all leaf deviations. In this study, the data obtained from the leaf speed QA were used to screen for performance degradation of leaf speed and to determine the need for motor replacement. Results : The leaf position accuracy and isocentric coincidence of the MLC were within the tolerance range recommended by the TG-142 report. A total of 56 motors were replaced over the whole QA period. Among the motors replaced based on QA, gradually increasing patterns of error RMS values were much more common than suddenly increasing patterns. The average error RMS values of the gradually and suddenly increasing patterns were 0.298 cm and 0.273 cm, respectively. Although the average error RMS values were within the 0.35 cm limit recommended by the vendor, motors were replaced according to the criterion of no counts with misplacement > 1 cm. On average, motors showing a gradually increasing error RMS pattern were replaced after 22 days. Another 28 motors were replaced independently of the leaf speed QA.
Conclusion : This study performed periodic MLC QA to analyze the mechanical and leaf speed accuracy of the DMLC. The leaf position accuracy and isocentric coincidence of the MLC were within the tolerance values recommended by the TG-142 report. Based on the results obtained from the leaf speed QA, we conclude that the leaf speed QA protocol for the DMLC should be performed at least bimonthly in order to screen the performance of leaf speed. This periodic QA protocol can help ensure accurate IMRT delivery to patients by maintaining the performance of leaf speed.
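The error RMS screening described above can be sketched as follows; the leaf positions are hypothetical and the dynalog file parsing itself is omitted:

```python
import math

VENDOR_RMS_LIMIT_CM = 0.35   # vendor-recommended error RMS limit (from the abstract)
MISPLACEMENT_LIMIT_CM = 1.0  # replacement criterion: no counts with misplacement > 1 cm

def leaf_error_rms(planned, actual):
    # Error RMS for one leaf: planned vs. actual positions from a dynalog record.
    deviations = [p - a for p, a in zip(planned, actual)]
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

def needs_replacement(planned, actual):
    # Flag a motor when any sample is misplaced by more than 1 cm.
    return any(abs(p - a) > MISPLACEMENT_LIMIT_CM for p, a in zip(planned, actual))

# Hypothetical positions (cm) for one leaf over five control samples:
planned = [2.0, 2.5, 3.0, 3.5, 4.0]
actual = [2.1, 2.4, 3.2, 3.4, 4.1]
rms = leaf_error_rms(planned, actual)
print(rms, rms <= VENDOR_RMS_LIMIT_CM)     # ~0.126 cm, within the 0.35 cm limit
print(needs_replacement(planned, actual))  # False: no deviation exceeds 1 cm
```

Tracking such per-leaf RMS values over successive QA sessions is what reveals the gradually increasing patterns the study reports.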

Reagentless Determination of Human Serum Components Using Infrared Absorption Spectroscopy

  • Hahn, Sang-Joon;Yoon, Gil-Won;Kim, Gun-Shik;Park, Seung-Han
    • Journal of the Optical Society of Korea / v.7 no.4 / pp.240-244 / 2003
  • Simultaneous determination of the concentrations of four major components in human blood serum was investigated using Fourier-transform mid-infrared spectroscopy. Infrared spectra of human blood serum were measured in the 8.404 ∼ 10.25 ${\mu}m$ range, where the highest absorption peaks of glucose are located. A partial least squares (PLS) algorithm was used to establish a calibration model for determining total protein, albumin, globulin, and glucose levels, which are commonly measured metabolites. The standard error of cross validation obtained from our multivariate calibration model was 0.24 g/dL for total protein, 0.15 g/dL for albumin, 0.17 g/dL for globulin, and 6.68 mg/dL for glucose, which are comparable with or meet the criteria for clinical use. The results indicate that infrared absorption spectroscopy can be used to predict the concentrations of clinically important metabolites without going through a chemical process with a reagent.
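As a minimal sketch of the reported figure of merit, the standard error of cross validation can be computed from reference values and cross-validation predictions; both value sets below are hypothetical, and the PLS prediction step itself is assumed to have been done elsewhere:

```python
import math

def secv(reference, predicted):
    # Standard error of cross validation: RMS difference between laboratory
    # reference values and leave-one-out PLS predictions (assumed precomputed).
    # Some texts divide by n - 1 instead of n; n is used here.
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

# Hypothetical glucose values (mg/dL): lab reference vs. cross-validation predictions
ref = [90.0, 110.0, 130.0, 150.0]
pred = [95.0, 104.0, 136.0, 143.0]
print(secv(ref, pred))  # ~6.04 mg/dL, to be compared against the clinical criterion
```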

A Systematic Method of Hinting Interface Design (체계적인 힌팅 인터페이스 설계 방법의 연구)

  • Lee, Eun-A;Yun, Wan-Cheol;Park, Wan-Su
    • Journal of the Ergonomics Society of Korea / v.25 no.2 / pp.125-134 / 2006
  • Most users learn new, complex systems through trial-and-error experience rather than by referring to manuals, in a cognitive process called 'exploratory learning'. While exploring a system, people form prototypical rules for using the system, based especially on frequent tasks. The rules are formed from consistent task procedures and well-expected interface elements of the designed system, and they form the basis of users' knowledge for performing tasks. The decision making involved in selecting and applying those rules while interacting with an interface can be aided by properly provided hints on the interface. With appropriate hints, users can learn new systems easily and use them with fewer usability problems. This paper first reports an observation of user behavior when performing tasks with prototypical interaction rules and establishes a sound set of criteria for extracting prototypical interaction rules systematically. Two types of hints are defined. Extending hints prompt users to apply prototypical interaction rules beyond well-known tasks. Preventive hints guide users away from possible capture errors by drawing attention to variations of the rules. A systematic and practical method is proposed to identify the opportunities for both types when designing interfaces. It is then verified through a usability test that the proposed method is effective in identifying the locations and types of appropriate hints to reduce or mitigate usability problems.

Determination of coronal electron density distributions by DH type II radio bursts and CME observations

  • Lee, Jae-Ok;Moon, Yong-Jae;Lee, Jin-Yi;Lee, Kyoung-Sun;Kim, Rok-Soon
    • The Bulletin of The Korean Astronomical Society / v.40 no.1 / pp.63.1-63.1 / 2015
  • In this study, we determine coronal electron density distributions by analyzing DH type II radio observations under the assumption that a DH type II radio burst is generated by the shock formed at a CME leading edge. For this, we consider 11 Wind/WAVES DH type II radio bursts (from 2000 to 2003 and from 2010 to 2012) associated with SOHO/LASCO limb CMEs, using the following criteria: (1) the fundamental and second harmonic emission lanes are well identified; (2) the associated CME is clearly identified in the LASCO-C2 or C3 field of view at the time of the type II observation. For these events, we determine the lowest frequencies of the fundamental emission lanes and the heights of the CME leading edges. Coronal electron density distributions are obtained by minimizing the root mean square error between the observed heights of the CME leading edges and the heights of the DH type II radio bursts derived from assumed electron density distributions. We find that the estimated coronal electron density distributions range from 2.5- to 10.2-fold of Saito's coronal electron density model.
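The fitting procedure, minimizing the RMS error over fold factors of a Saito-type density model, can be sketched as follows. The density coefficients follow a commonly used form of Saito's equatorial model, but the event data here are synthetic and the simple grid search is an assumption, not the authors' exact implementation:

```python
import math

def density_saito(r):
    # A commonly used form of Saito's equatorial density model (cm^-3); r in solar radii.
    return 1.36e6 * r ** -2.14 + 1.68e8 * r ** -6.13

def plasma_frequency_khz(n_e):
    # Fundamental plasma frequency in kHz for electron density n_e in cm^-3.
    return 8.98 * math.sqrt(n_e)

def height_for_frequency(f_khz, fold):
    # Height (solar radii) where fold x Saito density yields plasma frequency f_khz.
    # Frequency decreases monotonically with height, so bisection applies.
    lo, hi = 1.05, 40.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if plasma_frequency_khz(fold * density_saito(mid)) > f_khz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def best_fold(freqs_khz, cme_heights, folds):
    # Fold factor minimizing the RMSE between CME leading-edge heights and
    # the type II burst heights implied by the scaled density model.
    def fit_rmse(fold):
        errs = [height_for_frequency(f, fold) - h for f, h in zip(freqs_khz, cme_heights)]
        return math.sqrt(sum(e * e for e in errs) / len(errs))
    return min(folds, key=fit_rmse)

# Self-consistency check with synthetic data generated at fold = 4:
heights = [3.0, 5.0]
freqs = [plasma_frequency_khz(4.0 * density_saito(h)) for h in heights]
print(best_fold(freqs, heights, [1.0, 2.0, 4.0, 8.0]))  # -> 4.0
```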


ON THE LINEARIZATION OF DEFECT-CORRECTION METHOD FOR THE STEADY NAVIER-STOKES EQUATIONS

  • Shang, Yueqiang;Kim, Do Wan;Jo, Tae-Chang
    • Journal of the Korean Mathematical Society / v.50 no.5 / pp.1129-1163 / 2013
  • Based on finite element discretization, two linearization approaches to the defect-correction method for the steady incompressible Navier-Stokes equations are discussed and investigated. By applying $m$ Newton or Picard iterations, respectively, to solve an artificial-viscosity-stabilized nonlinear Navier-Stokes problem, and then correcting the solution by solving a linear problem, two linearized defect-correction algorithms are proposed and analyzed. Error estimates with respect to the mesh size $h$, the kinematic viscosity ${\nu}$, the stability factor ${\alpha}$, and the number of nonlinear iterations $m$ are derived for the discrete solutions of the linearized one-step defect-correction algorithms. Efficient stopping criteria for the nonlinear iterations are derived. The influence of the linearizations on the accuracy of the approximate solutions is also investigated. Finally, numerical experiments on a problem with a known analytical solution, the lid-driven cavity flow, and the flow over a backward-facing step are performed to verify the theoretical results and demonstrate the effectiveness of the proposed defect-correction algorithms.
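Although the paper's algorithms operate on finite element systems, the role of a stopping criterion for the nonlinear (Picard) iterations can be illustrated on a scalar fixed-point problem; this is an analogy only, not the paper's method, and the relative-increment criterion is an assumed choice:

```python
import math

def picard(g, x0, tol=1e-10, max_iter=100):
    # Fixed-point (Picard) iteration x_{k+1} = g(x_k) with a stopping criterion
    # on the relative increment |x_{k+1} - x_k| / max(|x_{k+1}|, 1).
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) <= tol * max(abs(x_new), 1.0):
            return x_new, k
        x = x_new
    return x, max_iter

# Example: solve x = cos(x), a scalar analogue of one nonlinear fixed-point sweep.
root, iterations = picard(math.cos, 1.0)
print(root)  # ~0.739085
```

In the paper's setting, each iterate would instead be a discrete velocity-pressure pair, and the increment would be measured in a discrete norm.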

A Study on Performance-Based Design Enforcement (성능위주설계 시행의 개선방안)

  • Lee, Yang-Ju;Ko, Kyoung-Chan;Park, Woe-Chul
    • Fire Science and Engineering / v.26 no.1 / pp.68-73 / 2012
  • Performance-based design (PBD) for large-scale high-rise buildings has been enforced since July 1, 2011 to secure fire and evacuation safety. As various types of trial and error were expected in the early stage, the regulations on PBD were reviewed and a questionnaire survey of fire protection specialists was carried out to suggest solutions to the problems that might follow the enforcement. It was confirmed that PBD is required for large-scale apartment buildings, and that specific and detailed criteria for PBD methodology and evaluation, as well as PBD education for personnel who design and evaluate, are also needed.

Comparison of Estimation Methods in NONMEM 7.2: Application to a Real Clinical Trial Dataset (실제 임상 데이터를 이용한 NONMEM 7.2에 도입된 추정법 비교 연구)

  • Yun, Hwi-Yeol;Chae, Jung-Woo;Kwon, Kwang-Il
    • Korean Journal of Clinical Pharmacy / v.23 no.2 / pp.137-141 / 2013
  • Purpose: This study compared the performance of new NONMEM estimation methods using a population analysis dataset collected from a clinical study that consisted of 40 individuals and 567 observations after a single oral dose of glimepiride. Methods: The NONMEM 7.2 estimation methods tested were first-order conditional estimation with interaction (FOCEI), importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), iterative two stage (ITS), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES), using a two-compartment open model. Results: The parameters estimated by IMP, IMPMAP, ITS, SAEM, and BAYES were similar to those estimated using FOCEI, and the objective function value (OFV) used to diagnose the model criteria was significantly decreased with FOCEI, IMPMAP, SAEM, and BAYES in comparison with IMP. Parameter precision, in terms of the estimated standard errors, was good with FOCEI, IMP, IMPMAP, and BAYES. The run time for the model analysis was shortest with BAYES. Conclusion: The new estimation methods in NONMEM 7.2 performed similarly in terms of parameter estimation, but in terms of parameter precision and model run time, BAYES was most suitable for analyzing this dataset.