• Title/Summary/Keyword: Accuracy Standards


A Study of Six Sigma and Total Error Allowable in Chematology Laboratory (6 시그마와 총 오차 허용범위의 개발에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Choi, Ho-Sung;Kim, Yong-Whan;Chu, Kyung-Bok;Jung, Hae-Jin;Park, Byong-Ok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.2
    • /
    • pp.65-70
    • /
    • 2005
  • The specifications of the CLIA analytical tolerance limits are consistent with the performance goals of Six Sigma Quality Management. Six sigma analysis determines performance quality from bias and precision statistics and shows whether a method meets the criteria for six sigma performance. Performance standards calculate allowable total error from several different criteria. Six sigma means six standard deviations from the target or mean value, corresponding to about 3.4 failures per million opportunities. The Sigma Quality Level is an indicator of process centering and process variation relative to the allowable total error. The tolerance specification is replaced by a Total Error specification, a common form of quality specification for a laboratory test. The CLIA criteria for acceptable performance in proficiency testing events are given in the form of an allowable total error, TEa; thus there is a published list of TEa specifications for regulated analytes. In terms of TEa, Six Sigma Quality Management sets a precision goal of TEa/6 and an accuracy goal of 1.5 (TEa/6). This concept is based on the proficiency testing specification of target value ±3s, with TEa derived from reference intervals, biological variation, and peer-group median surveys. We derived rules to calculate TEa as a fraction of a reference interval and from peer-group median surveys. We developed allowable total error values from peer-group survey results and the US CLIA '88 rules for 19 clinical chemistry analytes (TP, ALB, T.B, ALP, AST, ALT, CL, LD, K, Na, CRE, BUN, T.C, GLU, GGT, CA, phosphorus, UA, and TG); the results were as follows.
Sigma levels versus TEa, computed from the peer-group median CV of each item by group mean and assessed by process performance against the six sigma tolerance limits, were TP ($6.1{\sigma}$/9.3%), ALB ($6.9{\sigma}$/11.3%), T.B ($3.4{\sigma}$/25.6%), ALP ($6.8{\sigma}$/31.5%), AST ($4.5{\sigma}$/16.8%), ALT ($1.6{\sigma}$/19.3%), CL ($4.6{\sigma}$/8.4%), LD ($11.5{\sigma}$/20.07%), K ($2.5{\sigma}$/0.39 mmol/L), Na ($3.6{\sigma}$/6.87 mmol/L), CRE ($9.9{\sigma}$/21.8%), BUN ($4.3{\sigma}$/13.3%), UA ($5.9{\sigma}$/11.5%), T.C ($2.2{\sigma}$/10.7%), GLU ($4.8{\sigma}$/10.2%), GGT ($7.5{\sigma}$/27.3%), CA ($5.5{\sigma}$/0.87 mmol/L), IP ($8.5{\sigma}$/13.17%), and TG ($9.6{\sigma}$/17.7%). Items whose peer-group survey median CV in the Korean External Assessment exceeded the CLIA criteria were CL (8.45%/5%), BUN (13.3%/9%), CRE (21.8%/15%), T.B (25.6%/20%), and Na (6.87 mmol/L/4 mmol/L); items below the CLIA criteria were TP (9.3%/10%), AST (16.8%/20%), ALT (19.3%/20%), K (0.39 mmol/L/0.5 mmol/L), UA (11.5%/17%), Ca (0.87 mg/dL/1 mg/dL), and TG (17.7%/25%). Of 17 items, the TEa values matched the CLIA values in 14 (82.35%). We found that sigma levels increase as the allowable total error is widened, and conclude that the choice of allowable total error affects the sigma-metric evaluation of a process even when the process itself is unchanged.
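
The sigma metrics tabulated above combine TEa, bias, and CV. A minimal sketch using the standard sigma-metric formula (sigma = (TEa - |bias|)/CV, all in percent) together with the stated Six Sigma precision and accuracy goals, with illustrative numbers rather than the study's survey data:

```python
# Sigma-metric sketch for a clinical chemistry analyte, using the
# standard formula sigma = (TEa - |bias|) / CV with TEa, bias and CV
# all expressed in percent. The numbers below are illustrative, not
# the peer-group survey values reported in the study.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma level of an assay from allowable total error, bias and CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def six_sigma_goals(tea_pct):
    """Precision goal (TEa/6) and accuracy goal (1.5 * TEa/6)."""
    precision = tea_pct / 6
    accuracy = 1.5 * precision
    return precision, accuracy

sigma = sigma_metric(tea_pct=10.0, bias_pct=1.0, cv_pct=1.5)  # 6.0
```

A method whose sigma comes out below 6 under this formula does not fit within the six sigma tolerance limits, which is how the per-item sigma levels above can be read against their TEa values.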


The Method of Selecting Landscape Control Points for Landscape Impact Review of Development Projects (개발사업의 경관영향 검토를 위한 주요 조망점 선정 방법에 관한 연구)

  • Shin, Ji-Hoon;Shin, Min-Ji;Choi, Won-Bin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.1
    • /
    • pp.143-155
    • /
    • 2018
  • The Natural Landscape Rating System was introduced in the 2006 amendment of the NATURAL ENVIRONMENT CONSERVATION ACT. For landscape preservation, the system aims to consider the effects of development projects or plans implemented in a natural landscape on skylines, scenic resources, and view corridors. Currently, a lack of consistency in the standards for determining Landscape Control Points (LCPs) to assess landscape impact lowers the accuracy and reliability of assessment results. As the perception of, and the impact on, a landscape vary with the location of the LCP, it is necessary to establish a reasonable set of criteria for selecting viewpoints and to avoid unreliability in the assessment caused by unclear criteria. The intent of this study is to propose an objective and reasonable set of criteria for LCP selection to effectively measure the landscape impact of development projects that anticipate a change in the landscape and, ultimately, to suggest basic analysis methods for assessing the landscape impact of development projects and monitoring the landscape in the future. Among the development projects affecting natural landscapes, as reported in environmental impact assessment statements, cases of construction of a single building or other small-scale development projects were studied. Four spot development projects were analyzed in depth for their landscape impacts in order to make recommendations for the LCP selection procedure, which aims to widen the scope of selection according to the direction of viewpoints from the target site. Existing LCP-based analyses have limitations because they fail to cover the viewshed of the target buildings when there are topographical changes in the surroundings. As a solution to this problem, a new viewshed analysis method is proposed that focuses on the development site and target buildings, rather than on the viewpoints used in past analyses.

A Study on the Availability of Spatial and Statistical Data for Assessing CO2 Absorption Rate in Forests - A Case Study on Ansan-si - (산림의 CO2 흡수량 평가를 위한 통계 및 공간자료의 활용성 검토 - 안산시를 대상으로 -)

  • Kim, Sunghoon;Kim, Ilkwon;Jun, Baysok;Kwon, Hyuksoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.2
    • /
    • pp.124-138
    • /
    • 2018
  • This research was conducted to examine the availability of spatial data for assessing absorption rates of $CO_2$ in the forests of Ansan-si and to evaluate the validity of methods that analyze $CO_2$ absorption. To statistically assess annual $CO_2$ absorption rates, the 1:5,000 Digital Forest-Map (Lim5000) and Standard Carbon Removal of Major Forest Species (SCRMF) methods were employed. Furthermore, the Land Cover Map (LCM) was also used to verify the annual $CO_2$ absorption estimates. Great variations in $CO_2$ absorption rates occurred before and after the year 2010. This was due to improvements in the precision and accuracy of the Forest Basic Statistics (FBS) in 2010, which resulted in a rapid increase in growing stock. Thus, calibration of data prior to 2010 is necessary, based on recent FBS standards. Previous studies that employed Lim5000 and FBS (2015, 2010) did not take into account the $CO_2$ absorption rates of different tree species; the combination of SCRMF and Lim5000 resulted in $CO_2$ absorption of 42,369 tons, whereas the combination of SCRMF and LCM resulted in 40,696 tons. Homoscedasticity tests for Lim5000 and LCM yielded a p-value < 0.01, with a difference in $CO_2$ absorption of 1,673 tons. Given that $CO_2$ absorption in forests is an important factor in reducing greenhouse gas emissions, the findings of this study should provide fundamental information for supporting a wide range of decision-making processes for land use and management.
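
The map-based estimation the study compares (forest-map polygons multiplied by per-species standard removal factors and summed) can be sketched as follows; the species names and removal factors here are hypothetical, not the SCRMF values used in the study:

```python
# Minimal sketch of a map-based CO2 absorption estimate: overlay a
# forest-type map on per-species standard removal factors (tCO2/ha/yr)
# and sum area * factor over polygons. Factors are assumed values,
# not SCRMF coefficients.

REMOVAL_FACTOR = {"pine": 7.0, "oak": 9.0}  # tCO2 per ha per year (assumed)

def total_absorption(polygons):
    """polygons: iterable of (species, area_ha) from a forest-type map."""
    return sum(area * REMOVAL_FACTOR[sp] for sp, area in polygons)

forest = [("pine", 1200.0), ("oak", 800.0)]
print(total_absorption(forest))  # 1200*7 + 800*9 = 15600.0
```

Comparing two such totals built from different base maps (e.g. Lim5000 vs LCM polygons) mirrors the 42,369 vs 40,696 ton comparison reported above.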

Establishment of Biotin Analysis by LC-MS/MS Method in Infant Milk Formulas (LC-MS/MS를 이용한 조제유류 중 비오틴 함량 분석법 연구)

  • Shin, Yong Woon;Lee, Hwa Jung;Ham, Hyeon Suk;Shin, Sung Cheol;Kang, Yoon Jung;Hwang, Kyung Mi;Kwon, Yong Kwan;Seo, Il Won;Oh, Jae Myoung;Koo, Yong Eui
    • Journal of Food Hygiene and Safety
    • /
    • v.31 no.5
    • /
    • pp.327-334
    • /
    • 2016
  • This study was conducted to establish a standard method for determining biotin content in milk formulas. To optimize the method, we compared several conditions for liquid extraction, purification, and instrumental measurement, using spiked samples and a certified reference material (NIST SRM 1849a) as test materials. The LC-MS/MS method for biotin was established using a $C_{18}$ column with a binary gradient of 0.1% formic acid/acetonitrile and 0.1% formic acid/water as the mobile phase. Product-ion traces at m/z 245.1 ${\rightarrow}$ 227.1 and 166.1 were used for quantitative analysis of biotin. Linearity was over $R^2=0.999$ in the range of 5~60 ng/mL. For purification, chloroform was used as the solvent for eliminating lipids in milk formula. The detection limit and quantification limit were 0.10 and 0.31 ng/mL, respectively. The accuracy and precision of the LC-MS/MS method, assessed with the CRM, were 103% and 2.5%, respectively. The optimized method was applied to sample analysis to verify its reliability. All tested milk formulas had acceptable biotin contents compared with the component specifications and nutrition-labeling standards. Standard operating procedures were prepared for biotin to provide experimental information and to strengthen the management of nutrients in milk formulas.
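
The abstract's detection and quantification limits came from the authors' validation; one common way such limits are derived, per the widely used convention LOD = 3.3·σ/S and LOQ = 10·σ/S (calibration slope S, residual standard deviation σ), can be sketched as follows, with illustrative calibration points rather than the study's data:

```python
# LOD/LOQ sketch using the common ICH-style approach: fit the
# calibration line, then LOD = 3.3*sigma/S and LOQ = 10*sigma/S,
# where S is the slope and sigma the residual standard deviation.
# The calibration points below are illustrative.

def fit_line(xs, ys):
    """Least-squares slope, intercept and residual SD for a calibration."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    resid_sd = (sum((y - (slope * x + intercept)) ** 2
                    for x, y in zip(xs, ys)) / (n - 2)) ** 0.5
    return slope, intercept, resid_sd

conc = [5, 10, 20, 40, 60]        # standards in ng/mL, as in the abstract
area = [49, 101, 199, 402, 599]   # detector response (illustrative)
slope, intercept, sd = fit_line(conc, area)
lod, loq = 3.3 * sd / slope, 10 * sd / slope
```

With real data the fit would also supply the $R^2$ figure quoted for linearity.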

Development of relative radiometric calibration system for in-situ measurement spectroradiometers (현장관측용 분광 광도계의 상대 검교정 시스템 개발)

  • Oh, Eunsong;Ahn, Ki-Beom;Kang, Hyukmo;Cho, Seong-Ick;Park, Young-Je
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.455-464
    • /
    • 2014
  • After the launch of the Geostationary Ocean Color Imager (GOCI) in June 2010, field campaigns were performed routinely around the Korean peninsula to collect in-situ data for calibration and validation. Key measurements in the campaigns are radiometric ones made with field radiometers such as the Analytical Spectral Devices FieldSpec3 or TriOS RAMSES, and these field radiometers must be regularly calibrated. In this paper, we introduce the optical laboratory built at KOSC and the relative calibration method for in-situ measurement spectroradiometers. The laboratory is equipped with a 20-inch integrating sphere (USS-2000S, LabSphere) with 98% uniformity, a reference spectrometer (MCPD9800, Photal) covering wavelengths from 360 nm to 1100 nm with 1.6 nm spectral resolution, and an optical table ($3600{\times}1500{\times}800mm^3$) with a flatness of ${\pm}0.1mm$. With constant temperature and humidity maintained in the room, the reference spectrometer and the in-situ measurement instrument are checked with the same light source at the same distance. From the test of the FieldSpec3, we found a slight difference among in-situ instruments in the blue band range, and also confirmed that the sensor's spectral performance changed by about 4.41% over one year. These results show that regular calibrations are needed to maintain field measurement accuracy and thus GOCI data reliability.
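
The relative-calibration procedure described (reference spectrometer and field instrument viewing the same source at the same distance) reduces to a per-wavelength ratio, and year-to-year drift to the percent change of that ratio. A minimal sketch with illustrative readings, not KOSC measurements:

```python
# Relative-calibration sketch: both instruments view the same
# integrating-sphere source, and the per-wavelength ratio of their
# readings becomes the correction factor for the field instrument.
# Readings and wavelengths below are illustrative.

def relative_coeffs(reference, instrument):
    """Per-wavelength ratio reference/instrument for the same source."""
    return {wl: reference[wl] / instrument[wl] for wl in reference}

def drift_percent(coeff_old, coeff_new, wl):
    """Percent change of the calibration coefficient at one wavelength."""
    return 100.0 * (coeff_new[wl] - coeff_old[wl]) / coeff_old[wl]

ref = {440: 1.00, 550: 1.20}    # reference radiance (arbitrary units)
inst = {440: 0.95, 550: 1.23}   # field instrument readings
coeffs = relative_coeffs(ref, inst)
drift = drift_percent({440: 1.00, 550: 1.00}, coeffs, 440)
```

Tracking `drift` per band over repeated sessions is what reveals a change like the ~4.41% reported above.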

Current Wheat Quality Criteria and Inspection Systems of Major Wheat Producing Countries (밀 품질평가 현황과 검사제도)

  • 이춘기;남중현;강문석;구본철;김재철;박광근;박문웅;김용호
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.47
    • /
    • pp.63-94
    • /
    • 2002
  • To suggest an improved scheme for assessing domestic wheat quality, this paper reviews the inspection systems of major wheat-producing countries as well as the quality criteria used in wheat grading and classification. Most wheat-producing countries adopt both class and grade classifications to provide an objective evaluation and an official certification of their wheat. Wheat classification serves two main purposes: first, to match the wheat with market requirements, maximizing market opportunities and returns to growers; second, to ensure that payments to growers are made on the basis of the quality and condition of the grain delivered. Wheat classes are assigned based on combinations of cultivation area, seed-coat color, and distinctive kernel and varietal characteristics. Most reputable wheat marketers employ a similar approach, whereby varieties of a particular type are grouped together, designated by seed-coat colour, grain hardness, physical dough properties, and sometimes more precise specifications such as starch quality, all of which are genetically inherited characteristics. In simple terms, this classification is the categorization of a wheat variety into a commercial type or style of wheat that is recognizable for its end-use capabilities. All varieties registered in a class are required to have similar end-use performance so that shipments are consistent in processing quality, cargo to cargo and year to year. Grain inspectors have historically determined wheat classes according to visual kernel characteristics associated with traditional wheat varieties. In addition, any new wheat variety must not conflict with the visual distinguishability rule used to separate wheats of different classes. Some varieties may possess characteristics of two or more classes.
Therefore, knowledge of distinct varietal characteristics is necessary in making class determinations. The grading system sets maximum tolerance levels for a range of characteristics that ensure functionality and freedom from deleterious factors. Tests for the grading of wheat include factors such as plumpness, soundness, cleanliness, purity of type, and general condition. Plumpness is measured by test weight. Soundness is indicated by the absence or presence of musty, sour, or commercially objectionable foreign odors and by the percentage of damaged kernels present in the wheat. Cleanliness is measured by determining the presence of foreign material after dockage has been removed. Purity of class is measured by classification of the wheats in the test sample and by limitations on admixtures of different classes of wheat. Moisture does not influence the numerical grade; however, it is determined on all shipments and reported on the official certificate. U.S. wheat is divided into eight classes based on color, kernel hardness, and varietal characteristics: Durum, Hard Red Spring, Hard Red Winter, Soft Red Winter, Hard White, Soft White, Unclassed, and Mixed. Among them, Hard Red Spring, Durum, and Soft White wheat are each further divided into three subclasses. Each class or subclass is divided into five U.S. numerical grades and U.S. Sample grade. Special grades are provided to emphasize special qualities or conditions affecting the value of wheat and are added to and made a part of the grade designation. Canadian wheat is divided into fourteen classes based on cultivation area, color, kernel hardness, and varietal characteristics. The classes have two to five numerical grades, a feed grade, and sample grades, depending on class and grading tolerance. The Canadian grading system is based mainly on visual evaluation and works on the kernel visual distinguishability concept.
Australian wheat is classified based on geographical and quality differentiation. The wheat grown in Australia is predominantly white grained. There are commonly up to 20 different segregations of wheat in a given season. Each variety grown is assigned a category and a growing area. The state governments in Australia, in cooperation with the Australian Wheat Board (AWB), issue receival standards and dockage schedules annually that list grade specifications and tolerances for Australian wheat. The AWB manages "Golden Rewards", which is designed to provide pricing accuracy and market signals for Australia's grain growers. Under Golden Rewards, continuous payment scales are provided for protein content from 6 to 16% and screenings levels from 0 to 10% based on varietal classification, and the payment scales and prices can change with market movements.
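
The grade-assignment logic described above (a sample receives the best numerical grade whose tolerances it meets on every factor, otherwise Sample grade) can be sketched as follows; the thresholds are hypothetical, not actual U.S. grade limits:

```python
# Grading sketch: assign the best numerical grade whose tolerances the
# sample meets on every factor. Thresholds below are invented for
# illustration, not real U.S. wheat grade limits.

GRADES = [  # (grade, min test weight kg/hL, max damaged %, max foreign %)
    ("No.1", 79.0, 2.0, 0.4),
    ("No.2", 77.0, 4.0, 0.7),
    ("No.3", 74.0, 7.0, 1.3),
]

def assign_grade(test_weight, damaged_pct, foreign_pct):
    for grade, tw_min, dmg_max, fm_max in GRADES:
        if (test_weight >= tw_min
                and damaged_pct <= dmg_max
                and foreign_pct <= fm_max):
            return grade
    return "Sample grade"

print(assign_grade(78.0, 3.0, 0.5))  # "No.2"
```

Plumpness (test weight), soundness (damaged kernels), and cleanliness (foreign material) map directly onto the three factors checked here.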

Blood Pressure Cuff Bladders Tailored For Koreans (한국인 맞춤형 혈압계 커프 블래더)

  • Hwang, Lark Hoon;Park, Woo Sung;Na, Seung Kwon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38C no.9
    • /
    • pp.822-829
    • /
    • 2013
  • Hypertension is one of the most common clinical diseases, with an increasing prevalence globally. Hypertension triggers various harmful consequences and affects multiple organs, and life-long care may be required in some cases. According to the Korea Center for Disease Control and Prevention, the prevalence of hypertension is gradually increasing: a 2011 survey revealed that 28.9% of Korean adults had hypertension, with slightly higher rates among men than women. Accurate measurement of blood pressure (BP) is crucial to classify patients, identify BP-related risks, and inform correct treatment. For accurate blood pressure measurement, the use of a cuff bladder size appropriate for the mid-upper arm circumference (MUAC) is essential. An incorrectly sized cuff bladder is one of the main causes of equipment error affecting sphygmomanometer accuracy. When commercial sphygmomanometers were examined, the cuff bladders differed from the dimensions specified in the ISO 81060-1:2007 standards. Undercuffing is responsible for a spurious overestimation of BP in patients with large arms, leading to overdiagnosis of hypertension, whereas overcuffing (that is, the use of relatively large cuffs on small arms) may cause the opposite problem, leading to erroneous underestimation of BP levels. The cuff bladder sizes recommended by the American Heart Association (AHA) are an arm circumference (AC) of 17-25 cm for small-sized adults, 24-32 cm for adults, 32-42 cm for normal-sized adults, and 42-50 cm for obese adults. In contrast, the AC of Korean adults ranges from 23-31 cm, corresponding to a single adult bladder type; three types of bladders are necessary for Korean adults with an AC of 23-31 cm. Hospitals often use one or two differently sized Western cuffs for adult patients, which can yield inaccurate BP determinations. Cuff bladders with dimensions based on anthropometric reference data obtained from Koreans will help hospitals measure BP more accurately.
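
Selecting a bladder from a measured MUAC, using the AHA ranges quoted above, can be sketched as a simple range lookup; how to break ties where adjacent ranges overlap (e.g. 24-25 cm) is a design choice, resolved here by taking the first matching size:

```python
# Cuff-selection sketch: pick the bladder whose arm-circumference
# range covers the measured MUAC. Ranges follow the AHA sizes quoted
# in the abstract; the first matching range wins where they overlap.

CUFFS = [  # (label, AC range in cm)
    ("small adult", 17, 25),
    ("adult", 24, 32),
    ("normal adult", 32, 42),
    ("obese adult", 42, 50),
]

def select_cuff(muac_cm):
    for label, lo, hi in CUFFS:
        if lo <= muac_cm <= hi:
            return label
    return None  # no listed bladder covers this arm circumference

print(select_cuff(28))  # "adult"
```

A Korean-tailored table would replace `CUFFS` with ranges derived from Korean anthropometric reference data, which is the change the abstract argues for.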

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. The use of object-oriented (OO) programming concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language (UML) has become a de-facto standard for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML necessarily gathers the attributes and methods that guide software engineers. In particular, identifying an association between a part class and a whole class is included in the standard grammar of UML. The representation of part-whole relationships is natural in real-world domains, since many physical objects are perceived in part-whole terms; even abstract concepts such as roles are easily identified by part-whole perception. A representation of part-whole in UML thus seems reasonable and useful. However, it must be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregate or a composite association. Research efforts to develop such procedural knowledge are meaningful and timely, in that misperceptions of part-whole relationships are hard to filter out in initial conceptual modeling, leading to deteriorated system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expression.
This simple approach is rooted in the idea that a phrase representing has-a constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composite association; otherwise, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not accumulated sufficient results to resolve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with the natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the process of modeling. Traditional theories on the evaluation of part-whole relationships in conceptual modeling aim to rule out incomplete or wrong representations.
Such qualification is still important; however, the lack of a practical alternative may reduce the appropriateness of posterior inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
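
The aggregation/composition distinction the study classifies can be illustrated in code with an invented example: in composition, the whole creates and owns its parts, so their lifetimes coincide; in aggregation, the whole merely references parts that exist independently of it.

```python
# Invented illustration of UML composition vs aggregation in code.
# Class names are hypothetical, chosen only to make the lifetime
# difference between the two part-whole associations visible.

class Engine:   # part
    pass

class Wheel:    # part
    pass

class Car:      # whole
    def __init__(self, wheels):
        # Composition: Car creates and owns its Engine; the Engine's
        # lifetime is bound to the Car's.
        self.engine = Engine()
        # Aggregation: the Wheels are created elsewhere and merely
        # referenced; they outlive any particular Car.
        self.wheels = wheels

wheels = [Wheel() for _ in range(4)]
car = Car(wheels)
```

A linguistic has-a reading ("a car has an engine", "a car has wheels") cannot distinguish these two cases, which is the weakness of the natural-language scheme that the Meta-model Formalization guidelines address.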

Lung cancer, chronic obstructive pulmonary disease and air pollution (대기오염에 의한 폐암 및 만성폐색성호흡기질환 -개인 흡연력을 보정한 만성건강영향평가-)

  • Sung, Joo-Hon;Cho, Soo-Hun;Kang, Dae-Hee;Yoo, Keun-Young
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.3 s.58
    • /
    • pp.585-598
    • /
    • 1997
  • Background: Although there are growing concerns about the adverse health effects of air pollution, not much evidence on the health effects of current air pollution levels has yet been accumulated in Korea. This study was designed to evaluate the chronic health effects of air pollution using Korean Medical Insurance Corporation (KMIC) data and air quality data. Medical insurance data in Korea have some drawbacks in accuracy, but they also have strengths, especially their national coverage, unified ID system, and individual information, which enable various data linkages and chronic health effect studies. Methods: This study utilized data from the Korean Environmental Surveillance System Study (Surveillance Study), which covers asthma, acute bronchitis, chronic obstructive pulmonary disease (COPD), cardiovascular diseases (congestive heart failure and ischemic heart disease), all cancers, accidents, and congenital anomalies, i.e., mainly potential environmental diseases. We reconstructed a nested case-control study with Surveillance Study data and air pollution data in Korea. Among 1,037,210 insured persons who completed a questionnaire and physical examination in 1992, disease-free persons (for chronic respiratory disease and cancer) aged 35-64 with smoking status information were selected to reconstruct a cohort of 564,991 persons. The cohort was followed up to 1995 (1992-5), and the subjects who had the diseases in the Surveillance Study were selected. Finally, the patients with address information and available air pollution data constituted the 'final subjects'. Cases were defined as all lung cancer cases (424) and COPD admission cases (89), while controls were defined as all other patients among the 'final subjects'. That is, cases are putative chronic environmental diseases, while controls are mainly acute environmental diseases.
For exposure, air quality data from 73 monitoring sites between 1991 and 1993 were analyzed as surrogates for the air pollution exposure levels of the corresponding areas (58 areas). Data for five major air pollutants, TSP, $O_3,\;SO_2$, CO, and NOx, were available, and area means were applied to the residents of each local area. Three-year arithmetic mean values and the counts of days violating both long-term and short-term standards during the period were used as exposure indices. A multiple logistic regression model was applied, with all analyses adjusted for current and past smoking history, age, and gender. Results: Plain arithmetic means of pollutant levels did not reveal any relation to the risk of lung cancer or COPD, while the cumulative counts of non-attainment days did. All pollutant indices failed to show significant positive findings for COPD excess. Lung cancer risks were significantly and consistently associated with increases in $O_3$ and CO exceedance counts (at a corrected error level of 0.017), and less strongly and consistently with $SO_2$ and TSP. $O_3$ and CO were estimated to increase the risk of lung cancer by 2.04 and 1.46, respectively, the maximal probable risks, derived from comparing a more polluted area (95th percentile) with a cleaner area (5th percentile). Conclusions: Although not decisive, due to potential misclassification of exposure, these results were drawn by relatively conservative interpretation and could be used as evidence of a chronic health effect, especially for lung cancer. $O_3$ may be a candidate promoter of lung cancer, while CO should be considered a surrogate measure of motor vehicle emissions. The control selection in this study may have been less appropriate for COPD, and further evaluation in another setting may be necessary.
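
The effect estimate behind a case-control contrast of this kind is, in its simplest form, an odds ratio from a 2x2 exposure table. The counts below are invented for illustration; the study's actual analysis used multiple logistic regression adjusted for smoking history, age, and gender.

```python
# Odds-ratio sketch for a case-control contrast between a "more
# polluted" and a "cleaner" area. Counts are invented; the study's
# real estimates came from adjusted multiple logistic regression.

def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Cross-product odds ratio from a 2x2 exposure table."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

or_lung = odds_ratio(60, 40, 300, 400)  # (60/40)/(300/400) = 2.0
```

A risk estimate like the reported 2.04 for $O_3$ is interpreted the same way: the odds of lung cancer in the most-polluted areas relative to the cleanest, after adjustment.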


Simultaneous Determination of Aminoglycoside Antibiotics in Meat using Liquid Chromatography Tandem Mass Spectrometry (LC-MS/MS를 이용한 육류 중 아미노글리코사이드계 항생제 9종의 동시분석 및 적용성 검증)

  • Cho, Yoon-Jae;Choi, Sun-Ju;Kim, Myeong-Ae;Kim, MeeKyung;Yoon, Su-Jin;Chang, Moon-Ik;Lee, Sang-Mok;Kim, Hee-Jeong;Jeong, Jiyoon;Rhee, Gyu-Seek;Lee, Sang-Jae
    • Journal of Food Hygiene and Safety
    • /
    • v.29 no.2
    • /
    • pp.123-130
    • /
    • 2014
  • A simultaneous determination method was developed for 9 aminoglycoside antibiotics (amikacin, apramycin, dihydrostreptomycin, gentamicin, hygromycin B, kanamycin, neomycin, spectinomycin, and streptomycin) in meat by liquid chromatography tandem mass spectrometry (LC-MS/MS). Each parameter was established by multiple reaction monitoring in positive ion mode. The developed method was validated for specificity, linearity, accuracy, and precision based on the CODEX validation guideline. Linearity was over 0.98 for the calibration curves of the mixed standards. Recoveries of the 9 aminoglycosides ranged from 60.5 to 114% for beef, 60.1 to 112% for pork, and 63.8 to 131% for chicken. The limits of detection (LOD) and quantification (LOQ) were 0.001~0.009 mg/kg and 0.006~0.03 mg/kg, respectively, in livestock products including beef, pork, and chicken. This study also surveyed residual aminoglycoside antibiotics in 193 samples of beef, pork, and chicken collected from 9 cities in Korea. Aminoglycosides were not found in any of the samples.
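
The recovery figures quoted above come from spiked-sample experiments; the underlying calculation, percent recovery with an acceptance check, can be sketched as follows. The acceptance band used here (60-120%) is illustrative only, not the CODEX criterion for every analyte and concentration level.

```python
# Recovery-validation sketch: percent recovery of a spiked sample and
# an acceptance check. The 60-120% band is an assumed placeholder,
# not the guideline's actual per-level criterion.

def recovery_pct(measured, spiked):
    """Percent of the spiked concentration recovered by the method."""
    return 100.0 * measured / spiked

def acceptable(rec, lo=60.0, hi=120.0):
    """True if the recovery falls inside the acceptance band."""
    return lo <= rec <= hi

rec = recovery_pct(measured=0.0091, spiked=0.010)  # mg/kg; 91.0%
```

Running this over each analyte and matrix (beef, pork, chicken) at each spiking level reproduces the kind of per-matrix recovery ranges reported above.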