

A Comparison Study of Alkalinity and Total Carbon Measurements in $CO_2$-rich Water (탄산수의 알칼리도 및 총 탄소 측정방법 비교 연구)

  • Jo, Min-Ki;Chae, Gi-Tak;Koh, Dong-Chan;Yu, Yong-Jae;Choi, Byoung-Young
    • Journal of Soil and Groundwater Environment / v.14 no.3 / pp.1-13 / 2009
  • Alkalinity and total carbon were measured by acid neutralizing titration (ANT), back titration (BT), gravitational weighing (GW), and non-dispersive infrared total carbon (NDIR-TC) methods to assess the precision and accuracy of alkalinity and total carbon determinations in $CO_2$-rich water. Artificial $CO_2$-rich water (ACW: pH 6.3, alkalinity 68.8 meq/L, $HCO_3^-$ 2,235 mg/L) was used to compare the methods. When alkalinity was measured at 0 hr, the percent errors of all methods were 0~12% and the coefficients of variation were less than 4%. Post-hoc analysis following repeated-measures analysis of variance (RM-ANOVA) showed that the differences between pairs of methods were not significant (at the 95% confidence level), indicating that alkalinity measured by any of the methods can be accurate and precise when it is measured immediately at the time of sampling. In addition, alkalinity measured by ANT and NDIR-TC did not change after 24 and 48 hours of exposure to the atmosphere, which can be explained by the conservative nature of alkalinity even as $CO_2$ degasses from the ACW. On the other hand, alkalinity measured by BT and GW increased after 24 and 48 hours of exposure, caused by the relatively high measured total carbon concentration and the increasing pH. A comparison between geochemical modeling of $CO_2$ degassing and the observed data showed that the observed pH of the ACW was higher than the calculated pH. This can happen when degassed $CO_2$ does not escape from the solution and/or remains in solution as $CO_{2(g)}$ bubbles. In that case, the $CO_{2(g)}$ bubbles do not affect pH or alkalinity. Thus, although the alkalinity measured by ANT and NDIR-TC was similar to the calculated alkalinity, these methods could not detect the $CO_2$ bubbles, and the total carbon measured by ANT and NDIR-TC may be underestimated. Consequently, it is necessary to compare alkalinity and total carbon data obtained by several kinds of methods and to interpret them carefully. This study provides technical information on the measurement of dissolved $CO_2$ in $CO_2$-rich water, which can serve as a natural analogue for geologic sequestration of $CO_2$.
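The accuracy and precision criteria used above (percent error against the known ACW alkalinity, and coefficient of variation across replicates) can be sketched in a few lines of Python. The replicate values below are illustrative only, not the study's data:

```python
# Sketch of the accuracy/precision metrics used to compare the methods.
# KNOWN_ALKALINITY comes from the ACW recipe; the replicates are invented.
from statistics import mean, stdev

KNOWN_ALKALINITY = 68.8  # meq/L, artificial CO2-rich water (ACW)

def percent_error(measured_mean, known=KNOWN_ALKALINITY):
    """Accuracy: deviation of the measured mean from the known value, in %."""
    return abs(measured_mean - known) / known * 100.0

def coefficient_of_variation(values):
    """Precision: sample standard deviation relative to the mean, in %."""
    return stdev(values) / mean(values) * 100.0

replicates = [68.1, 69.0, 68.5, 67.9]   # illustrative ANT replicates
acc = percent_error(mean(replicates))   # within the reported 0~12% band
cv = coefficient_of_variation(replicates)  # within the reported <4% band
```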

Environmental Impact Assessment and Evaluation of Environmental Risks (환경영향평가와 환경위험의 평가)

  • Niemeyer, Adelbert
    • Journal of Environmental Impact Assessment / v.4 no.3 / pp.41-48 / 1995
  • In former times the protection of our environment did not play an important role, because emissions and effluents were not considered serious impacts. Meanwhile, however, opinion and scientific measurements have confirmed that the impacts are more serious than expected, so measures to protect our earth have to be taken into consideration. One of these measures is the Environmental Impact Assessment (EIA). One of the most important parts of the EIA is the collection of basic data and the subsequent evaluation. Experience from the daily business of the Gerling Consulting Group shows that the content of the EIA has to be revised and enlarged in certain fields. Historical development has demonstrated that in areas where population and industrial activity reach high concentrations, there is a strong need for strict environmental laws and regulations. Maximum concentrations of hazardous materials were fixed for emissions into air and water, and companies not following these regulations were punished. Nevertheless, the total number of environmental offences increased rapidly during the last decade, at least in Germany. During this development, public consciousness of environmental affairs also increased in the industrialized countries. But it could clearly be seen that development in the field of environmental protection went in the wrong direction. The technologies to protect the environment became more and more sophisticated, and terms such as "state of the art" led to ever lower emissions; filter technologies and wastewater treatment, for example, reached a high technical level. But all these sophisticated technologies share one characteristic: they are end-of-the-pipe solutions. A second effect is that this kind of environmental protection costs a lot of money: high investments are necessary to reduce dust emissions by another ppm. Can this be the correct way?
In Germany, a discussion started over whether environmental laws reduce the attractiveness of investing, or enlarging existing investments, within the country. Other countries seem to be less strict in enforcing their environmental laws, which means it is simply cheaper to produce in Portugal or Greece. Everybody knows, however, that this is not the correct way and does not solve the environmental problems. Meanwhile the general picture is changing a little, and we think it is changing in the correct direction. End-of-the-pipe solutions are still necessary, but the term has acquired a truly negative connotation, and nobody wants to be associated with it, especially in connection with environmental management and safety. Modern environmental management starts differently: thoughts about emissions begin at the very start of production, with the design of the product and the modification of traditional modes of production. The basis of these ideas is detailed analyses of products and processes. Because public environmental consciousness has changed dramatically, continuous environmental improvement of each single production plant has to be guaranteed. This is already an important question in the EIA, but it was never really examined in a holistic approach. Environmental risks have to be taken into consideration during the execution of an EIA; this means that environmental risks have to be reduced to a bearable level. They have to be considered during planning, during the operation of a plant, and after shutdown. Experience shows that most environmentally relevant accidents were caused by human fault. Even in highly protected plants, the human risk factor cannot be excluded when evaluating the risk potential.
Thus the approach of an EIA has to include technical evaluations as well as organizational considerations and the human factor. An environmental risk is a threat to the environment, yet an analysis of the risk concerning its organizational and human aspects has never been properly executed during an EIA. A possible solution could be to use an instrument such as the current EMAS (Eco-Management and Audit Scheme) of the EC for a more accurate evaluation of the environmental impact during an EIA. Organizations or investors could demonstrate through an approved EMAS, or even by showing their implementation of EMAS, that not only does the technical level of the planned investment meet the required standards, but the actual or planned management is also able to reduce the environmental impact to a bearable level.


Changes in Anthocyanin Content of Aronia (Aronia melanocarpa) by Processing Conditions (물리적 처리조건 변화에 따른 아로니아(Aronia melanocarpa) 유래 안토시아닌 함량변화 특성)

  • Kim, Bo Mi;Lee, Kyung Min;Jung, In Chan
    • Korean Journal of Plant Resources / v.30 no.2 / pp.152-159 / 2017
  • The purpose of this study was to obtain basic data for using Aronia as a functional food material. The anthocyanin composition was characterized and quantified by LC-MS/MS, HPLC, and UV-VIS spectrophotometry, respectively. Anthocyanin content was analyzed as a function of temperature, time, pH, and the addition of citric acid. The UV-VIS spectrophotometry used for anthocyanin analysis is less accurate than the LC-MS/MS method used in recent years. In the past, cyanidin-3-glucoside was reported to be the major anthocyanin contained in Aronia; however, the LC-MS/MS analysis in this study confirmed cyanidin-3-galactoside to be the major compound. The anthocyanin content of Aronia powder began to decrease sharply at temperatures of $65^{\circ}C$ or higher when heated for 24 hours. In an aqueous solution of Aronia, the anthocyanin content was reduced by 50% at $65^{\circ}C$ over 10 hours and by 85% at $85^{\circ}C$ within 10 hours. Above pH 8, the anthocyanin content was reduced by more than 50%. The results of this study will provide useful information for maintaining anthocyanin content during the manufacturing of Aronia products, and could also be used to ensure the stability of anthocyanins in similar berry species.
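If one assumes simple first-order decay (an assumption of this sketch, not a claim made in the abstract), the reported losses in aqueous solution (50% at $65^{\circ}C$ and 85% at $85^{\circ}C$, both within 10 hours) imply degradation rate constants that can be computed as follows:

```python
# Hedged sketch: first-order degradation kinetics C(t) = C0 * exp(-k*t),
# fitted to the two single-point losses reported in the abstract.
import math

def first_order_k(fraction_remaining, hours):
    """Rate constant k (1/h) implied by a remaining fraction after t hours."""
    return -math.log(fraction_remaining) / hours

def half_life(k):
    """Half-life (h) for a first-order process."""
    return math.log(2) / k

k65 = first_order_k(0.50, 10)  # 50% remaining after 10 h at 65 C -> ~0.069/h
k85 = first_order_k(0.15, 10)  # 15% remaining after 10 h at 85 C -> ~0.190/h
```

Under this assumption the rate constant roughly triples between $65^{\circ}C$ and $85^{\circ}C$, consistent with the sharp temperature sensitivity the study reports.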

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.93-110 / 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information-filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains, such as recommending Web pages, books, movies, music, and products. However, CF has a well-known critical shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those neighbors have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, CF forms neighborhoods inaccurately, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile, created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information. Many companies now consider the utilization of big data very important because it improves their competitiveness and creates new value. In particular, utilizing personal big data in recommender systems is a rising issue, because personal big data facilitate more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp users' preferences and behavior from various viewpoints: we derive five user profiles based on personal information such as ratings, site preferences, demographics, Internet usage, and topics in text.
Next, the similarity between users is calculated based on the profiles, and each user's neighbors are found from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of the combined profile, the average of the per-profile similarities, or the weighted average of the per-profile similarities. Finally, the products that people in the neighborhood most prefer are recommended to the target users. For the experiments, we used the demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing the rankings of Web sites. R and SAS E-Miner were used to implement the proposed recommender system and to conduct the topic analysis using keyword search, respectively. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing; 5-fold cross validation was also conducted to enhance the reliability of our experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm. In particular, the ensemble approach using weighted average similarity showed the highest performance: the improvement in F1 was 16.9 percent for the ensemble using weighted average similarity and 8.1 percent for the ensemble using the average similarity of each profile. From these results we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment, and how they can be combined and utilized effectively.
However, our methodology should be studied further before real-world application. We need to compare the differences in recommendation accuracy when the proposed method is applied to different recommendation algorithms, and then identify which combination shows the best performance.
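The three ensemble schemes described above can be sketched as follows. The profile vectors and weights are illustrative placeholders; the study derives five real profiles (ratings, site preference, demographics, Internet usage, topics in text) and does not specify its similarity measure, so cosine similarity is assumed here:

```python
# Sketch of the three profile-ensemble similarity schemes, assuming
# cosine similarity per profile (the abstract does not name the measure).
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def combined_profile_similarity(profiles_u, profiles_v):
    """Scheme 1: concatenate all profiles, then compute one similarity."""
    u = [x for p in profiles_u for x in p]
    v = [x for p in profiles_v for x in p]
    return cosine(u, v)

def average_similarity(profiles_u, profiles_v):
    """Scheme 2: unweighted mean of the per-profile similarities."""
    sims = [cosine(u, v) for u, v in zip(profiles_u, profiles_v)]
    return sum(sims) / len(sims)

def weighted_average_similarity(profiles_u, profiles_v, weights):
    """Scheme 3: weighted mean of the per-profile similarities
    (the best-performing scheme in the study's experiments)."""
    sims = [cosine(u, v) for u, v in zip(profiles_u, profiles_v)]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)
```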

Future Direction of National Health Insurance (국민건강보험 발전방향)

  • Park, Eun-Cheol
    • Health Policy and Management / v.27 no.4 / pp.273-275 / 2017
  • It has been forty years since the implementation of National Health Insurance (NHI) in South Korea. Following the 1977 legislation mandating medical insurance for employees and dependents in firms with more than 500 employees, South Korea expanded its health insurance to urban residents in 1989. As a result, the total expenses of the National Health Insurance Service (NHIS) have grown greatly, from 4.5 billion won in 1977 to 50.89 trillion won in 2016. With multiple insurers merging into the NHI system in 2000, a single-payer healthcare system emerged, along with the policy separating prescribing and dispensing. Following this reform, an emerging financial crisis required injections from the National Health Promotion Fund. Forty years after the introduction of the NHI system, it has drawn both praise and criticism. In just 12 years, the NHI achieved the fastest population coverage of health insurance in the world. Current medical expenditure is not high relative to the rest of the Organization for Economic Cooperation and Development. The quality of acute care in Korea is among the best in the world, and there is no sign of delayed diagnosis and/or treatment for most diseases. However, the NHI has been under-insured, requiring high levels of out-of-pocket payment from patients and often causing catastrophic medical expenses. Furthermore, the current environment of the NHI threatens its sustainability. A declining birth rate, as well as slow economic growth, will make sustaining the current healthcare system difficult in the near future. An aging population will increase medical expenditure, especially with the baby-boomer generation born between 1955 and 1965. Meanwhile, there is always the question of unification of the Korean Peninsula, and of what role the health insurance system will have to play when it occurs.
In the presidential election, health insurance was a main issue; however, the focus was more on expansion and expenditure than on revenue. Many aspects of Korea's NHI system (1977) were modeled after the German (1883) and Japanese (1922) systems. Those systems were created in an era when infectious disease control was most urgent; in the current non-communicable disease (NCD) era they must be redesigned. The Korean system, already forty years old, must be redesigned completely. Although expansion of health insurance benefits is necessary, financial measures, as well as moral-hazard control measures, must also be considered. Ultimately, there are three aspects we must consider in redesigning the system. First, the health security system must be reformed. NHI and Medical Aid must be amalgamated into one system for greater effectiveness and efficiency, and within the single-insurer NHI there must be an internal market for maximum efficiency. The NHIS must be separated into regions so that regional organizations take greater responsibility for their actions. Although premiums must continue to be imposed nationally, funds must be distributed regionally through risk adjustment and assessed by different regional systems. Second, as a solution for decreasing insurance revenue, the low premium level must be raised to an appropriate level. Likewise, the national reserve fund (No. 36, National Health Insurance Act) must be enlarged in preparation for re-unification. Third, there must be revolutionary reform of the benefit package. The current system was built with a focus on communicable diseases, which is inappropriate in the NCD era. Medical benefits must not be one-time events but must provide chronic disease management. Chronic care models, accountable care organizations, patient-centered medical homes, and other systems that introduce various benefit packages for beneficiaries must be implemented.
The reimbursement system for medical costs should be diversified into different systems for different types of care, as with Part C (the Medicare Advantage Program) of America's Medicare system, which substitutes for Parts A and B. Pay-for-performance must be expanded so that not only quality of care but also medical costs improve. Moreover, beneficiaries of the NHI system must be made aware of the amount of their expenditure through a deductible payment system, so that spending can be profiled and monitored. The Moon Jae-in Government has announced plans to expand the NHI system; however, it is important that a discussion forum be created so that a more accurate analysis of the NHI, its environment, and the current status of the healthcare system can take place in reforming the NHI.

Consideration About the Bacterial Endotoxin Test Showing False Positive Test Result When Performing LAL Test (LAL Test에서 위양성을 나타내는 원인들에 대한 고찰)

  • Hwang, Ki-Young;Cho, Yong-Hyun;Lee, Yong-Suk;Kim, Hyung-Woo;Lee, Hong-Jae;Kim, Hyun-Ju
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.156-158 / 2009
  • Purpose: Since radiopharmaceuticals are intended for human administration, it is imperative that they undergo very strict quality control. Almost all PET laboratories have now adopted the Bacterial Endotoxin Test as the standard quality-control method to monitor whether the product vial containing the crude solution is pyrogen-free. The aim of this study was to find out why false positive results are observed when using commercially available test vials. Materials and Methods: For this experiment, we used a commercially available single test kit (Associates of Cape Cod, Inc., USA) and prepared pH samples by mixing buffers covering the pH range 1.0 to 12.0, as well as ethanol (Fisher Scientific Korea, Ltd.) samples diluted with distilled water. After preparing the test samples, 0.2 mL of each was added to the test vial. The assay mixture in the test vial was incubated in a water bath (Chang Shin Co., KOR) for 60 min at $35{\pm}2^{\circ}C$. Results: After the incubation period ($60{\pm}1$ min), we inverted the test vial about $180^{\circ}$ to determine which pH values and which ethanol concentrations affect the reaction. With the pH buffers, false positive results were observed at pH 1.0 to 5.0 and 7.7 to 12.6, whereas at pH 5.2 to 7.5 the test results were negative. Strangely, we could not observe negative test results with Tris buffer at pH 8.4, 8.6, 8.8, and 9.0. With the ethanol samples, the same result was observed at 5 to 10% ethanol, and, surprisingly, very thick gel formation was seen with 100% ethanol. Conclusions: In this study, we found that strongly acidic or alkaline pH, or highly concentrated ethanol, can affect the Bacterial Endotoxin Test result. The LAL test is a sensitive and very reliable method; we therefore need to elicit test results that are as accurate as possible.


Validating a New Approach to Quantify Posterior Corneal Curvature in Vivo (각막 후면 지형 측정을 위한 새로운 방법의 신뢰도 분석 및 평가)

  • Yoon, Jeong Ho;Avudainayagam, Kodikullam;Avudainayagam, Chitralekha;Swarbrick, Helen A.
    • Journal of Korean Ophthalmic Optics Society / v.17 no.2 / pp.223-232 / 2012
  • Purpose: To validate a new research method for determining posterior corneal curvature and asphericity (Q) in vivo, based on measurements of anterior corneal topography and corneal thickness. Methods: Anterior corneal topographic data, derived from the Medmont E300 corneal topographer, and total corneal thickness data measured along the horizontal corneal meridian using the Holden-Payor optical pachometer were used to calculate the anterior and posterior corneal apical radii of curvature and Q. To calculate accurate total corneal thickness, the local radius of anterior corneal curvature and an exact solution for the relationship between real and apparent thickness were taken into consideration; this method differs from previous approaches. Elliptical curves for the anterior and posterior cornea were calculated using a best-fit algorithm on the anterior corneal topographic data and the derived coordinates of the posterior cornea, respectively. To validate the calculations of posterior corneal topography, ten polymethyl methacrylate (PMMA) lenses and the right eyes of five adult subjects were examined. Results: The mean absolute accuracy (${\pm}$standard deviation (SD)) of the calculated posterior apical radius and Q of the ten PMMA lenses was $0.053{\pm}0.044mm$ (95% confidence interval (CI) -0.033 to 0.139) and $0.10{\pm}0.10$ (95% CI -0.10 to 0.31), respectively. The mean absolute repeatability coefficient (${\pm}SD$) of the calculated posterior apical radius and Q of the five human eyes was $0.07{\pm}0.06mm$ (95% CI -0.05 to 0.19) and $0.09{\pm}0.07$ (95% CI -0.05 to 0.23), respectively. Conclusions: The results show that acceptable accuracy was achieved in the calculations of posterior apical radius and Q. This new method shows promise for application to the living human cornea.
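The best-fit step can be illustrated with a small sketch. For a conic (elliptical) meridional profile with apical radius $r_0$ and asphericity Q, the sag relation is $y^2 = 2 r_0 x - (1+Q)x^2$ (x = sag, y = semi-chord), so a linear least-squares fit of $y^2$ against x and $x^2$ recovers both parameters. The data below are synthetic and noise-free; the study's actual fitting procedure is not detailed in the abstract:

```python
# Hedged sketch: recover apical radius r0 and asphericity Q by linear
# least squares on the conic sag equation y^2 = a*x + b*x^2,
# where a = 2*r0 and b = -(1 + Q).
def fit_conic(xs, ys):
    """Fit y^2 = a*x + b*x^2 by least squares; return (r0, Q)."""
    sxx = sum(x * x for x in xs)          # sum x^2
    sxx2 = sum(x ** 3 for x in xs)        # sum x^3
    sx2x2 = sum(x ** 4 for x in xs)       # sum x^4
    sxy = sum(x * y * y for x, y in zip(xs, ys))        # sum x*y^2
    sx2y = sum(x * x * y * y for x, y in zip(xs, ys))   # sum x^2*y^2
    det = sxx * sx2x2 - sxx2 * sxx2
    a = (sxy * sx2x2 - sx2y * sxx2) / det
    b = (sxx * sx2y - sxx2 * sxy) / det
    return a / 2.0, -b - 1.0

# Synthetic points on a known conic: r0 = 7.8 mm, Q = -0.2 (typical
# anterior-cornea-like values, chosen for illustration only).
r0_true, q_true = 7.8, -0.2
xs = [0.05 * i for i in range(1, 21)]
ys = [(2 * r0_true * x - (1 + q_true) * x * x) ** 0.5 for x in xs]
r0_fit, q_fit = fit_conic(xs, ys)  # recovers r0_true and q_true
```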

The Investigation Image-guided Radiation Therapy of Bladder Cancer Patients (방광암 환자의 영상유도 방사선치료에 관한 고찰)

  • Bae, Seong-Soo;Bae, Sun-Myoung;Kim, Jin-San;Kang, Tae-Young;Back, Geum-Mun;Kwon, Kyung-Tae
    • The Journal of Korean Society for Radiation Therapy / v.24 no.1 / pp.39-43 / 2012
  • Purpose: At our hospital, image-guided radiation therapy for bladder cancer patients is delivered after injecting saline into the bladder, in an amount adjusted to each patient's condition, to enhance the reproducibility of the bladder volume; treatment positioning uses three-dimensional matching (3D-3D matching) of Cone-Beam CT images acquired with the On-Board Imager system (OBI, VARIAN, USA). In this study, we analyzed the Cone-Beam CT images obtained during treatment of bladder cancer patients to examine the difference between bone-based matching and bladder-based matching, observed the changes in the volume of the saline-filled bladder, and discussed which image matching is more appropriate for treating these patients. Materials and Methods: Seven bladder cancer patients who received radiation therapy at our hospital from January 2009 to April 2010 were studied. After residual urine was removed from the bladder with a Foley catheter, an amount of saline determined individually for each patient was injected, and CT simulation was performed, followed by treatment planning. Before each treatment, a Cone-Beam CT scan was acquired with the OBI to confirm the patient's position, and matching was performed by the physician in charge for all patients. For a total of 45 CBCT image sets, the differences between bone-based image matching and bladder-based image matching were analyzed, and changes in bladder volume were obtained through Eclipse (version 8.0, VARIAN, USA). Results: The differences between bone-based matching and subsequent bladder-based re-matching were $3{\pm}2mm$ on the X axis, $1.8{\pm}1.3mm$ on the Y axis, and $2.3{\pm}1.7mm$ on the Z axis, with an overall displacement of $4.8{\pm}2.0mm$. The bladder volume differed from baseline by $4.03{\pm}3.97%$. Conclusion: Because of the anatomical location and internal motion of the bladder, the bladder occupied a different position even after bone-based image matching.
In addition, although the volume of the saline-filled bladder differed by 4.03 percent, we confirmed that the bladder was included in the planned volume in both matched images. Thus, bladder-based image matching after saline injection should enable more accurate treatment.
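The per-axis differences above combine into an overall 3D displacement as a Euclidean norm; a minimal sketch (note that the mean of per-fraction magnitudes is generally larger than the magnitude computed from the per-axis means, which is why the reported overall mean of 4.8 mm exceeds the roughly 4.2 mm obtained from the mean per-axis values):

```python
# Sketch: overall 3D couch-shift magnitude from per-axis shifts (mm).
import math

def shift_magnitude(dx, dy, dz):
    """Euclidean norm of a 3D shift vector, in mm."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

# Norm of the reported *mean* per-axis shifts; per-fraction magnitudes
# would be computed the same way from each fraction's (dx, dy, dz).
approx_overall = shift_magnitude(3.0, 1.8, 2.3)  # about 4.19 mm
```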


Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems / v.19 no.2 / pp.117-137 / 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous context data have become readily available, making vehicle route planning easier than ever. Previous research on optimizing vehicle route planning merely focused on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems in distance-based route planning, because this kind of information does not have a significant impact on routing until a complex traffic situation arises. Further, it was not easy to take traffic contexts fully into account, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting moving-cost data has emerged. Hence, this research proposes a framework for resolving an optimal route planning problem that takes full account of additional moving costs such as road traffic cost and weather cost; recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. The framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity, and estimate the optimal moving cost with dynamic programming that accounts for the context cost according to the variance of contexts.
Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic conditions. The velocity reduction rate indicates the degree of velocity reduction a moving vehicle can expect under particular road and traffic contexts, based on statistical or experimental data. The knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route may change because its velocity varies with unexpected but possible dynamic situations on the road. This study includes the context variables 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition and affect a moving vehicle's velocity. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing reduction of vehicle velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduce the velocity reduction rate into the context cost, calculating a vehicle's velocity under composite contexts when one event synchronizes with another. We then propose a context-based optimal route (shortest path) algorithm based on dynamic programming, composed of three steps. In the first, initialization step, the departure and destination locations are given and the path step is initialized to 0.
In the second step, the moving costs between locations on the path, taking composite contexts into account, are estimated using the velocity reduction rate per context as the path step increases. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we designed a framework comprising context awareness, moving-cost estimation (taking both composite and single contexts into account), and an optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solutions (shortest paths) of distance-based route planning may not be optimal in real situations, because road conditions are dynamic and unpredictable and affect most vehicles' moving costs. Although more information is needed for a more accurate estimation of moving vehicles' costs, this study remains viable for applications that reduce moving costs through effective route planning. For instance, it could support deliverers' decision making, enhancing their decision satisfaction when they meet unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as part of the cost is a meaningful and sensible approach to resolving the optimal route problem.
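The core cost idea above, edge travel times inflated by a context-dependent velocity reduction rate, can be sketched as follows. Note that this sketch uses Dijkstra's algorithm rather than the paper's step-indexed dynamic programming with back-tracking, and the graph, free-flow speed, and reduction rates are invented for illustration:

```python
# Hedged sketch: shortest-time routing where each edge's travel time is
# distance / (free-flow speed * (1 - velocity_reduction_rate)).
# The graph and all numbers are illustrative, not the study's data.
import heapq

FREE_FLOW_KMH = 100.0  # assumed free-flow speed

def edge_cost(distance_km, reduction_rate):
    """Travel time (h) under a context-induced velocity reduction."""
    effective_kmh = FREE_FLOW_KMH * (1.0 - reduction_rate)
    return distance_km / effective_kmh

def shortest_time(graph, src, dst):
    """Dijkstra over context-weighted travel times.
    graph: {node: [(neighbor, distance_km, reduction_rate), ...]}"""
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, d_km, rr in graph.get(u, []):
            nt = t + edge_cost(d_km, rr)
            if nt < best.get(v, float("inf")):
                best[v] = nt
                heapq.heappush(pq, (nt, v))
    return float("inf")

# Congestion (reduction 0.6) on the direct link makes a longer detour faster.
g = {
    "A": [("B", 50, 0.6), ("C", 40, 0.0)],
    "C": [("B", 30, 0.0)],
}
```

With these numbers the congested direct link A-B costs 1.25 h while the detour via C costs 0.7 h, illustrating why a distance-based shortest path can be suboptimal under dynamic contexts.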

Development and Validation of an Analytical Method for Determination of Fungicide Benzovindiflupyr in Agricultural Commodities Using LC-MS/MS (LC-MS/MS를 이용한 농산물 중 살균제 벤조빈디플루피르의 잔류시험법 개발 및 검증)

  • Lim, Seung-Hee;Do, Jung-Ah;Park, Shin-Min;Pak, Won-Min;Yoon, Ji Hye;Kim, Ji Young;Chang, Moon-Ik
    • Journal of Food Hygiene and Safety / v.32 no.4 / pp.298-305 / 2017
  • Benzovindiflupyr is a new pyrazole-carboxamide fungicide that inhibits succinate dehydrogenase in the mitochondrial respiratory chain. This study was carried out to develop an analytical method for the determination of benzovindiflupyr residues in agricultural commodities using LC-MS/MS. Benzovindiflupyr residues in samples were extracted with acetonitrile, partitioned with dichloromethane, and then purified with a silica solid-phase extraction (SPE) cartridge. The correlation coefficient ($r^2$) of the benzovindiflupyr standard solution was 0.99 over the calibration range ($0.001{\sim}0.5{\mu}g/mL$). Recovery tests were conducted on 5 representative agricultural commodities (mandarin, green pepper, potato, soybean, and hulled rice) to validate the analytical method. The recoveries ranged from 79.3% to 110.0%, and the relative standard deviation (RSD) was less than 9.1%. The limit of detection (LOD) and limit of quantification (LOQ) were 0.0005 and 0.005 mg/kg, respectively. The recoveries in the interlaboratory validation ranged from 83.4% to 117.3%, and the coefficient of variation (CV) was 9.0%. All results complied with the Codex guideline (CAC/GL 40) and the Ministry of Food and Drug Safety guideline (MFDS, 2016). The proposed analytical method proved to be accurate, effective, and sensitive for benzovindiflupyr determination and could be used as an official analytical method.
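The recovery and RSD figures reported above are standard validation statistics and can be sketched in a few lines; the replicate concentrations below are illustrative, not the study's measurements:

```python
# Sketch of the recovery (%) and RSD (%) statistics used to validate
# the residue method. Replicate values are invented for illustration.
from statistics import mean, stdev

def recovery_percent(measured_values, spiked_level):
    """Mean recovery (%) of replicates at a given spiking level (mg/kg)."""
    return mean(measured_values) / spiked_level * 100.0

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements."""
    return stdev(values) / mean(values) * 100.0

# Illustrative replicates (mg/kg) for a hypothetical 0.05 mg/kg spike.
reps = [0.046, 0.049, 0.044, 0.047, 0.048]
rec = recovery_percent(reps, 0.05)  # within the reported 79.3-110.0% band
rsd = rsd_percent(reps)             # within the reported <9.1% band
```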