• Title/Summary/Keyword: Detection accuracy


Optimization of Analytical Method for Annatto Pigment in Foods (식품 중 안나토색소 분석법 최적화 연구)

  • Lee, Jiyeon;Park, Juhee;Lee, Jihyun;Suh, Hee-Jae;Lee, Chan
    • Journal of Food Hygiene and Safety / v.36 no.4 / pp.298-309 / 2021
  • In this study, we sought to develop a simultaneous analysis method for cis-bixin and cis-norbixin, the main components of annatto pigment, to detect the pigment in food. To establish the optimal test method, the HPLC analysis methods of the European Food Safety Authority (EFSA), Japan's Ministry of Health, Labour and Welfare (MHLW), and the National Institute of Food and Drug Safety Evaluation (NIFDS) were compared and reviewed. In addition, a new pretreatment method applicable to various foods was developed after selecting conditions for simultaneous high-performance liquid chromatography (HPLC) analysis in consideration of linearity, limit of detection (LOD), limit of quantification (LOQ), and analysis time. The NIFDS method showed the best linearity (R² ≥ 0.999), with low LODs of 0.03 and 0.05 µg/mL and LOQs of 0.097 and 0.16 µg/mL for cis-norbixin and cis-bixin, respectively. All previously reported pretreatment methods had limitations across food applications, whereas the new pretreatment method showed a high recovery rate for all three main food groups: fish and meat products, processed cheese, and beverages. It achieved a simultaneous recovery rate of 98% or more for cis-bixin and cis-norbixin. Combined with the new pretreatment, the HPLC method showed high linearity with a coefficient of determination (R²) of 1 for both substances, an accuracy (recovery rate) of at least 98%, and a precision (%RSD) of 0.4-7.9%. These results indicate that the optimized analytical method is well suited for the simultaneous analysis of cis-bixin and cis-norbixin, the two main components of annatto pigment in food.
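LOD and LOQ figures like those quoted above are typically derived from the calibration curve. A minimal sketch of that calculation, using the common ICH convention (LOD = 3.3σ/S, LOQ = 10σ/S) with made-up calibration data, not the paper's values:

```python
# Sketch: ICH-style LOD/LOQ estimation from a calibration curve.
# All numbers below are illustrative, not taken from the paper.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(sigma, slope):
    """LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where sigma is the standard
    deviation of the response and S the calibration slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration: concentration (ug/mL) vs. peak area
conc = [0.1, 0.5, 1.0, 5.0, 10.0]
area = [12.0, 61.0, 119.0, 602.0, 1198.0]
slope, intercept = linear_fit(conc, area)
residual_sd = 1.5  # assumed std. dev. of low-level responses
lod, loq = lod_loq(residual_sd, slope)
```

The ratio LOQ/LOD is fixed at 10/3.3 under this convention, which is why reported LOD and LOQ pairs usually differ by roughly a factor of three.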

Development and Validation of Analytical Method and Antioxidant Effect for Berberine and Palmatine in P. amurense (황백의 지표성분 berberine과 palmatine의 분석법 개발과 검증 및 항산화 효능 평가)

  • Jang, Gill-Woong;Choi, Sun-Il;Han, Xionggao;Men, Xiao;Kwon, Hee-Yeon;Choi, Ye-Eun;Park, Byung-Woo;Kim, Jeong-Jin;Lee, Ok-Hwan
    • Journal of Food Hygiene and Safety / v.35 no.6 / pp.544-551 / 2020
  • The aim of this study was to develop and validate a simultaneous analytical method for berberine and palmatine, the marker compounds of Phellodendron amurense, and to evaluate their antioxidant activity. We evaluated the specificity, linearity, precision, accuracy, limit of detection (LOD), and limit of quantification (LOQ) of the method using high-performance liquid chromatography. The correlation coefficients of the calibration curves for berberine and palmatine were 0.9999. The LODs for berberine and palmatine were 0.32 and 0.35 µg/mL, and the LOQs were 0.97 and 1.06 µg/mL, respectively. The inter-day and intra-day precision values ranged from 0.12 to 1.93% and from 0.19 to 2.89%, and the inter-day and intra-day accuracies were 98.43-101.45% and 92.39-100.60%, respectively. The simultaneous analytical method was thus validated for the detection of berberine and palmatine. In addition, we conducted FRAP and NaNO2 scavenging activity assays to measure the antioxidant activities of berberine and palmatine, and both showed antioxidant activity. These results suggest that P. amurense could be a natural resource with antioxidant potential, and that its efficacy can be confirmed by investigating its berberine and palmatine content.
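The precision (%RSD) and accuracy (recovery) figures reported in validation studies like this one follow standard definitions. A small sketch with hypothetical replicate data (not the paper's measurements):

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation: %RSD = 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def recovery_percent(measured, spiked):
    """Accuracy expressed as recovery (%) of a known spiked amount."""
    return 100.0 * measured / spiked

# Hypothetical intra-day replicates of a sample spiked at 10 ug/mL
replicates = [9.8, 10.1, 9.9, 10.0, 10.2]
rsd = percent_rsd(replicates)
rec = recovery_percent(statistics.mean(replicates), 10.0)
```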

Multi-resolution SAR Image-based Agricultural Reservoir Monitoring (농업용 저수지 모니터링을 위한 다해상도 SAR 영상의 활용)

  • Lee, Seulchan;Jeong, Jaehwan;Oh, Seungcheol;Jeong, Hagyu;Choi, Minha
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.497-510 / 2022
  • Agricultural reservoirs are essential structures for water supply during dry periods on the Korean peninsula, where water resources are unevenly distributed in time. Efficient water management requires systematic and effective monitoring of medium-small reservoirs. Synthetic Aperture Radar (SAR), with its all-weather observation capability, provides a way to monitor them continuously. This study evaluates the applicability of SAR for monitoring medium-small reservoirs using Sentinel-1 (10 m resolution) and Capella X-SAR (1 m resolution) at the Chari (CR), Galjeon (GJ), and Dwitgol (DG) reservoirs in Ulsan, Korea. Water detection results from a Z fuzzy function-based threshold (Z-thresh) and from Chan-Vese (CV), an object detection-based segmentation algorithm, were quantitatively evaluated against UAV-detected water boundaries (UWB). Accuracy metrics from Z-thresh were 0.87, 0.89, and 0.77 (at CR, GJ, and DG, respectively) using Sentinel-1 and 0.78, 0.72, and 0.81 using Capella, and improvements were observed when CV was applied (Sentinel-1: 0.94, 0.89, 0.84; Capella: 0.92, 0.89, 0.93). Waterbody boundaries detected from Capella agreed relatively well with UWB; however, false detections and missed detections occurred due to speckle noise arising from its high resolution. When masked with optical sensor-based supplementary images, improvements of up to 13% were observed. With a more accurate and precise SAR-based water detection technique, continuous monitoring of available water quantity should enable more effective water resource management.
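A Z-shaped fuzzy membership function of the kind the Z-thresh method names maps low SAR backscatter smoothly to high water membership. A sketch with illustrative parameters (the paper's calibrated breakpoints are not given, so a, b and the cut level below are assumptions):

```python
def z_membership(x, a, b):
    """Z-shaped fuzzy membership (as in MATLAB's zmf): 1 below a,
    0 above b, with a smooth S-curve descent in between. Dark SAR
    pixels (low backscatter) get membership close to 1 (water)."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 1.0 - 2.0 * ((x - a) / (b - a)) ** 2
    return 2.0 * ((x - b) / (b - a)) ** 2

def classify_water(backscatter_db, a=-22.0, b=-14.0, cut=0.5):
    """Label a pixel as water when its membership exceeds a cut level.
    a, b and cut are illustrative, not the paper's calibrated values."""
    return z_membership(backscatter_db, a, b) >= cut
```

Compared with a hard threshold, the fuzzy formulation lets pixels near the decision boundary be graded rather than flipped by a fraction of a decibel of speckle.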

Mobile Camera-Based Positioning Method by Applying Landmark Corner Extraction (랜드마크 코너 추출을 적용한 모바일 카메라 기반 위치결정 기법)

  • Yoo Jin Lee;Wansang Yoon;Sooahm Rhee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1309-1320 / 2023
  • The technological development and popularization of mobile devices allow users to check their location anywhere and use the Internet. Indoors, however, while the Internet works smoothly, the global positioning system (GPS) is difficult to use. There is an increasing need to provide real-time location information in GPS-shaded indoor public places such as department stores, museums, conference halls, schools, and tunnels. Accordingly, recent indoor positioning research has increasingly relied on light detection and ranging (LiDAR) equipment to build landmark databases. Focusing on the accessibility of landmark database construction, this study develops a technique for estimating the user's location from a single mobile-device image of a landmark together with a pre-built landmark database. First, the landmark database was constructed: to estimate the user's location from a mobile image alone, it is essential to detect the landmark in the image and obtain the ground coordinates of points with fixed characteristics on the detected landmark. Second, bag of words (BoW) image retrieval was applied to find the four landmarks in the database most similar to the one photographed in the mobile image. Third, one of the four candidate landmarks was selected using scale-invariant feature transform (SIFT) feature extraction and homography estimation with random sample consensus (RANSAC), with an additional filtering step based on a threshold on the number of matching points. Fourth, the landmark image was projected onto the mobile image through the homography matrix between the corresponding landmark and the mobile image to detect the landmark's area and corners. Finally, the user's location was estimated with a location estimation technique. In the performance analysis, landmark retrieval accuracy was about 86%. Comparing the estimated location with the user's actual ground coordinates confirmed a horizontal positional accuracy of about 0.56 m, showing that a user's location can be estimated from a mobile image using a landmark database built without separate expensive equipment.
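The fourth step projects landmark corners into the mobile image through the estimated homography. A minimal sketch of that projection, with a made-up homography matrix standing in for the one RANSAC would estimate:

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography to Nx2 points via homogeneous coordinates,
    as when mapping landmark-image corners into the mobile image."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Hypothetical homography: pure translation by (5, -3) pixels
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
corners = [[0, 0], [100, 0], [100, 50], [0, 50]]  # landmark image corners
mobile_corners = project_points(H, corners)
```

In practice the matrix would come from a routine such as OpenCV's `findHomography` over the RANSAC-filtered SIFT matches; the projection step itself is exactly this normalization by the homogeneous coordinate.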

Evaluation of Applicability of Sea Ice Monitoring Using Random Forest Model Based on GOCI-II Images: A Study of Liaodong Bay 2021-2022 (GOCI-II 영상 기반 Random Forest 모델을 이용한 해빙 모니터링 적용 가능성 평가: 2021-2022년 랴오둥만을 대상으로)

  • Jinyeong Kim;Soyeong Jang;Jaeyeop Kwon;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1651-1669 / 2023
  • Sea ice currently covers approximately 7% of the world's ocean area, primarily in polar and high-latitude regions, and is subject to seasonal and annual variation. Because sea ice forms in various types over large spatial scales, and because oil and gas exploration and other marine activities are rapidly increasing, analyzing its area and classifying its types through time-series monitoring is very important. Current research on sea ice type and area is based on high-resolution satellite images and field measurement data, but the need to acquire field measurements limits the scale of sea ice monitoring. High-resolution optical satellite images can visually detect and identify sea ice types over a wide range and can fill gaps in monitoring based on the Geostationary Ocean Color Imager-II (GOCI-II), an ocean satellite with high temporal resolution. This study examined the feasibility of sea ice monitoring by training a rule-based machine learning model on training data produced from high-resolution optical satellite images and performing detection on GOCI-II images. Training data were extracted for Liaodong Bay in the Bohai Sea from 2021 to 2022, and a Random Forest (RF) model using GOCI-II was constructed and compared, qualitatively and quantitatively, with sea ice areas obtained from the existing normalized difference snow index (NDSI) approach and from high-resolution satellite images. Unlike the NDSI-based results, which underestimated the sea ice area, this study detected relatively detailed sea ice areas and confirmed that sea ice can be classified by type, enabling sea ice monitoring. If detection accuracy is improved through continued construction of training data and consideration of factors influencing sea ice formation, the model is expected to be useful for sea ice monitoring in high-latitude ocean areas.
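The NDSI baseline the study compares against is a simple normalized band ratio. A sketch using the conventional formula and a commonly used threshold (both illustrative defaults, not the paper's settings):

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index: (green - SWIR) / (green + SWIR).
    Snow and ice reflect strongly in green but absorb in SWIR, so high
    NDSI suggests snow/ice cover."""
    denom = green + swir
    if denom == 0:
        return 0.0
    return (green - swir) / denom

def is_ice(green, swir, threshold=0.4):
    """Flag a pixel as ice above a threshold; 0.4 is a conventional
    default, not a value calibrated in the study."""
    return ndsi(green, swir) > threshold
```

A fixed global threshold like this is exactly what tends to underestimate thin or sediment-laden ice, which is the motivation the abstract gives for the trained RF model.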

Studies on Xylooligosaccharide Analysis Method Standardization using HPLC-UVD in Health Functional Food (건강기능식품에서 HPLC-UVD를 이용한 자일로올리고당 시험법의 표준화 연구)

  • Se-Yun Lee;Hee-Sun Jeong;Kyu-Heon Kim;Mi-Young Lee;Jung-Ho Choi;Jeong-Sun Ahn;Kwang-Il Kwon;Hye-Young Lee
    • Journal of Food Hygiene and Safety / v.39 no.2 / pp.72-82 / 2024
  • This study aimed to develop a scientifically and systematically standardized xylooligosaccharide analytical method applicable to products with various formulations. The analysis used HPLC with a Cadenza C18 column, pre-column derivatization with 1-phenyl-3-methyl-5-pyrazolone (PMP), and UV detection at 254 nm. Xylooligosaccharide content was determined by converting xylooligosaccharide into xylose through acid hydrolysis. Pre-treatment methods were compared and evaluated by varying sonication time, acid hydrolysis time, and acid concentration. Optimal instrument conditions used a mobile phase of 20 mM potassium phosphate buffer (pH 6)-acetonitrile (78:22, v/v) with isocratic elution at a flow rate of 0.5 mL/min and detection at 254 nm. Furthermore, we validated the standardized method for specificity, linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, and precision to support the suitability of the proposed procedure. The standardized method is now in use for monitoring relevant health functional food products on the market, and our results indicate it should enhance the reliability of quality control for health functional foods containing xylooligosaccharide.

Diagnostic Value of CYFRA 21-1 Measurement in Fine-Needle Aspiration Washouts for Detection of Axillary Recurrence in Postoperative Breast Cancer Patients (유방암 수술 후 액와림프절 재발 진단에 있어서의 미세침세척액 CYFRA 21-1의 진단적 가치)

  • So Yeon Won;Eun-Kyung Kim;Hee Jung Moon;Jung Hyun Yoon;Vivian Youngjean Park;Min Jung Kim
    • Journal of the Korean Society of Radiology / v.81 no.1 / pp.147-156 / 2020
  • Purpose The objective of this study was to evaluate the diagnostic value and threshold levels of cytokeratin fragment 21-1 (CYFRA 21-1) in fine-needle aspiration (FNA) washouts for detection of lymph node (LN) recurrence in postoperative breast cancer patients. Materials and Methods FNA cytological assessments and CYFRA 21-1 measurement in FNA washouts were performed for 64 axillary LNs suspicious for recurrence in 64 post-operative breast cancer patients. Final diagnosis was made on the basis of FNA cytology and follow-up data over at least 2 years. The concentration of CYFRA 21-1 was compared between recurrent LNs and benign LNs. Diagnostic performance and cut-off value were evaluated using a receiver operating characteristic curve. Results Regardless of the non-diagnostic results, the median concentration of CYFRA 21-1 in recurrent LNs was significantly higher than that in benign LNs (p < 0.001). The optimal diagnostic cut-off value was 1.6 ng/mL. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of CYFRA 21-1 for LN recurrence were 90.9%, 100%, 100%, 98.1%, and 98.4%, respectively. Conclusion Measurement of CYFRA 21-1 concentration from ultrasound-guided FNA biopsy aspirates showed excellent diagnostic performance with a cut-off value of 1.6 ng/mL. These results indicate that measurement of CYFRA 21-1 concentration in FNA washouts is useful for the diagnosis of axillary LN recurrence in post-operative breast cancer patients.
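An ROC-optimal cut-off like the 1.6 ng/mL reported here is often chosen by maximizing Youden's J statistic (sensitivity + specificity - 1). A sketch of that selection with hypothetical concentrations and labels, not the study's measurements:

```python
def best_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sens + spec - 1,
    one common way to choose an ROC-optimal cut-off."""
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical washout concentrations (ng/mL); 1 = recurrent LN, 0 = benign
scores = [0.2, 0.4, 0.5, 1.6, 3.0, 8.5, 0.3, 12.0]
labels = [0,   0,   0,   1,   1,   1,   0,   1]
cutoff, j = best_cutoff(scores, labels)
```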

A Study on the Availability of the On-Board Imager(OBI) and Cone-Beam CT(CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal / v.26 no.2 / pp.118-125 / 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images and cone beam CT images were obtained with an on-board imager (OBI) and cone beam CT (CBCT), respectively. The images were then compared with simulation images to evaluate the patient's setup and correct for deviations. Setup deviations between the acquired images (kV or CBCT) and the simulation images were computed with 2D/2D and 3D/3D match programs, respectively, and we investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The setup was performed with side-wall lasers, which standardized the treatment setup of the phantom against the simulation images, after tolerance limits for laser line thickness were established. After a known translation or rotation was applied to the phantom, kV X-ray images and CBCT images were obtained, and 2D/2D and 3D/3D matches against the simulation CT images were performed. The results were analyzed for accuracy of positional correction. Results: With the 2D/2D match using kV X-ray and simulation images, setup correction was possible within 0.06° for rotation only, 1.8 mm for translation only, and 2.1 mm and 0.3° for combined rotation and translation. With the 3D/3D match using CBCT images, correction was possible within 0.03° for rotation only, 0.16 mm for translation only, and 1.5 mm and 0.0° for combined translation and rotation. Conclusion: Using OBI or CBCT for on-line IGRT makes it possible to reproduce the simulated setup exactly when positioning a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI and, with higher accuracy, in three dimensions via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
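At its core, a 2D/2D match reduces to estimating the rigid transform (rotation plus translation) that best aligns matched points between acquired and simulation images. A least-squares sketch of that estimate, with hypothetical marker coordinates:

```python
import math
import numpy as np

def rigid_2d(src, dst):
    """Least-squares rigid (rotation + translation) alignment of 2D point
    sets via SVD, the kind of estimate a 2D/2D match reports as a setup
    deviation. Returns the rotation angle in degrees and the translation."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    angle = math.degrees(math.atan2(R[1, 0], R[0, 0]))
    t = dc - R @ sc
    return angle, t

# Hypothetical markers: phantom shifted by (2 mm, 1 mm), no rotation
src = [[0, 0], [10, 0], [0, 10]]
dst = [[2, 1], [12, 1], [2, 11]]
angle, t = rigid_2d(src, dst)
```

The recovered angle and translation are exactly the "setup deviation" a matching program would report for correction.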

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1993.06a / pp.1051-1054 / 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable.
A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation.
For the purpose of analyzing implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of access of inspectors; falsification of records, reports and other documents; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur.
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independent of the diversion route taken. Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent diversion and can only deter, through the risk of early detection, any contemplation by State authorities of carrying one out. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, a realistic appraisal of the truth needs sound human judgment.
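The material accountancy closure described above can be expressed as a one-line balance over a material balance area; a sketch with hypothetical inventory figures:

```python
def material_unaccounted_for(begin_inv, receipts, shipments, end_inv):
    """Material accountancy closure for a material balance area:
    MUF = (beginning inventory + receipts) - (shipments + ending inventory).
    A MUF beyond measurement uncertainty is an anomaly to be resolved."""
    return (begin_inv + receipts) - (shipments + end_inv)

# Hypothetical balance, in kg of nuclear material
muf = material_unaccounted_for(begin_inv=120.0, receipts=40.0,
                               shipments=35.0, end_inv=124.2)
```

The fuzziness the paper points to enters in judging whether a nonzero MUF reflects measurement uncertainty or a diversion, which is where the fuzzy-set treatment and human judgment come in.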


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is often hard to determine. Previous studies predicting failure in data centers treated each server as a single, isolated state, without assuming interaction among devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The cause of failures occurring inside servers, on the other hand, is difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur singly: one server's failure can cause failures on other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that servers do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes is defined as occurring simultaneously. After building sequences of devices that failed at the same time, the 5 devices that most frequently co-occurred within the sequences were selected, and their simultaneous failures were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used to account for the fact that the level of multiple failures differs per server; this method increases prediction accuracy by giving greater weight to servers with greater impact on a failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated both as single-server states and as a multi-server state, and the results were compared. The second experiment improved prediction accuracy for the complex-server case by optimizing each server's threshold.
In the first experiment, which compared the single-server and multi-server assumptions, the single-server models predicted no failure for three of the five servers even though failures actually occurred, whereas under the multi-server assumption all five servers were correctly predicted to fail. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose cause is hard to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using these results.
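The attention-weighted aggregation described above (giving more weight to servers with greater impact on a failure) can be sketched as a softmax-weighted sum of per-server state vectors. The state encodings and scores below are illustrative, not learned parameters from the paper:

```python
import math

def softmax(xs):
    """Numerically stable softmax: weights are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(server_states, scores):
    """Weight per-server state vectors by attention scores and sum them,
    the aggregation step of an attention layer: servers judged more
    influential on a failure contribute more to the pooled state."""
    weights = softmax(scores)
    dim = len(server_states[0])
    return [sum(w * s[i] for w, s in zip(weights, server_states))
            for i in range(dim)]

states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # hypothetical server encodings
scores = [2.0, 0.1, 0.1]                        # server 1 judged most influential
pooled = attend(states, scores)
```

In the full model the scores themselves would be learned (as in a Hierarchical Attention Network) rather than fixed, and the pooled vector would feed the downstream failure classifier.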