• Title/Summary/Keyword: Performance Accuracy


Determination of Heavy Metal Concentration in Herbal Medicines by GF-AAS and Automated Mercury Analyzer

  • Kim, Sang-A;Kim, Young-Jun
    • Journal of Food Hygiene and Safety
    • /
    • v.36 no.4
    • /
    • pp.281-288
    • /
    • 2021
  • This study was conducted to analyze and compare the concentrations of heavy metals in 430 products of 20 types of herbal medicines available on the domestic market in Korea by graphite furnace atomic absorption spectrometry (GF-AAS) and an automated mercury analyzer. The accuracy for lead (Pb), arsenic (As), cadmium (Cd), and mercury (Hg) was in the range of 92.67-102.56%, and the precision was 0.21-6.00% relative standard deviation (RSD), both in compliance with the Codex acceptable range. Furthermore, the Food Analysis Performance Assessment Scheme (FAPAS) quality control (QC) material showed a recovery range of 96.7-102.0% and an RSD of 0.33-4.93%. The average contents (µg/kg) of Pb, As, Cd, and Hg in the herbal medicines were 254.9 (not detected (N.D.)-2,515.2), 171.0 (N.D.-2,465.2), 99.2 (N.D.-797.1), and 6.0 (N.D.-83.6), respectively. Based on these quantitative results, the heavy metal contents of the 20 types of herbal medicines distributed in Korea are within the acceptable range according to the standards issued by the Ministry of Food and Drug Safety (MFDS). The Pb, As, Cd, and Hg contents were investigated at the packaging stage just before distribution, with the herbal product manufacturers serving as the QC reference, to determine the actual levels of residual heavy metals in herbal medicines. These results may contribute to monitoring the QC of herbal medicines distributed in Korea and could provide basic data for supplying safe herbal medicines to the public.
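The accuracy (mean recovery) and precision (RSD%) figures quoted above follow standard analytical-chemistry definitions; as a minimal sketch in Python, using purely hypothetical replicate measurements rather than the paper's data:

```python
from statistics import mean, stdev

def recovery_percent(measured, spiked_true):
    """Accuracy expressed as mean recovery: mean measured / true value * 100."""
    return mean(measured) / spiked_true * 100

def rsd_percent(measured):
    """Precision expressed as relative standard deviation (RSD%)."""
    return stdev(measured) / mean(measured) * 100

# Hypothetical replicate readings (ug/kg) of a sample spiked with 100 ug/kg Pb
replicates = [98.2, 101.5, 99.7, 100.9, 97.8]

print(f"recovery {recovery_percent(replicates, 100.0):.2f}%, "
      f"RSD {rsd_percent(replicates):.2f}%")
```

Recoveries inside roughly 90-110% and single-digit RSDs, as reported above, are what the Codex criteria for this concentration range require.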

Risk-Scoring System for Prediction of Non-Curative Endoscopic Submucosal Dissection Requiring Additional Gastrectomy in Patients with Early Gastric Cancer

  • Kim, Tae-Se;Min, Byung-Hoon;Kim, Kyoung-Mee;Yoo, Heejin;Kim, Kyunga;Min, Yang Won;Lee, Hyuk;Rhee, Poong-Lyul;Kim, Jae J.;Lee, Jun Haeng
    • Journal of Gastric Cancer
    • /
    • v.21 no.4
    • /
    • pp.368-378
    • /
    • 2021
  • Purpose: When patients with early gastric cancer (EGC) undergo non-curative endoscopic submucosal dissection requiring gastrectomy (NC-ESD-RG), additional medical resources and expenses are required for surgery. To reduce this burden, a predictive model for NC-ESD-RG is required. Materials and Methods: Data from 2,997 patients undergoing ESD for 3,127 forceps-biopsy-proven differentiated-type EGCs (2,345 and 782 in the training and validation sets, respectively) were reviewed. Using the training set, stepwise logistic regression analysis determined the independent predictors of NC-ESD-RG (NC-ESD other than cases with lateral resection margin involvement or piecemeal resection as the only non-curative factor). Using these predictors, a risk-scoring system for predicting NC-ESD-RG was developed, and its performance was examined internally with the validation set. Results: The rate of NC-ESD-RG was 17.3%. Independent pre-ESD predictors of NC-ESD-RG included moderately differentiated or papillary EGC, large tumor size, proximal tumor location, lesion at the greater curvature, elevated or depressed morphology, and presence of ulcers. A risk score was assigned to each predictor. The area under the receiver operating characteristic curve for predicting NC-ESD-RG was 0.672 in both the training and validation sets. A risk score of 5 points was the optimal cut-off value for predicting NC-ESD-RG, with an overall accuracy of 72.7%. As the total risk score increased, the predicted risk of NC-ESD-RG rose from 3.8% to 72.6%. Conclusions: We developed and validated a risk-scoring system for predicting NC-ESD-RG based on pre-ESD variables. It can facilitate informed consent and decision-making in preoperative treatment selection between ESD and surgery for patients with EGC.
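A risk-scoring system of this kind reduces to summing points for whichever predictors are present and comparing the total with the cut-off (5 points in the abstract). The point values below are hypothetical, since the abstract does not give the actual weights:

```python
# Hypothetical point values for the six pre-ESD predictors named in the
# abstract; the paper's actual scores are not stated there.
RISK_POINTS = {
    "moderately_differentiated_or_papillary": 2,
    "large_tumor_size": 2,
    "proximal_location": 1,
    "greater_curvature": 1,
    "elevated_or_depressed_morphology": 1,
    "ulcer_present": 2,
}

CUTOFF = 5  # the abstract reports 5 points as the optimal cut-off

def predict_nc_esd_rg(findings):
    """Sum the risk points of the present findings; flag if >= cut-off."""
    score = sum(RISK_POINTS[f] for f in findings)
    return score, score >= CUTOFF

score, high_risk = predict_nc_esd_rg(
    ["large_tumor_size", "proximal_location", "ulcer_present"]
)
print(score, high_risk)
```

Such additive scores are typically derived by rounding the logistic regression coefficients, which is why they can be tallied at the bedside without a computer.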

Determination of Sodium Alginate in Processed Food Products Distributed in Korea

  • Yang, Hyo-Jin;Seo, Eunbin;Yun, Choong-In;Kim, Young-Jun
    • Journal of Food Hygiene and Safety
    • /
    • v.36 no.6
    • /
    • pp.474-480
    • /
    • 2021
  • Sodium alginate is the sodium salt of alginic acid, commonly used as a food additive for its stabilizing, thickening, and emulsifying properties. Because existing quantitative methods require a complex pretreatment process and a long analysis time, a relatively simple and broadly applicable analysis method for sodium alginate was studied. For the instrumentation, an HPLC-UVD system with a Unison US-Phenyl column was used. For the pretreatment, extraction was carried out on a shaking apparatus at 150 rpm for 180 minutes at room temperature. The calibration curve prepared from standard sodium alginate solutions at five concentration levels showed an average linearity (R²) of 0.9999. The LOD and LOQ were 3.96 mg/kg and 12.0 mg/kg, respectively. Furthermore, the average intra-day and inter-day accuracies (%) and precisions (RSD%) were 98.47-103.74% and 1.69-3.08% for seaweed jelly noodle samples and 99.95-105.76% and 0.59-3.63% for sherbet samples, respectively. The relative uncertainty of 1.5-7.9% was appropriate for the CODEX standard. To evaluate the applicability of the developed method, the sodium alginate concentrations of 103 products were quantified; the detection rate was highest in starch vermicelli and instant fried noodles, followed by sugar-processed products.
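The abstract reports R², LOD, and LOQ without describing their derivation; a common convention (e.g. ICH: LOD = 3.3σ/S and LOQ = 10σ/S, with σ the blank standard deviation and S the calibration slope) can be sketched as follows, using hypothetical calibration data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns slope, intercept, R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical 5-point calibration: concentration (mg/kg) vs. detector response
conc = [10, 25, 50, 100, 200]
resp = [102, 251, 503, 998, 2001]
slope, intercept, r2 = linear_fit(conc, resp)

sigma_blank = 12.0  # hypothetical standard deviation of the blank response
lod = 3.3 * sigma_blank / slope
loq = 10 * sigma_blank / slope
print(f"R2={r2:.4f}, LOD={lod:.2f} mg/kg, LOQ={loq:.2f} mg/kg")
```

Both σ and the calibration points here are illustrative; the point is only how R², LOD, and LOQ relate to the fitted line.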

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.263-278
    • /
    • 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, the weight-measuring component, which is important in automatic feeding, can easily be damaged and broken by pet behavior when a scale is used. The 3D camera method has the disadvantage of cost, and the 2D camera method has relatively poor accuracy compared to the 3D camera method. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight using only a 2D camera. Various convolutional neural networks were tested, and among them, the ResNet101-based model showed the best performance: a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which could be commercially viable in technical and financial terms. The results of this study can help practitioners predict the weight of a standardized object such as feed from a simple 2D image.
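The two error figures reported (3.06 g and 3.40%) are the standard mean absolute error and mean absolute percentage error; a minimal illustration with hypothetical feed weights:

```python
def mae(pred, true):
    """Mean absolute error, in the unit of the target (grams here)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def mape(pred, true):
    """Mean absolute percentage error (%)."""
    return 100 * sum(abs(p - t) / t for p, t in zip(pred, true)) / len(true)

# Hypothetical feed weights (g): model prediction vs. scale ground truth
predicted = [101.5, 48.2, 75.9, 150.3]
actual = [100.0, 50.0, 78.0, 148.0]
print(f"MAE={mae(predicted, actual):.2f} g, MAPE={mape(predicted, actual):.2f}%")
```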

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.101-125
    • /
    • 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomalies, contextual anomalies, which require an understanding of the overall situation, are particularly difficult to detect. In general, anomaly detection in image data is performed using a model pre-trained on large datasets. However, since such pre-trained models are built around object classification, their applicability to anomaly detection that must understand the complex situations created by various objects is limited. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning to understand not only the objects themselves but also the complicated situations they create. Specifically, the proposed methodology transfers knowledge from a pre-trained model that has learned object classification on ImageNet data to an image captioning model, and uses captions that describe the situation represented by each image. Afterwards, the weights that capture the situational characteristics learned from images and captions are extracted, and fine-tuning is performed to generate the anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images, and the results showed that the proposed methodology was superior in anomaly detection accuracy and F1-score to the conventional pre-trained model.
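The anomaly detection accuracy and F1-score used above follow the usual binary-classification definitions; a small self-contained sketch with hypothetical labels (1 = abnormal situation):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = anomaly)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels for 10 situational images
truth = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
preds = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
p, r, f = precision_recall_f1(truth, preds)
print(f"precision={p:.2f}, recall={r:.2f}, F1={f:.2f}")
```

F1 is preferred over plain accuracy here because abnormal situations are usually the minority class.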

A Study on Biomass Estimation Technique of Invertebrate Grazers Using Multi-object Tracking Model Based on Deep Learning (딥러닝 기반 다중 객체 추적 모델을 활용한 조식성 무척추동물 현존량 추정 기법 연구)

  • Bak, Suho;Kim, Heung-Min;Lee, Heeone;Han, Jeong-Ik;Kim, Tak-Young;Lim, Jae-Young;Jang, Seon Woong
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.3
    • /
    • pp.237-250
    • /
    • 2022
  • In this study, we propose a method to estimate the biomass of invertebrate grazers from videos taken with underwater drones by using a multi-object tracking model based on deep learning. To detect invertebrate grazers by class, we used YOLOv5 (You Only Look Once version 5), and for biomass estimation we used DeepSORT (Deep Simple Online and Realtime Tracking). The performance of each model was evaluated on a workstation with a GPU accelerator. YOLOv5 achieved a mean Average Precision (mAP) of 0.9 or higher on average, and we confirmed a rate of about 59 fps at 4K resolution when using the YOLOv5s model with the DeepSORT algorithm. When the proposed method was applied in the field, biomass tended to be overestimated by about 28%, but the error was low compared to estimation using an object detection model alone. A follow-up study is needed to improve accuracy in cases where frame images continuously go out of focus or underwater drones turn rapidly. Should these issues be resolved, the method can be utilized to produce decision-support data for invertebrate grazer control and monitoring.
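If the roughly 28% overestimation observed in the field is treated as a stable bias, a simple correction can be applied to the tracked counts before converting them to biomass. The count, mean weight, and the use of a fixed correction factor below are all illustrative assumptions, not the paper's procedure:

```python
def corrected_biomass(tracked_count, mean_individual_weight_g,
                      overestimation_rate=0.28):
    """Scale a tracked count down by an empirically observed overestimation
    rate, then convert to biomass via a mean individual weight.
    The ~28% rate and the weights here are illustrative only."""
    corrected_count = tracked_count / (1 + overestimation_rate)
    return corrected_count * mean_individual_weight_g

# Hypothetical: 640 sea urchins tracked along a transect, 55 g mean weight
print(f"{corrected_biomass(640, 55):.0f} g")
```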

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning (비전센서 및 딥러닝을 이용한 항만구조물 방충설비 세분화 시스템 개발)

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.26 no.2
    • /
    • pp.28-36
    • /
    • 2022
  • As port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and a deep learning method is proposed in this study. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by integrating a receptive field block convolution module, inspired by the eccentricity mechanism of the human visual system, into the DenseNet format. To train the network, images of various fender types such as BP, V, cell, cylindrical, and tire types were collected, and the images were augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified with the collected fender images, and the results showed that the system segments precisely in real time, with a high IoU (84%) and F1 score (90%) in comparison with a conventional segmentation model, VGG16 with U-Net. The trained network was then applied to real images taken at a port in the Republic of Korea, and it was found that the fenders are segmented with high accuracy even with a small dataset.
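The IoU metric cited (84%) compares a predicted segmentation mask with the ground truth; a minimal sketch on tiny hypothetical 4x4 binary masks:

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks (flat lists of 0/1)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

# Hypothetical 4x4 prediction vs. ground truth for a fender region
pred  = [1, 1, 0, 0,  1, 1, 0, 0,  0, 1, 1, 0,  0, 0, 0, 0]
truth = [1, 1, 0, 0,  1, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0]
print(f"IoU={iou(pred, truth):.2f}")
```

In practice the same formula is applied to full-resolution masks; 84% means the overlap covers 84% of the combined predicted and true fender area.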

A LiDAR-based Visual Sensor System for Automatic Mooring of a Ship (선박 자동계류를 위한 LiDAR기반 시각센서 시스템 개발)

  • Kim, Jin-Man;Nam, Taek-Kun;Kim, Heon-Hui
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.6
    • /
    • pp.1036-1043
    • /
    • 2022
  • This paper discusses the development of a visual sensor that can be installed in an automatic mooring device to detect the berthing condition of a vessel. Despite control of ship speed and confirmation of position to prevent accidents during berthing, ship collisions occur at piers every year, causing great economic and environmental damage. Therefore, it is important to develop a visual system that can quickly obtain information on the speed and location of a vessel to ensure its safety while berthing. In this study, a visual sensor was developed to observe a ship through images during berthing and to properly check the ship's status under the surrounding environmental conditions. To establish the requirements of the visual sensor, the characteristics of existing sensors were analyzed in terms of the information they provide: detection range, real-time capability, accuracy, and precision. Based on this analysis, we developed a 3D visual module that can acquire information on objects in real time, through conceptual designs of a LiDAR (Light Detection and Ranging)-type 3D visual system, a driving mechanism, and a position and force controller for the motion tilting system. Finally, a performance evaluation of the control system and a scan speed test were executed, and the effectiveness of the developed system was confirmed through experiments.
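One basic quantity such a sensor must supply is the vessel's approach speed, which can be estimated from successive range readings by a simple finite difference. The sampling interval and range values below are hypothetical, not from the paper:

```python
def approach_speed(ranges_m, dt_s):
    """Estimate berthing approach speed (m/s) from consecutive LiDAR range
    readings taken dt_s seconds apart, averaging simple finite differences.
    Positive values mean the hull is closing on the sensor."""
    diffs = [(r0 - r1) / dt_s for r0, r1 in zip(ranges_m, ranges_m[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical hull ranges (m) sampled every 0.5 s during berthing
ranges = [12.00, 11.90, 11.81, 11.71, 11.62]
print(f"{approach_speed(ranges, 0.5):.2f} m/s")
```

A real controller would filter the noisy range stream (e.g. with a moving average or Kalman filter) before differencing, but the averaging above already illustrates the idea.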

Detection of Marine Oil Spills from PlanetScope Images Using DeepLabV3+ Model (DeepLabV3+ 모델을 이용한 PlanetScope 영상의 해상 유출유 탐지)

  • Kang, Jonggu;Youn, Youjeong;Kim, Geunah;Park, Ganghyun;Choi, Soyeon;Yang, Chan-Su;Yi, Jonghyuk;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1623-1631
    • /
    • 2022
  • Since oil spills can be a significant threat to the marine ecosystem, it is necessary to obtain information on the current contamination status quickly to minimize the damage. Satellite-based detection of marine oil spills has the advantage of broad spatiotemporal coverage because it can monitor a much wider area than aircraft. With the recent development of computer vision and deep learning, marine oil spill detection can also be facilitated by deep learning. Unlike existing studies based on Synthetic Aperture Radar (SAR) images, we conducted deep learning modeling using PlanetScope optical satellite images. A blind test of the DeepLabV3+ model for oil spill detection showed an accuracy of 0.885, a precision of 0.888, a recall of 0.886, an F1-score of 0.883, and a Mean Intersection over Union (mIoU) of 0.793.
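The reported mIoU of 0.793 is the per-class IoU averaged over the classes (here oil and background); a sketch computing it from a hypothetical pixel confusion matrix:

```python
def miou_from_confusion(conf):
    """conf[i][j] = number of pixels of true class i predicted as class j.
    Per-class IoU = TP / (TP + FP + FN); mIoU is the mean over classes."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp
        fp = sum(conf[r][c] for r in range(n)) - tp
        ious.append(tp / (tp + fp + fn))
    return sum(ious) / n

# Hypothetical 2-class (background, oil) pixel confusion matrix
conf = [[900, 50],
        [40, 210]]
print(f"mIoU={miou_from_confusion(conf):.3f}")
```

Because mIoU weights each class equally, a rare class like oil pulls the score down far more than overall pixel accuracy would suggest, which is why it is reported separately from accuracy above.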

Detection of Urban Trees Using YOLOv5 from Aerial Images (항공영상으로부터 YOLOv5를 이용한 도심수목 탐지)

  • Park, Che-Won;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1633-1641
    • /
    • 2022
  • Urban population concentration and indiscriminate development are causing various environmental problems, such as air pollution and the heat island phenomenon, and are aggravating the damage caused by natural disasters. Urban trees have been proposed as a solution to these urban problems and indeed play an important role, for example by improving the urban environment. Accordingly, quantitative measurement and analysis of individual urban trees are required to understand the effect of trees on the urban environment. However, the complexity and diversity of urban trees lower the accuracy of single-tree detection. Therefore, we conducted a study to effectively detect trees in Dongjak-gu using high-resolution aerial images, which enable effective detection of tree objects, and You Only Look Once version 5 (YOLOv5), which has shown excellent performance in object detection. Labeling guidelines for constructing a tree AI training dataset were produced, and box annotation of Dongjak-gu trees was performed based on them. We tested YOLOv5 models of various scales on the constructed dataset and adopted the optimal model to perform more efficient urban tree detection, achieving a significant mean Average Precision (mAP) of 0.663.
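The mAP of 0.663 averages per-class Average Precision; for a single class, AP can be computed from confidence-ranked detections, as in this sketch with hypothetical detections:

```python
def average_precision(is_tp, num_gt):
    """AP for one class from detections sorted by descending confidence.
    is_tp[i] is True if the i-th ranked detection matches a ground-truth box
    (e.g. by an IoU threshold). Accumulates precision at each rank where
    recall increases, then normalizes by the ground-truth count."""
    tp = 0
    ap = 0.0
    for rank, hit in enumerate(is_tp, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at this recall step
    return ap / num_gt

# Hypothetical ranked 'tree' detections; 5 ground-truth trees in the scene
ranked_hits = [True, True, False, True, False, True]
print(f"AP={average_precision(ranked_hits, 5):.3f}")
```

With several classes, mAP is simply the mean of these per-class AP values; with a single class, as in tree detection here, mAP and AP coincide.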