• Title/Summary/Keyword: mIoU

Search Results: 65

Development of surface detection model for dried semi-finished product of Kimbukak using deep learning (딥러닝 기반 김부각 건조 반제품 표면 검출 모델 개발)

  • Tae Hyong Kim;Ki Hyun Kwon;Ah-Na Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.205-212
    • /
    • 2024
  • This study developed a deep learning model that distinguishes the front surface (with garnish) from the back surface (without garnish) of the dried semi-finished product (dried bukak), for a screening operation performed before a robot's vacuum gripper transfers the dried bukak to the oil heater. For model training and verification, RGB images of the front and back surfaces of 400 dried bukak were acquired and preprocessed. YOLO-v5 was used as the base structure of the deep learning model. Area and surface-information labeling and data augmentation techniques were applied to the acquired images. mAP, mIoU, accuracy, recall, precision, and F1-score were selected to evaluate the performance of the developed YOLO-v5-based surface detection model. The mAP and mIoU were 0.98 and 0.96 on the front surface and 1.00 and 0.95 on the back surface, respectively. Binary classification of the front and back classes achieved an average accuracy of 98.5%, recall of 98.3%, precision of 98.6%, and F1-score of 98.4%. These results show that the developed model can classify the surface of dried bukak from RGB images and can be used to develop a robot-automated surface detection process for dried bukak before deep frying.
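The classification scores reported in this abstract (accuracy, recall, precision, F1-score) follow from the standard confusion counts. A minimal sketch, with hypothetical front/back labels rather than the study's data:

```python
# Binary front/back classification metrics from confusion counts.
# The labels below are toy data, not results from the paper.

def binary_metrics(y_true, y_pred, positive="front"):
    """Return accuracy, recall, precision and F1-score for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, recall, precision, f1

y_true = ["front", "front", "back", "back", "front", "back"]
y_pred = ["front", "back",  "back", "back", "front", "back"]
acc, rec, prec, f1 = binary_metrics(y_true, y_pred)
```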

Determination of Iodide in spent PWR fuels (경수로 사용 후 핵연료 내 요오드 정량)

  • Choi, Ke Chon;Lee, Chang Heon;Kim, Won Ho
    • Analytical Science and Technology
    • /
    • v.16 no.2
    • /
    • pp.110-116
    • /
    • 2003
  • A study was done on the separation of iodide from spent pressurized water reactor (PWR) fuels and its quantitative determination using ion chromatography. Spent PWR fuels were dissolved in a mixed acid of nitric and hydrochloric acids (80:20 mol%), which oxidizes iodide to iodate and thus prevents it from being vaporized. After reducing ${IO_3}^-$ to $I_2$ in 2.5 M $HNO_3$ with $NH_2OH{\cdot}HCl$, iodine was selectively separated from actinides and all other fission products with carbon tetrachloride and back-extracted with 0.1 M $NaHSO_3$. The recovered iodide was determined using an ion chromatograph whose column was installed in a glove box for the analysis of radioactive materials. In practice, spent PWR fuel with a burnup of 42,000~44,000 MWd/MtU was analyzed, and the measured quantity, $324.5{\sim}343.6{\mu}g/g$, agreed with the value calculated by the burnup code ORIGEN2 within a deviation of -8.3~-0.5%.

Searching Spectrum Band of Crop Area Based on Deep Learning Using Hyper-spectral Image (초분광 영상을 이용한 딥러닝 기반의 작물 영역 스펙트럼 밴드 탐색)

  • Gwanghyeong Lee;Hyunjung Myung;Deepak Ghimire;Donghoon Kim;Sewoon Cho;Sunghwan Jeong;Bvouneiun Kim
    • Smart Media Journal
    • /
    • v.13 no.8
    • /
    • pp.39-48
    • /
    • 2024
  • Recently, various studies have emerged that utilize hyperspectral imaging for crop growth analysis and early disease diagnosis. However, handling the large number of spectral bands and finding the optimal bands for the crop area remain difficult problems. In this paper, we propose a deep-learning-based method for searching the optimal spectral band of the crop area in hyperspectral images. The proposed method extracts the RGB image within the hyperspectral image and segments the foreground and background areas with a Vision Transformer-based SegFormer. The segmentation results are projected onto each band of the gray-scale-converted hyperspectral image, and the optimal spectral band of the crop area is determined through pixel comparison of the projected foreground and background areas. The proposed method achieved foreground/background segmentation performance with an average accuracy of 98.47% and an mIoU of 96.48%. In addition, it was confirmed that, compared with the mRMR method, the proposed method converges to the NIR region, which is closely related to the crop area.
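The band-search step described above, comparing projected foreground and background pixels per band, can be sketched as a simple separation score. The tiny hyperspectral "cube" and mask below are hypothetical, and the |mean difference| criterion is an illustrative stand-in for the paper's pixel-comparison rule:

```python
# Pick the spectral band whose foreground/background mean difference is
# largest. The 3-band cube and the crop mask are toy data.

def best_band(cube, mask):
    """cube: list of 2-D bands; mask: 2-D list, 1 = crop, 0 = background.
    Returns the index of the band with the largest |mean(fg) - mean(bg)|."""
    h, w = len(mask), len(mask[0])
    scores = []
    for band in cube:
        fg = [band[i][j] for i in range(h) for j in range(w) if mask[i][j] == 1]
        bg = [band[i][j] for i in range(h) for j in range(w) if mask[i][j] == 0]
        scores.append(abs(sum(fg) / len(fg) - sum(bg) / len(bg)))
    return max(range(len(scores)), key=scores.__getitem__)

mask = [[1, 1, 0],
        [0, 0, 0]]
cube = [
    [[5, 5, 5], [5, 5, 5]],   # band 0: no separation
    [[9, 9, 1], [1, 1, 1]],   # band 1: strong separation
    [[6, 6, 4], [4, 4, 4]],   # band 2: weak separation
]
idx = best_band(cube, mask)
```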

A Study on Countermeasures of Convergence for Big Data and Security Threats to Attack DRDoS in U-Healthcare Device (U-Healthcare 기기에서 DRDoS공격 보안위협과 Big Data를 융합한 대응방안 연구)

  • Hur, Yun-A;Lee, Keun-Ho
    • Journal of the Korea Convergence Society
    • /
    • v.6 no.4
    • /
    • pp.243-248
    • /
    • 2015
  • U-Healthcare is a convergence service of medical care and IT that enables a patient's health to be examined, managed, and maintained any time and any place. In U-Healthcare services, a patient's medical checkup results or emergency data are transmitted to a hospital server over wireless communication. When a malicious attacker launches a DRDoS (Distributed Reflection DoS) attack against U-Healthcare devices or a base station (BS), various kinds of damage occur, such as the contextual information of urgent patients failing to reach the hospital server. To deal with this problem, this study presents a DRDoS attack scenario and countermeasures against DRDoS, converged with Big Data techniques capable of processing large volumes of packets. When an attacker attacks U-Healthcare devices or the BS, incoming traffic is checked against an interconnected database, and the attack is blocked if a match is found. This study analyzes the attack methods that can occur against U-Healthcare devices and base stations used in remote medical services and suggests countermeasures against the security threat using Big Data.
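The countermeasure sketched in this abstract, matching incoming traffic against a database of flagged sources and dropping coincident packets, can be illustrated as a filter loop. All addresses, field names, and packets here are hypothetical, not from the paper:

```python
# Sketch of the "check packets against a DB" countermeasure: source
# addresses previously flagged by traffic analysis are blocked.
# Addresses and payloads are invented for illustration.

blocked_sources = {"203.0.113.7", "198.51.100.9"}  # flagged attacker/reflector IPs

def filter_packets(packets, blocklist):
    """Split packets into (accepted, dropped) by source-address matching."""
    accepted, dropped = [], []
    for pkt in packets:
        (dropped if pkt["src"] in blocklist else accepted).append(pkt)
    return accepted, dropped

packets = [
    {"src": "192.0.2.1", "payload": "vitals"},
    {"src": "203.0.113.7", "payload": "reflection flood"},
    {"src": "192.0.2.2", "payload": "vitals"},
]
accepted, dropped = filter_packets(packets, blocked_sources)
```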

Validation of Semantic Segmentation Dataset for Autonomous Driving (승용자율주행을 위한 의미론적 분할 데이터셋 유효성 검증)

  • Gwak, Seoku;Na, Hoyong;Kim, Kyeong Su;Song, EunJi;Jeong, Seyoung;Lee, Kyewon;Jeong, Jihyun;Hwang, Sung-Ho
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.104-109
    • /
    • 2022
  • For autonomous driving research using AI, datasets collected from road environments play an important role. In other countries, various datasets such as CityScapes, A2D2, and BDD have already been released, but datasets suitable for the domestic road environment are still lacking. This paper analyzed and verified a dataset reflecting the Korean driving environment. To verify the training dataset, class imbalance was checked by comparing the number of pixels and instances per class. The constructed dataset was then compared against the similar A2D2 dataset by training the same deep learning model, ConvNeXt, on both and comparing per-class IoU and mIoU. The results confirm that the collected dataset, which reflects the driving environment of Korea, is suitable for training.
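The class-imbalance check mentioned above amounts to counting labeled pixels (or instances) per class and comparing their proportions. A minimal sketch with a toy label map, not the actual dataset:

```python
# Count pixels per class id in a 2-D semantic-segmentation label map,
# then derive class proportions to reveal imbalance. Toy data only.
from collections import Counter

def pixel_counts(label_map):
    """Count pixels per class id in a 2-D label map."""
    counts = Counter()
    for row in label_map:
        counts.update(row)
    return counts

label_map = [
    [0, 0, 0, 1],
    [0, 0, 2, 1],
    [0, 0, 2, 2],
]
counts = pixel_counts(label_map)
total = sum(counts.values())
ratios = {c: n / total for c, n in counts.items()}  # e.g. class 0 dominates
```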

Application of Geo-Segment Anything Model (SAM) Scheme to Water Body Segmentation: An Experiment Study Using CAS500-1 Images (수체 추출을 위한 Geo-SAM 기법의 응용: 국토위성영상 적용 실험)

  • Hayoung Lee;Kwangseob Kim;Kiwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.343-350
    • /
    • 2024
  • Since the release of Meta's Segment Anything Model (SAM), a large-scale vision transformer model with rapid image segmentation capabilities, several studies have applied this technology in various fields. In this study, we investigated the applicability of SAM to water body detection and extraction using the QGIS Geo-SAM plugin, which enables the use of SAM with satellite imagery. The experimental data consisted of Compact Advanced Satellite 500 (CAS500)-1 images. The results of applying SAM to these data were compared with manually digitized water objects, OpenStreetMap (OSM) data, and water body data from the National Geographic Information Institute (NGII) hydrological digital map. The mean Intersection over Union (mIoU) calculated over all features extracted by SAM against these three comparison datasets was 0.7490, 0.5905, and 0.4921, respectively. For features that appeared or were extracted in all datasets, the results were 0.9189, 0.8779, and 0.7715, respectively. Analysis of the spatial consistency between the SAM results and the comparison data showed that SAM has limitations in detecting small-scale or poorly defined streams but provides meaningful segmentation results for water body classification.
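The mIoU used above to score SAM output against each reference dataset is the per-feature IoU averaged over features. A minimal sketch with toy binary masks in place of the extracted water bodies:

```python
# Mean Intersection over Union (mIoU) between extracted and reference
# binary masks, averaged over features. The masks are toy data.

def iou(a, b):
    """IoU of two same-sized binary masks given as flat 0/1 lists."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    union = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return inter / union if union else 1.0

pairs = [
    ([1, 1, 1, 0], [1, 1, 0, 0]),  # IoU = 2/3
    ([1, 0, 0, 0], [1, 0, 0, 0]),  # IoU = 1
]
miou = sum(iou(a, b) for a, b in pairs) / len(pairs)
```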

A Comparative Performance Analysis of Segmentation Models for Lumbar Key-points Extraction (요추 특징점 추출을 위한 영역 분할 모델의 성능 비교 분석)

  • Seunghee Yoo;Minho Choi ;Jun-Su Jang
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.5
    • /
    • pp.354-361
    • /
    • 2023
  • Most spinal diseases are diagnosed based on the subjective judgment of a specialist, so numerous studies have sought objectivity by automating the diagnosis process using deep learning. In this paper, we propose a method that combines segmentation and feature extraction, two techniques frequently used for diagnosing spinal diseases. Four models, U-Net, U-Net++, DeepLabv3+, and M-Net, were trained and compared using 1,000 X-ray images, and key-points were derived using the Douglas-Peucker algorithm. For evaluation, the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, recall, and area under the precision-recall curve were used, and U-Net++ showed the best performance in all metrics with an average DSC of 0.9724. For the average Euclidean distance between estimated key-points and ground truth, U-Net was the best, followed by U-Net++; however, the difference in average distance was about 0.1 pixels, which is not significant. The results suggest that key-points can be extracted based on segmentation and used to diagnose various spinal diseases, including spondylolisthesis, accurately and with consistent criteria.
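The Douglas-Peucker algorithm named above keeps only the points of a contour that deviate more than a tolerance from the chord between endpoints, which is how key-points can be reduced from a segmented boundary. A self-contained sketch on a toy polyline (not the paper's lumbar contours):

```python
# Douglas-Peucker polyline simplification: recursively keep the point
# farthest from the endpoint chord while it exceeds eps. Toy data only.
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    """Simplify a polyline, keeping points farther than eps from the chord."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: imax + 1], eps)
    right = douglas_peucker(points[imax:], eps)
    return left[:-1] + right  # drop duplicated split point

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
keypoints = douglas_peucker(contour, eps=1.0)
```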

Attention Aware Residual U-Net for Biometrics Segmentation (생체 인식 시스템을 위한 주의 인식 잔차 분할)

  • Htet, Aung Si Min;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.300-302
    • /
    • 2022
  • Palm vein identification has attracted attention due to its distinct characteristics and excellent recognition accuracy. However, many contactless palm vein identification systems suffer from low-quality palm images, resulting in degraded recognition accuracy. This paper proposes the use of a U-Net architecture to correctly segment the vascular pattern from palm images. An attention gate mechanism and residual blocks are also utilized to effectively learn the crucial features of the segmentation task. Experiments were conducted on the CASIA dataset. The Hessian-based Jerman filtering method was applied to label the palm vein patterns in the original images, and the network was then trained to segment the palm vein features from the background noise. The proposed method obtained an IoU coefficient of 96.24 and a Dice coefficient of 98.09.
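The two scores reported here, IoU and Dice, are related overlap measures on the predicted versus labeled vein mask (Dice = 2·IoU / (1 + IoU)). A minimal sketch on toy masks, not the CASIA data:

```python
# Dice and IoU coefficients for a predicted vs. ground-truth binary mask.
# The flat 0/1 masks below are toy data.

def dice_and_iou(pred, truth):
    """Return (dice, iou) for two flat 0/1 masks of equal length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    union = psum + tsum - inter
    dice = 2 * inter / (psum + tsum) if psum + tsum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 1, 0]
dice, iou = dice_and_iou(pred, truth)
```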

Waterbody Detection for the Reservoirs in South Korea Using Swin Transformer and Sentinel-1 Images (Swin Transformer와 Sentinel-1 영상을 이용한 우리나라 저수지의 수체 탐지)

  • Soyeon Choi;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Yungyo Im;Youngmin Seo;Wanyub Kim;Minha Choi;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.949-965
    • /
    • 2023
  • In this study, we propose a method to monitor the surface area of agricultural reservoirs in South Korea using Sentinel-1 synthetic aperture radar images and the deep learning model Swin Transformer. Utilizing the Google Earth Engine platform, datasets from 2017 to 2021 were constructed for seven agricultural reservoirs, categorized into 700 K-ton, 900 K-ton, and 1.5 M-ton capacities. For four of the reservoirs, a total of 1,283 images were used for model training through shuffling and 5-fold cross-validation. Upon evaluation, the Swin Transformer Large model, configured with a window size of 12, demonstrated superior semantic segmentation performance, showing an average accuracy of 99.54% and a mean intersection over union (mIoU) of 95.15% across all folds. When the best-performing model was applied to the datasets of the remaining three reservoirs for validation, it achieved an accuracy of over 99% and an mIoU of over 94% for all reservoirs. These results indicate that the Swin Transformer model can effectively monitor the surface area of agricultural reservoirs in South Korea.
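The shuffling and 5-fold cross-validation setup above partitions the 1,283 training images so each fold serves once as validation. A minimal index-level sketch (the fold assignment and seed are illustrative, not the study's):

```python
# Shuffle image indices and yield (train, val) splits for k-fold
# cross-validation. n = 1283 matches the abstract; the seed is arbitrary.
import random

def kfold_indices(n, k, seed=0):
    """Yield (train, val) index lists for k folds over shuffled 0..n-1."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_indices(n=1283, k=5))
```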

Classification of Industrial Parks and Quarries Using U-Net from KOMPSAT-3/3A Imagery (KOMPSAT-3/3A 영상으로부터 U-Net을 이용한 산업단지와 채석장 분류)

  • Che-Won Park;Hyung-Sup Jung;Won-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang;Moung-Jin Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1679-1692
    • /
    • 2023
  • South Korea emits a large amount of pollutants as a result of population growth and industrial development and is also severely affected by transboundary air pollution due to its geographical location. As pollutants from both domestic and foreign sources contribute to air pollution in Korea, the locations of air pollutant emission sources are crucial for understanding the movement and distribution of pollutants in the atmosphere and for establishing national-level air pollution management and response strategies. Against this background, this study aims to effectively acquire spatial information on domestic and international air pollutant emission sources, which is essential for analyzing air pollution status, by utilizing high-resolution optical satellite images and deep learning-based image segmentation models. In particular, industrial parks and quarries, which have been evaluated as contributing significantly to transboundary air pollution, were selected as the main research subjects; images of these areas from KOMPSAT-3 and KOMPSAT-3A were collected, preprocessed, and converted into input and label data for model training. Training the U-Net model on these data achieved an overall accuracy of 0.8484 and a mean Intersection over Union (mIoU) of 0.6490, and the predicted maps extracted object boundaries more accurately than the label data created by coarse annotations.