• Title/Summary/Keyword: intersection union test


Research on damage detection and assessment of civil engineering structures based on DeepLabV3+ deep learning model

  • Chengyan Song
    • Structural Engineering and Mechanics
    • /
    • v.91 no.5
    • /
    • pp.443-457
    • /
    • 2024
  • At present, traditional concrete surface inspection methods based on manual visual inspection suffer from high cost and safety risks, while conventional computer vision methods rely on hand-crafted features, are sensitive to environmental changes, and are difficult to generalize. To address these problems, this paper introduces deep learning technology from the field of computer vision to achieve automatic feature extraction of structural damage, with excellent detection speed and strong generalization ability. The main contents of this study are as follows: (1) A method based on the DeepLabV3+ convolutional neural network model is proposed for surface detection of post-earthquake structural damage, including concrete cracks, spalling, and exposed steel bars. Key semantic information is extracted by different backbone networks, and datasets containing various types of surface damage are used for training, testing, and evaluation. Intersection-over-union (IoU) values of 54.4%, 44.2%, and 89.9% on the test set demonstrate the network's capability to accurately identify different types of structural surface damage in pixel-level segmentation, highlighting its effectiveness in varied testing scenarios. (2) A semantic segmentation model based on the DeepLabV3+ convolutional neural network is proposed for the detection and evaluation of post-earthquake structural components. Using a dataset that includes building structural components and their damage degrees for training, testing, and evaluation, semantic segmentation detection accuracies of 98.5% and 56.9% were recorded. To provide a comprehensive assessment that considers both false positives and false negatives, the Mean Intersection over Union (Mean IoU) was employed as the primary evaluation metric. This choice ensures that the network's performance in detecting and evaluating pixel-level damage in post-earthquake structural components is evaluated uniformly across all experiments. By incorporating deep learning technology, this study not only offers an innovative solution for accurately identifying post-earthquake damage in civil engineering structures but also contributes to empirical research in automated detection and evaluation within the field of structural health monitoring.
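
The Mean IoU cited above is typically computed per class from pixel-labeled masks and then averaged. The following is a minimal NumPy sketch of that metric; the class labels (background, crack, spalling, exposed rebar) and array shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean IoU over classes for integer-labeled segmentation masks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

# Hypothetical labels: 0 = background, 1 = crack, 2 = spalling, 3 = exposed rebar
gt   = np.random.randint(0, 4, size=(512, 512))
pred = np.random.randint(0, 4, size=(512, 512))
print(f"Mean IoU: {mean_iou(gt, pred, num_classes=4):.3f}")
```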

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the growing use of computer vision for satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we conducted U-Net cloud detection modeling using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) Cloud Dataset with image data augmentation and carried out 10-fold cross-validation for an objective assessment of the model. The blind test on 1,800 samples of 512 by 512 pixels showed relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
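
For the 10-fold cross-validation mentioned above, scikit-learn's KFold can supply the fold indices; the sketch below uses a stand-in index array for the image chips and omits the actual U-Net training, so the identifiers are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical stand-in indices for the 512 x 512 image chips
chip_ids = np.arange(1800)

kf = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(chip_ids)):
    # Train the U-Net on chip_ids[train_idx] and evaluate on chip_ids[val_idx]
    print(f"fold {fold}: {len(train_idx)} training chips, {len(val_idx)} validation chips")
```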

Detection of Marine Oil Spills from PlanetScope Images Using DeepLabV3+ Model (DeepLabV3+ 모델을 이용한 PlanetScope 영상의 해상 유출유 탐지)

  • Kang, Jonggu;Youn, Youjeong;Kim, Geunah;Park, Ganghyun;Choi, Soyeon;Yang, Chan-Su;Yi, Jonghyuk;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1623-1631
    • /
    • 2022
  • Since oil spills can be a significant threat to the marine ecosystem, it is necessary to obtain information on the current contamination status quickly to minimize the damage. Satellite-based detection of marine oil spills offers broad spatiotemporal coverage because it can monitor a much wider area than aircraft. With the recent development of computer vision and deep learning, marine oil spill detection can also be facilitated by deep learning. Unlike existing studies based on Synthetic Aperture Radar (SAR) images, we conducted deep learning modeling using PlanetScope optical satellite images. The blind test of the DeepLabV3+ model for oil spill detection showed an accuracy of 0.885, a precision of 0.888, a recall of 0.886, an F1-score of 0.883, and a Mean Intersection over Union (mIOU) of 0.793.
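
DeepLabV3+ itself is not bundled with torchvision, but the closely related DeepLabV3 head is; the sketch below shows how a binary oil-versus-water segmentation model of this family might be instantiated in PyTorch. The two-class setup and the RGB input are assumptions for illustration, not the authors' code; PlanetScope imagery has extra spectral bands that would need band selection or a modified stem convolution.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two output classes assumed: background (water) and oil spill
model = deeplabv3_resnet50(weights=None, num_classes=2)

# Forward pass on a dummy RGB tile
x = torch.randn(1, 3, 512, 512)
out = model(x)["out"]            # (1, 2, 512, 512) per-pixel class logits
pred_mask = out.argmax(dim=1)    # (1, 512, 512) predicted segmentation mask
print(pred_mask.shape)
```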

Development and Validation of AI Image Segmentation Model for CT Image-Based Sarcopenia Diagnosis (CT 영상 기반 근감소증 진단을 위한 AI 영상분할 모델 개발 및 검증)

  • Lee Chung-Sub;Lim Dong-Wook;Noh Si-Hyeong;Kim Tae-Hoon;Ko Yousun;Kim Kyung Won;Jeong Chang-Won
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.3
    • /
    • pp.119-126
    • /
    • 2023
  • In Korea, sarcopenia was not yet well enough recognized to be classified as a disease as of 2021, but it is recognized as a social problem in developed countries that have entered an aging society. The diagnosis of sarcopenia follows the international guidelines presented by the European Working Group on Sarcopenia in Older People (EWGSOP) and the Asian Working Group for Sarcopenia (AWGS). Recently, it has been recommended to evaluate muscle function using physical performance tests, walking-speed measurement, and standing tests in addition to absolute muscle mass. As a representative method for measuring muscle mass, body composition analysis using DEXA has been formally adopted in clinical practice, and various studies on measuring muscle mass from abdominal MRI or CT images are being actively conducted. In this paper, we develop an AI image segmentation model based on abdominal CT images, which have a relatively short imaging time, for the diagnosis of sarcopenia and describe its multicenter validation. We developed an artificial intelligence model using U-Net that can automatically segment muscle, subcutaneous fat, and visceral fat after selecting the L3 region from the CT images. To evaluate the performance of the model, internal verification was performed by calculating the intersection over union (IoU) of the segmented regions, and the results of external verification using data from other hospitals are also shown. Based on the verification results, we review the remaining problems and possible improvements.
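
A U-Net, as used here, pairs a downsampling encoder with an upsampling decoder linked by skip connections. The sketch below is a deliberately small PyTorch version for a single-channel CT slice and four output classes (background, muscle, subcutaneous fat, visceral fat); the channel widths and class mapping are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the standard U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=4):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2  = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1  = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b  = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```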

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.221-235
    • /
    • 2022
  • Manual inspection of steel box girders on long-span bridges is time-consuming and labor-intensive, and the quality of inspection relies on the subjective judgment of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed to first detect the existence of cracks using a deep convolutional neural network (CNN) and then segment the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to effectively reduce false positives and false negatives. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack using a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function that combines binary cross entropy loss and Dice loss is introduced to enhance the segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from the different overlapping crack patches and to decide whether a pixel belongs to a crack. Comprehensive experiments have demonstrated that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline reference implemented with the original U-Net.
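
The customized loss described above, binary cross entropy plus Dice loss, is a common combination for thin-structure segmentation. A minimal PyTorch sketch follows; the equal weighting and smoothing constant are assumptions, since the authors' exact formulation is not given here.

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, bce_weight=0.5, eps=1e-6):
    """Binary cross entropy + Dice loss for binary crack masks (target in {0, 1})."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = (2 * intersection + eps) / (probs.sum() + target.sum() + eps)
    return bce_weight * bce + (1 - bce_weight) * (1 - dice)

logits = torch.randn(4, 1, 448, 448)                    # raw network outputs
target = torch.randint(0, 2, (4, 1, 448, 448)).float()  # ground-truth crack masks
print(bce_dice_loss(logits, target).item())
```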

A Study of AI-based Monitoring Techniques for Land-based Debris in Stream (AI기반 하천 부유쓰레기 모니터링 기술 연구)

  • Kyungsu Lee;Haein Yoon;Jonghwa Won;Sang Hwa Jung
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.137-137
    • /
    • 2023
  • Marine debris not only degrades the aesthetic value of coastlines but also causes social and environmental problems such as ecosystem destruction and damage to fisheries from ghost fishing. More than 70% of it originates on land, and unlike overseas cases, where plastics and other waste predominate, marine debris in Korea includes a large amount of vegetation. Given the limitations of existing methods for estimating the amount of marine debris across its diverse forms and the need to make debris collection in rivers and estuaries more efficient, effective measures are required to prevent floating debris from entering the ocean. To improve the efficiency of collecting floating debris intercepted by river barrier facilities before it reaches the ocean and to build sustainable marine-debris data, this study applied AI-based techniques for debris composition analysis (Object Detection) and intercepted-volume analysis (Semantic Segmentation). To collect data resembling real conditions, training data were gathered in various river environments (still-water tank, small stream, steep-slope channel) under experimental conditions covering turbidity (algal bloom, suspended sediment), illumination, debris shape, vegetation content, weather (small stream), and flow velocity (steep-slope channel), with the debris types selected on the basis of marine-debris classification criteria and statistics. Labeling (bounding box, polygon) was carried out according to the learning objective, and the models were progressively refined through transfer learning for each analysis technique in the order of Phase 1 (still-water tank), Phase 2 (small stream), and Phase 3 (steep-slope channel). For composition analysis, YOLO v4 was used with a 9:1 train/test split, training and evaluation were compared using the mAP and loss values at each iteration, and the test-set mAP for each debris type increased as the model was refined across the training phases. For intercepted-volume analysis, U-Net was used with an 8.5:1:0.5 train/test/validation split, and the IoU (Intersection over Union), F1-score, and loss values per epoch were compared; both the qualitative and quantitative evaluations showed the highest performance in Phase 3. In the future, the applicability of this approach is expected to increase through analysis of the various influencing factors in river environments, identification of the key factors, and model refinement via hyperparameter optimization.
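
For the object-detection side (YOLO v4), IoU is computed between bounding boxes rather than pixel masks. A minimal sketch with hypothetical box coordinates in (x1, y1, x2, y2) format, not data from the study:

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical detection vs. ground-truth box for a piece of floating debris
print(box_iou((10, 20, 110, 120), (30, 40, 130, 140)))  # ~0.47
```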


Deep learning-based apical lesion segmentation from panoramic radiographs

  • Il-Seok, Song;Hak-Kyun, Shin;Ju-Hee, Kang;Jo-Eun, Kim;Kyung-Hoe, Huh;Won-Jin, Yi;Sam-Sun, Lee;Min-Suk, Heo
    • Imaging Science in Dentistry
    • /
    • v.52 no.4
    • /
    • pp.351-357
    • /
    • 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in the field of medical and dental research. CNNs can provide an effective diagnostic methodology allowing for the detection of early-stage diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance in identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: In the test group of 180 apical lesions, 147 lesions were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-scores were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
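
Lesion-level F1-scores of this kind are usually obtained by matching each predicted lesion to a ground-truth lesion at a minimum IoU and counting true positives, false positives, and false negatives. The following greedy-matching sketch uses hypothetical binary lesion masks, not the study's data or its exact matching rule.

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def detection_f1(gt_lesions, pred_lesions, iou_thresh=0.3):
    """Greedy one-to-one matching of predictions to ground truth at an IoU threshold."""
    matched, tp = set(), 0
    for pred in pred_lesions:
        for i, gt in enumerate(gt_lesions):
            if i not in matched and mask_iou(pred, gt) >= iou_thresh:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_lesions) - tp
    fn = len(gt_lesions) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical lesion masks on a small grid
gt, pred = [np.zeros((64, 64), bool)], [np.zeros((64, 64), bool)]
gt[0][10:30, 10:30] = True
pred[0][12:32, 12:32] = True
print(detection_f1(gt, pred, iou_thresh=0.3))
```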

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks

  • Zhai, Guanghao;Narazaki, Yasutaka;Wang, Shuo;Shajihan, Shaik Althaf V.;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.237-250
    • /
    • 2022
  • Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have conducted studies to develop computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data from various types of damaged structures is relatively difficult to obtain because damaged structures occur rarely. The lack of data is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of data for training purposes is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, this article proposes the use of synthetic data to augment real-world datasets used for training neural networks that identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% for intersection over union (IoU) and from 49% to 62% for precision, demonstrating the efficacy of the proposed approach.
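
Mixing synthetic and real images at training time can be as simple as concatenating the two datasets before building the data loader. The sketch below uses random tensors as hypothetical stand-ins for the (image, mask) pairs; the dataset sizes and shapes are assumptions, not the authors' data.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Hypothetical stand-ins for real and synthetic (image, mask) pairs
real_data      = TensorDataset(torch.randn(20, 3, 128, 128), torch.randint(0, 2, (20, 128, 128)))
synthetic_data = TensorDataset(torch.randn(80, 3, 128, 128), torch.randint(0, 2, (80, 128, 128)))

# Case (2) from the abstract: train on the union of synthetic and real-world samples
combined = ConcatDataset([real_data, synthetic_data])
loader = DataLoader(combined, batch_size=8, shuffle=True)

images, masks = next(iter(loader))
print(images.shape, masks.shape)  # torch.Size([8, 3, 128, 128]) torch.Size([8, 128, 128])
```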

Real-time semantic segmentation of gastric intestinal metaplasia using a deep learning approach

  • Vitchaya Siripoppohn;Rapat Pittayanon;Kasenee Tiankanon;Natee Faknak;Anapat Sanpavat;Naruemon Klaikaew;Peerapon Vateekul;Rungsun Rerknimitr
    • Clinical Endoscopy
    • /
    • v.55 no.3
    • /
    • pp.390-400
    • /
    • 2022
  • Background/Aims: Previous artificial intelligence (AI) models attempting to segment gastric intestinal metaplasia (GIM) areas have failed to be deployed in real-time endoscopy due to their slow inference speeds. Here, we propose a new GIM segmentation AI model with an inference speed faster than 25 frames per second that maintains a high level of accuracy. Methods: Investigators from Chulalongkorn University obtained 802 histologically proven GIM images for AI model training. Four strategies were proposed to improve model accuracy. First, transfer learning from public colon datasets was employed. Second, an image preprocessing technique, contrast-limited adaptive histogram equalization (CLAHE), was applied to produce clearer GIM areas. Third, data augmentation was applied for a more robust model. Lastly, the bilateral segmentation network model was applied to segment GIM areas in real time. The results were analyzed using different validity values. Results: In the internal test, our AI model achieved an inference speed of 31.53 frames per second. GIM detection showed a sensitivity of 93%, specificity of 80%, positive predictive value of 82%, negative predictive value of 92%, and accuracy of 87%, with a mean intersection over union of 57% for GIM segmentation. Conclusions: The bilateral segmentation network combined with transfer learning, contrast-limited adaptive histogram equalization, and data augmentation can provide high sensitivity and good accuracy for GIM detection and segmentation.
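
CLAHE, the preprocessing step mentioned above, is available in OpenCV. A minimal sketch applied to the lightness channel of an endoscopic frame follows; the file path and CLAHE parameters are assumptions, not the authors' settings.

```python
import cv2

# Hypothetical input; in practice this would be an endoscopic video frame
frame = cv2.imread("endoscopy_frame.png")

# Apply CLAHE to the lightness channel only, so colour hues are preserved
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))
result = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

cv2.imwrite("endoscopy_frame_clahe.png", result)
```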

An Artificial Intelligence Approach to Waterbody Detection of the Agricultural Reservoirs in South Korea Using Sentinel-1 SAR Images (Sentinel-1 SAR 영상과 AI 기법을 이용한 국내 중소규모 농업저수지의 수표면적 산출)

  • Choi, Soyeon;Youn, Youjeong;Kang, Jonggu;Park, Ganghyun;Kim, Geunah;Lee, Seulchan;Choi, Minha;Jeong, Hagyu;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.925-938
    • /
    • 2022
  • Agricultural reservoirs are an important water resource nationwide and are vulnerable to abnormal climate effects such as drought caused by climate change, so enhanced management is required for their appropriate operation. Although water-level tracking through continuous monitoring is necessary, on-site measurement and observation are challenging due to practical problems. This study presents an objective comparison of multiple AI models for water-body extraction using radar images, which have the advantages of wide coverage and frequent revisit times. The proposed methods use Sentinel-1 Synthetic Aperture Radar (SAR) images and, unlike common water-extraction methods based on optical images, are suitable for long-term monitoring because they are less affected by weather conditions. We built four AI models using drone images, Sentinel-1 SAR, and DSM data: Support Vector Machine (SVM), Random Forest (RF), Artificial Neural Network (ANN), and Automated Machine Learning (AutoML). A total of 22 reservoirs of less than 1 million tons, including small and medium-sized reservoirs with an effective storage capacity of less than 300,000 tons, were used for the study. Forty-five images from the 22 reservoirs were used for model training and verification, and the results show that the AutoML model was 0.01 to 0.03 better in water Intersection over Union (IoU) than the other three models, with an accuracy of 0.92 and an mIoU of 0.81 in the test. In conclusion, AutoML performed as well as the classical machine learning methods, and the AutoML-based water-body extraction technique is expected to be applicable to automatic reservoir monitoring.
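
Comparing classical per-pixel classifiers such as SVM, RF, and ANN on a common feature set can be done directly in scikit-learn. The sketch below uses synthetic SAR-like features as a stand-in; the feature definitions, toy labeling rule, and hyperparameters are assumptions, and the AutoML model is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import jaccard_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in features per pixel, e.g. VV/VH backscatter and DSM height
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = water, 0 = land (toy rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    iou = jaccard_score(y_test, model.predict(X_test))   # Jaccard index == IoU
    print(f"{name}: water IoU = {iou:.3f}")
```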