• Title/Abstract/Keyword: U-net-based convolutional neural networks (CNNs)

Search results: 7

Automatic Volumetric Brain Tumor Segmentation using Convolutional Neural Networks

  • Yavorskyi, Vladyslav; Sull, Sanghoon
    • 한국정보통신학회:학술대회논문집 / 한국정보통신학회 2019 Spring Conference / pp.432-435 / 2019
  • Convolutional neural networks (CNNs) have recently been gaining popularity in medical image analysis because of their image segmentation capabilities. In this paper, we present a CNN that performs automated brain tumor segmentation of sparsely annotated 3D magnetic resonance imaging (MRI) scans. Our CNN is based on the 3D U-net architecture and incorporates dilated and depth-wise convolutions. It is fully trained on the BraTS 2018 dataset, and it produces more accurate results than the winners of the BraTS 2017 competition despite having a significantly smaller number of parameters.

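The parameter savings claimed above follow from simple arithmetic. A minimal sketch (generic formulas, not the paper's code): a dilated kernel widens the receptive field for free, and a depth-wise separable 3D convolution replaces one dense weight tensor with a per-channel spatial filter plus a 1x1x1 point-wise mix.

```python
# Sketch: why dilated and depth-wise convolutions shrink a 3D U-Net.
# Generic counting formulas, not taken from the paper.

def dilated_kernel_extent(k, d):
    """Effective spatial extent of a size-k kernel with dilation d."""
    return k + (k - 1) * (d - 1)

def standard_conv3d_params(c_in, c_out, k):
    """Weights of a dense 3D convolution (bias omitted)."""
    return k ** 3 * c_in * c_out

def depthwise_separable_conv3d_params(c_in, c_out, k):
    """Depth-wise 3D conv (one k^3 filter per channel) + 1x1x1 point-wise mix."""
    return k ** 3 * c_in + c_in * c_out

# A 3x3x3 kernel dilated by 2 covers a 5x5x5 neighborhood at no extra cost.
print(dilated_kernel_extent(3, 2))                   # -> 5
# For a 64 -> 64 channel layer, the separable variant is ~19x smaller:
print(standard_conv3d_params(64, 64, 3))             # -> 110592
print(depthwise_separable_conv3d_params(64, 64, 3))  # -> 5824
```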

Deep Learning-based Forest Fire Classification Evaluation for Application of CAS500-4

  • 차성은;원명수;장근창;김경민;김원국;백승일;임중빈
    • 대한원격탐사학회지 / Vol. 38, No. 6_1 / pp.1273-1283 / 2022
  • Medium- and large-scale forest fires have recently occurred frequently due to climate change, causing casualties and property damage every year. Burned-area monitoring using remote sensing can provide rapid information and objective results over large damaged areas. In this study, to classify burned areas, a U-net-based convolutional neural networks (CNNs) deep learning model was run on the Gangneung-Donghae forest fire of March 2022, using Sentinel-2 spectral bands, the normalized difference vegetation index (NDVI), and the normalized difference water index (NDWI). For the Gangneung-Donghae burned area, classification accuracy was high at 97.3% (f1=0.486, IoU=0.946), but since the possibility of overfitting could not be ruled out, the same model was applied to the Uljin-Samcheok region. There, the overlap with the burned area reported by the National Institute of Forest Science was 74.4%, confirming a high level of accuracy even when model uncertainty is taken into account. This study selectively used spectral bands similar to those of the CAS500-4 agriculture and forestry satellite, and suggests that quantitative burned-area classification using Sentinel-2 imagery is feasible.
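The two indices fed into the U-Net above are standard band ratios. A minimal per-pixel sketch, assuming the usual Sentinel-2 band roles (B8 = NIR, B4 = red, B3 = green); not the study's code:

```python
# Spectral indices used as model inputs, computed per pixel from reflectances.

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

# Healthy vegetation reflects strongly in NIR, so NDVI is high;
# freshly burned surfaces drop toward zero or below.
print(round(ndvi(0.45, 0.05), 3))  # -> 0.8
print(round(ndwi(0.10, 0.45), 3))  # -> -0.636
```

In practice the same arithmetic is applied array-wise over whole scenes; the contrast between pre-fire and post-fire index values is what makes burned areas separable.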

Fully Automatic Segmentation of Acute Ischemic Lesions on Diffusion-Weighted Imaging Using Convolutional Neural Networks: Comparison with Conventional Algorithms

  • Ilsang Woo; Areum Lee; Seung Chai Jung; Hyunna Lee; Namkug Kim; Se Jin Cho; Donghyun Kim; Jungbin Lee; Leonard Sunwoo; Dong-Wha Kang
    • Korean Journal of Radiology / Vol. 20, No. 8 / pp.1275-1284 / 2019
  • Objective: To develop algorithms using convolutional neural networks (CNNs) for automatic segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) and compare them with conventional algorithms, including thresholding-based segmentation. Materials and Methods: Between September 2005 and August 2015, 429 patients presenting with acute cerebral ischemia (training:validation:test set = 246:89:94) were retrospectively enrolled in this study, which was performed under Institutional Review Board approval. Ground-truth segmentations of acute ischemic lesions on DWI were manually drawn under the consensus of two expert radiologists. CNN algorithms were developed using a two-dimensional U-Net with squeeze-and-excitation blocks (U-Net) and a DenseNet with squeeze-and-excitation blocks (DenseNet) for automatic segmentation of acute ischemic lesions on DWI. The CNN algorithms were compared with conventional algorithms based on DWI and apparent diffusion coefficient (ADC) signal intensity. The performance of the algorithms was assessed using the Dice index with 5-fold cross-validation. The Dice indices were analyzed according to infarct volume (< 10 mL, ≥ 10 mL), number of infarcts (≤ 5, 6-10, ≥ 11), b-value of 1000 (b1000) signal intensity (< 50, 50-100, > 100), time interval to DWI, and DWI protocol. Results: The CNN algorithms were significantly superior to the conventional algorithms (p < 0.001). Dice indices for the CNN algorithms were 0.85 for U-Net and DenseNet and 0.86 for an ensemble of U-Net and DenseNet, while the indices were 0.58 for ADC-b1000 and b1000-ADC and 0.52 for the commercial ADC algorithm. The Dice indices for small and large lesions, respectively, were 0.81 and 0.88 with U-Net, 0.80 and 0.88 with DenseNet, and 0.82 and 0.89 with the ensemble. The CNN algorithms showed significant differences in Dice indices according to infarct volume (p < 0.001). Conclusion: The CNN algorithms for automatic segmentation of acute ischemic lesions on DWI achieved Dice indices of 0.85 or greater and outperformed the conventional algorithms.
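The Dice index reported throughout the study is a simple overlap ratio between the predicted and ground-truth masks. A minimal sketch over flattened binary masks (illustrative, not the authors' evaluation code):

```python
# Dice similarity between two binary segmentation masks.

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) over flattened 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # -> 2*2 / (3+3) = 0.666...
```

A Dice index of 0.85, as achieved by the CNNs above, means the overlap is large relative to the combined lesion areas; the 0.52-0.58 range of the conventional algorithms leaves roughly half the combined area mismatched.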

Multi-Scale Dilation Convolution Feature Fusion (MsDC-FF) Technique for CNN-Based Black Ice Detection

  • Sun-Kyoung KANG
    • 한국인공지능학회지 / Vol. 11, No. 3 / pp.17-22 / 2023
  • In this paper, we propose a black ice detection system using convolutional neural networks (CNNs). Black ice poses a serious threat to road safety, particularly in winter conditions. To address this problem, we introduce a CNN-based encoder-decoder architecture specifically designed for real-time black ice detection using thermal images. To train the network, we establish a specialized experimental platform to capture thermal images of various black ice formations on diverse road surfaces, including cement and asphalt. This enables us to curate a comprehensive dataset of thermal road black ice images for training and evaluation purposes. Additionally, to enhance the accuracy of black ice detection, we propose a multi-scale dilation convolution feature fusion (MsDC-FF) technique. The proposed technique dynamically adjusts the dilation ratios based on the input image's resolution, improving the network's ability to capture fine-grained details. Experimental results demonstrate the superior performance of the proposed model compared to conventional image segmentation models: our model achieved an mIoU of 95.93%, while LinkNet achieved 95.39%. We therefore conclude that the proposed model offers a promising solution for real-time black ice detection, enhancing road safety in winter conditions.
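The resolution-dependent dilation adjustment described above can be sketched as a simple scaling rule. The base rates, the reference width, and the proportional scaling are my assumptions for illustration; the paper does not publish this exact scheme:

```python
# Hypothetical sketch of resolution-adaptive dilation rates in the spirit of
# MsDC-FF: grow the multi-scale dilation ladder with input width so that the
# receptive fields cover a similar fraction of the image at any resolution.

def scaled_dilation_rates(width, base_rates=(1, 2, 4), ref_width=512):
    """Scale the dilation ladder proportionally to image width (minimum 1x)."""
    scale = max(1, round(width / ref_width))
    return tuple(r * scale for r in base_rates)

print(scaled_dilation_rates(512))   # -> (1, 2, 4)
print(scaled_dilation_rates(1024))  # -> (2, 4, 8)
```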

Deep learning-based apical lesion segmentation from panoramic radiographs

  • Il-Seok Song; Hak-Kyun Shin; Ju-Hee Kang; Jo-Eun Kim; Kyung-Hoe Huh; Won-Jin Yi; Sam-Sun Lee; Min-Suk Heo
    • Imaging Science in Dentistry / Vol. 52, No. 4 / pp.351-357 / 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in medical and dental research. CNNs can provide an effective diagnostic methodology, allowing the detection of early-stage diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance of identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: In the test group of 180 apical lesions, 147 lesions were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-score values, as a measure of performance, were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
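Lesion-level scoring at an IoU cut-off, as used above, counts a predicted lesion as a true positive when its best overlap with any ground-truth lesion clears the threshold; raising the threshold therefore lowers the F1-score. A minimal sketch with made-up numbers (not the study's evaluation code):

```python
# F1-score of lesion detection at a given IoU threshold.

def f1_at_iou(best_ious, n_truth, thr):
    """best_ious: best IoU per predicted lesion; n_truth: ground-truth count."""
    tp = sum(1 for iou in best_ious if iou >= thr)
    precision = tp / len(best_ious) if best_ious else 0.0
    recall = tp / n_truth if n_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 4 predicted lesions scored against 5 ground-truth lesions:
print(round(f1_at_iou([0.6, 0.45, 0.35, 0.2], n_truth=5, thr=0.3), 3))  # -> 0.667
print(round(f1_at_iou([0.6, 0.45, 0.35, 0.2], n_truth=5, thr=0.5), 3))  # -> 0.222
```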

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki; Kim, Beomjun; Woo, Sunghee; Gwak, Jeonghwan
    • 한국컴퓨터정보학회논문지 / Vol. 27, No. 3 / pp.33-43 / 2022
  • In this paper, we propose an ensemble model using multi-channel palm images with an Attention U-Net model and pre-trained convolutional neural networks (CNNs) to build a contactless palm-based user identification system with an ordinary smartphone camera sensor. The Attention U-Net model is used to extract regions of interest, including the palm (with fingers), the palm (without fingers), and the palm lines; these are combined to generate the multi-channel images fed into the ensemble classifier. The generated data are input to the proposed palm-information-based user identification system, and classes are predicted by a classifier that ensembles three pre-trained CNN models. We demonstrate that the proposed model achieves a classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, indicating that it is effective despite using an inexpensive image sensor. In the COVID-19 pandemic situation, the proposed model can serve as a safer and more reliable alternative to existing systems.
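One common way to ensemble three CNN classifiers, consistent with the description above though the paper does not specify its exact fusion rule, is soft voting: average the per-class probabilities of the models and take the arg-max. A minimal sketch with made-up probabilities:

```python
# Soft-voting ensemble: average class probabilities across models.

def soft_vote(prob_lists):
    """prob_lists: one probability vector per model; returns (class, averages)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Two of three models favor class 1, so the ensemble predicts class 1.
cls, avg = soft_vote([[0.6, 0.3, 0.1],
                      [0.2, 0.7, 0.1],
                      [0.3, 0.6, 0.1]])
print(cls)  # -> 1
```

Averaging probabilities rather than hard votes lets a confident model outweigh two weakly opposed ones, which is often why ensembles of pre-trained CNNs gain a fraction of a percent over any single member.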

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung; Sang-Heon Lim; Jiyong Han; Su Yang; Ju-Hee Kang; Jo-Eun Kim; Kyung-Hoe Huh; Won-Jin Yi; Min-Suk Heo; Sam-Sun Lee
    • Imaging Science in Dentistry / Vol. 54, No. 1 / pp.81-91 / 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
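The "7 independent networks" above follow from counting: with 3 device groups, every non-empty subset is one training combination (3 singles, 3 pairs, 1 triple). A short sketch of that enumeration:

```python
# Enumerate all non-empty training-set combinations of the 3 device groups.
from itertools import combinations

groups = ["PAN A", "PAN B", "PAN C"]
subsets = [c for r in range(1, len(groups) + 1)
             for c in combinations(groups, r)]

print(len(subsets))  # -> 7
print(subsets[-1])   # -> ('PAN A', 'PAN B', 'PAN C'), the all-groups network
```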