• Title/Summary/Keyword: Deep learning segmentation

Automated Segmentation of Left Ventricular Myocardium on Cardiac Computed Tomography Using Deep Learning

  • Hyun Jung Koo;June-Goo Lee;Ji Yeon Ko;Gaeun Lee;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.660-669
    • /
    • 2020
  • Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricle (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentations for the development set. The trained model was evaluated on a validation set of 1000 cases generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In the quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the two three-dimensional masks of the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margins of the segmented masks. Results: The sensitivity and specificity of automated segmentation for each segment (segments 1-16) were high (85.5-100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual and deep learning segmentation masks assessed visually were classified as very accurate to mostly accurate, with no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for the deep learning masks than for the manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium; the results are comparable to manual segmentation, with high sensitivity, specificity, and similarity scores.
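
The quantitative comparison in this abstract rests on voxel-wise overlap between the manual and automatic masks. Below is a minimal sketch of how sensitivity, specificity, and the Dice similarity coefficient can be computed from two binary masks; it is illustrative only, as the paper's exact implementation is not given in the abstract.

```python
import numpy as np

def overlap_metrics(manual: np.ndarray, predicted: np.ndarray):
    """Sensitivity, specificity, and Dice similarity coefficient (DSC)
    from two binary 3-D segmentation masks of the same shape."""
    manual = manual.astype(bool)
    predicted = predicted.astype(bool)

    tp = np.logical_and(manual, predicted).sum()    # overlapping foreground voxels
    tn = np.logical_and(~manual, ~predicted).sum()  # overlapping background voxels
    fp = np.logical_and(~manual, predicted).sum()
    fn = np.logical_and(manual, ~predicted).sum()

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice
```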

A Comparative Study on Performance of Deep Learning Models for Vision-based Concrete Crack Detection according to Model Types (영상기반 콘크리트 균열 탐지 딥러닝 모델의 유형별 성능 비교)

  • Kim, Byunghyun;Kim, Geonsoon;Jin, Soomin;Cho, Soojin
    • Journal of the Korean Society of Safety
    • /
    • v.34 no.6
    • /
    • pp.50-57
    • /
    • 2019
  • In this study, various types of recently proposed deep learning models are classified according to their data input/output types and analyzed to find the deep learning model best suited to building a crack detection model. First, the deep learning models were classified into image classification, object segmentation, object detection, and instance segmentation models. ResNet-101, DeepLab V2, Faster R-CNN, and Mask R-CNN were selected as the representative deep learning models of each type. For the comparison, ResNet-101 was used in all four types of deep learning models as the backbone network serving as the main feature extractor. The four models were trained with 500 crack images taken from real concrete structures and collected from the Internet, and all showed high accuracy above 94% during training. A comparative evaluation was then conducted using 40 images taken from real concrete structures, with the performance of each model measured in terms of precision and recall. In the experimental results, Mask R-CNN, an instance segmentation deep learning model, showed the highest precision and recall for crack detection. Qualitative analysis also showed that Mask R-CNN detected crack shapes most similar to the real crack shapes.
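
As a rough illustration of how an instance segmentation model of the kind compared here is applied at inference time, the sketch below runs torchvision's off-the-shelf Mask R-CNN on a single image. The image path and score threshold are assumptions, and the stock model uses a ResNet-50 FPN backbone rather than the ResNet-101 used in the study, so this is not the authors' setup.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Mask R-CNN (ResNet-50 FPN backbone); a crack-detection model
# would additionally need fine-tuning on crack images.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("crack_sample.jpg").convert("RGB"))  # hypothetical image
with torch.no_grad():
    output = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

keep = output["scores"] > 0.5                 # keep confident detections
crack_masks = output["masks"][keep, 0] > 0.5  # (K, H, W) boolean masks of kept instances
```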

Implementation of Image Semantic Segmentation on Android Device using Deep Learning (딥-러닝을 활용한 안드로이드 플랫폼에서의 이미지 시맨틱 분할 구현)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.2
    • /
    • pp.88-91
    • /
    • 2020
  • Image segmentation is the task of partitioning an image into multiple sets of pixels based on some characteristics. The objective is to simplify the image into a representation that is more meaningful and easier to analyze. In this paper, we apply deep learning to pre-train the model and implement an algorithm that performs image segmentation in real time by extracting frames from the stream input on the Android device. Based on the open-source DeepLab-v3+ implementation in TensorFlow, some convolution filters are modified to improve real-time operation on the Android platform.
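
For context, per-frame inference with a converted segmentation model typically goes through TensorFlow Lite on mobile devices. The sketch below shows that loop in Python; the model file name and input handling are assumptions, and the study's modified DeepLab-v3+ graph is not reproduced.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) TFLite segmentation model once, then reuse it per frame.
interpreter = tf.lite.Interpreter(model_path="deeplabv3plus_mobile.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def segment_frame(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one camera frame (H, W, 3) through the segmentation model."""
    h, w = input_info["shape"][1:3]
    resized = tf.image.resize(frame_rgb[np.newaxis], (h, w)).numpy()
    interpreter.set_tensor(input_info["index"], resized.astype(input_info["dtype"]))
    interpreter.invoke()
    logits = interpreter.get_tensor(output_info["index"])  # (1, h, w, num_classes)
    return np.argmax(logits[0], axis=-1)                   # per-pixel class map
```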

Tumor Segmentation in Multimodal Brain MRI Using Deep Learning Approaches

  • Al Shehri, Waleed;Jannah, Najlaa
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.8
    • /
    • pp.343-351
    • /
    • 2022
  • A brain tumor forms when tissue becomes old or damaged but does not die when it should, preventing new tissue from forming. Manually finding such masses in the brain by analyzing MRI images is challenging and time-consuming for experts. In this study, our main objective is to detect the tumorous part of the brain, allowing rapid diagnosis so the primary disease can be treated promptly. With image processing techniques and deep learning prediction algorithms, our research builds a system capable of finding tumors in brain MRI images automatically and accurately. Our tumor segmentation adopts U-Net deep learning segmentation on the standard MICCAI BRATS 2018 dataset, which contains MRI images of different modalities. The proposed approach was evaluated and achieved Dice coefficients of 0.9795, 0.9855, 0.9793, and 0.9950 across several test datasets. These results show that the proposed system achieves excellent segmentation of tumors in MRIs using deep learning techniques such as the U-Net algorithm.
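
To make the architecture concrete, the block below is a compact U-Net-style encoder-decoder with skip connections, shown only to illustrate the model family applied to the multimodal BraTS images. The channel sizes, depth, and the assumption of four input modalities and two output classes are simplifications, not the paper's configuration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=2):  # 4 MRI modalities assumed
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 4, 128, 128))  # -> (1, 2, 128, 128)
```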

Deep Learning in Dental Radiographic Imaging

  • Hyuntae Kim
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.51 no.1
    • /
    • pp.1-10
    • /
    • 2024
  • Deep learning algorithms are becoming more prevalent in dental research because they are utilized in everyday activities. However, dental researchers and clinicians find it challenging to interpret deep learning studies. This review aimed to provide an overview of the general concept of deep learning and current deep learning research in dental radiographic image analysis. In addition, the process of implementing deep learning research is described. Deep-learning-based algorithmic models perform well in classification, object detection, and segmentation tasks, making it possible to automatically diagnose oral lesions and anatomical structures. The deep learning model can enhance the decision-making process for researchers and clinicians. This review may be useful to dental researchers who are currently evaluating and assessing deep learning studies in the field of dentistry.

Image Segmentation of Fuzzy Deep Learning using Fuzzy Logic (퍼지 논리를 이용한 퍼지 딥러닝 영상 분할)

  • Jongjin Park
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.5
    • /
    • pp.71-76
    • /
    • 2023
  • In this paper, we propose a fuzzy U-Net, a fuzzy deep learning model that applies fuzzy logic to improve image segmentation performance. Fuzzy modules based on fuzzy logic were combined with U-Net, a deep learning model that has shown excellent performance in image segmentation, and various types of fuzzy modules were simulated. The fuzzy module of the proposed model learns the intrinsic and complex rules between the feature maps of images and the corresponding segmentation results. The superiority of the proposed method was demonstrated by applying it to dental CBCT data. In the simulations, the ADD-RELU fuzzy module structure, which uses an addition skip connection in the proposed fuzzy U-Net, achieved the best performance, with a score of 0.7928 on the test dataset.
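
The abstract names an "ADD-RELU" fuzzy module built around an addition skip connection. The sketch below is only a speculative reading of that structure, assuming the module maps feature maps to fuzzy membership degrees and fuses them back additively before a ReLU; it is not the paper's implementation.

```python
import torch
import torch.nn as nn

class AddReluFuzzyModule(nn.Module):
    """Guess at an ADD-RELU fuzzy module: membership degrees are estimated from
    the feature maps and merged via an addition skip connection, then ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.membership = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),  # membership degrees in [0, 1]
        )

    def forward(self, x):
        return torch.relu(x + self.membership(x))  # addition skip + ReLU

out = AddReluFuzzyModule(64)(torch.randn(1, 64, 32, 32))
```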

Bio-Cell Image Segmentation based on Deep Learning using Denoising Autoencoder and Graph Cuts (디노이징 오토인코더와 그래프 컷을 이용한 딥러닝 기반 바이오-셀 영상 분할)

  • Lim, Seon-Ja;Vununu, Caleb;Kwon, Oh-Heum;Lee, Suk-Hwan;Kwon, Ki-Ryoug
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.10
    • /
    • pp.1326-1335
    • /
    • 2021
  • As part of a study of cell division, we propose a method for segmenting images generated by topography microscopes through deep learning-based feature generation and graph-cut segmentation. The hybrid vector shapes preserve the overall shape and boundary information of the cells, so most cell shapes can be captured without any post-processing burden. Results on NIH-3T3 and Hela-S3 cells show satisfactory preservation of the cell descriptions. Compared to other deep learning methods, the proposed cell image segmentation method does not require post-processing. It is also effective in preserving the overall morphology of cells and has shown better results in terms of cell boundary preservation.
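
The denoising autoencoder named in the title is the feature-generation part of this pipeline: it learns to reconstruct clean cell images from noise-corrupted inputs, and its features then feed the graph-cut step. A minimal sketch is shown below; the layer sizes, noise level, and training setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))

model = DenoisingAutoencoder()
clean = torch.rand(8, 1, 64, 64)                    # hypothetical cell patches
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.functional.mse_loss(model(noisy), clean)  # reconstruction objective
```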

Weakly-supervised Semantic Segmentation using Exclusive Multi-Classifier Deep Learning Model (독점 멀티 분류기의 심층 학습 모델을 사용한 약지도 시맨틱 분할)

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.227-233
    • /
    • 2019
  • Recently, along with the development of deep learning techniques, neural networks have achieved great success in the computer vision field. Convolutional neural networks have shown outstanding performance not only on simple image classification tasks but also on more difficult tasks such as object segmentation and detection. However, many such deep learning models are based on supervised learning, which requires annotation labels far more detailed than image-level labels. In particular, image semantic segmentation models require pixel-level annotations for training, which are very costly to obtain. To solve this problem, this paper proposes a weakly-supervised semantic segmentation method that requires only image-level labels to train the network. Existing weakly-supervised learning methods are limited to detecting only specific areas of an object. In this paper, by contrast, we use an exclusive multi-classifier deep learning architecture so that our model recognizes more diverse parts of objects. The proposed method is evaluated using the VOC 2012 validation dataset.
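
Weakly-supervised segmentation from image-level labels usually starts from class activation maps (CAMs) produced by a classifier, which serve as coarse pixel-level seeds. The sketch below shows that common building block with a stand-in feature extractor; the exclusive multi-classifier arrangement proposed in the paper is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CamClassifier(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(                 # stand-in feature extractor
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(128, num_classes, 1)  # per-class score maps

    def forward(self, x):
        cams = self.classifier(self.features(x))            # (B, C, H, W) activation maps
        logits = F.adaptive_avg_pool2d(cams, 1).flatten(1)  # image-level scores for training
        return logits, cams

model = CamClassifier()
logits, cams = model(torch.randn(2, 3, 224, 224))
seeds = cams.argmax(dim=1)  # coarse pseudo-labels obtained from image-level supervision
```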

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology
    • /
    • v.66 no.1
    • /
    • pp.167-177
    • /
    • 2024
  • Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of the bovine iris with minimal use of annotation labels, utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, selection of data augmentation, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. The framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy of 99.50% and a Dice coefficient of 98.35%. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
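
As an illustration of how a U-Net decoder is paired with a VGG16 encoder backbone, the combination reported as optimal above, the sketch below uses the segmentation_models_pytorch package. The library choice, pretrained weights, and input size are assumptions and do not reflect the authors' exact training code.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="vgg16",        # VGG16 encoder backbone
    encoder_weights="imagenet",  # pretrained initialization (assumed)
    in_channels=3,
    classes=1,                   # single-channel binary iris mask
)

images = torch.randn(4, 3, 256, 256)             # hypothetical eye-image batch
masks = (model(images).sigmoid() > 0.5).float()  # predicted iris masks
```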

A Study on Residual U-Net for Semantic Segmentation based on Deep Learning (딥러닝 기반의 Semantic Segmentation을 위한 Residual U-Net에 관한 연구)

  • Shin, Seokyong;Lee, SangHun;Han, HyunHo
    • Journal of Digital Convergence
    • /
    • v.19 no.6
    • /
    • pp.251-258
    • /
    • 2021
  • In this paper, we propose an encoder-decoder model that utilizes residual learning to improve the accuracy of the U-Net-based semantic segmentation method. U-Net is a deep learning-based semantic segmentation method and is mainly used in applications such as autonomous vehicles and medical image analysis. The conventional U-Net loses features during the compression process because of the shallow structure of its encoder. This feature loss causes a lack of the context information needed to classify objects and reduces segmentation accuracy. To improve this, the proposed method efficiently extracts context information through an encoder that uses residual learning, which is effective in preventing the feature loss and gradient vanishing problems of the conventional U-Net. Furthermore, we reduced the number of down-sampling operations in the encoder to reduce the loss of spatial information contained in the feature maps. The proposed method showed a segmentation improvement of about 12% over the conventional U-Net in experiments on the Cityscapes dataset.
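
The residual encoder block this abstract refers to can be sketched as follows: a convolutional path learns a residual that is added to a (projected) identity path, which counteracts feature loss and vanishing gradients. The channel sizes and use of batch normalization are illustrative assumptions, as the abstract does not give the exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channels
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.conv(x) + self.shortcut(x))  # residual addition

out = ResidualBlock(64, 128)(torch.randn(1, 64, 128, 128))  # -> (1, 128, 128, 128)
```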