• Title/Summary/Keyword: Dice Similarity Coefficient (DSC)
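
For reference, the Dice similarity coefficient reported throughout the entries below measures the overlap between a predicted segmentation A and a reference segmentation B:

```latex
\mathrm{DSC}(A, B) = \frac{2\,\lvert A \cap B \rvert}{\lvert A \rvert + \lvert B \rvert}
```

It ranges from 0 (no overlap) to 1 (perfect agreement) and is often reported as a percentage.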

Substitutability of Noise Reduction Algorithm based Conventional Thresholding Technique to U-Net Model for Pancreas Segmentation (이자 분할을 위한 노이즈 제거 알고리즘 기반 기존 임계값 기법 대비 U-Net 모델의 대체 가능성)

  • Sewon Lim;Youngjin Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.663-670
    • /
    • 2023
  • In this study, we aimed to perform a comparative evaluation, using quantitative factors, of region-growing based segmentation with noise reduction algorithms versus U-Net based segmentation. Initially, we applied a median filter, a median-modified Wiener filter, and a fast non-local means algorithm to computed tomography (CT) images, followed by region-growing based segmentation. Additionally, we trained a U-Net based segmentation model to perform the segmentation. Subsequently, to compare and evaluate the segmentation performance of the cases with noise reduction algorithms and the case with U-Net, we measured the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), universal quality image index (UQI), and Dice similarity coefficient (DSC). The results showed that segmentation with U-Net yielded the most improved performance: the RMSE, PSNR, UQI, and DSC were measured as 0.063, 72.11, 0.841, and 0.982, respectively, indicating improvements of 1.97, 1.09, 5.30, and 1.99 times compared to the noisy images. In conclusion, U-Net proved more effective than noise reduction algorithms at enhancing segmentation performance in CT images.
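
As a rough illustration of how such quantitative factors can be computed (not the authors' code; the function names and the global, single-window form of UQI are assumptions of this sketch), a segmentation mask and a processed image can be compared against references as follows:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def rmse(img, ref):
    """Root mean square error between two images of the same shape."""
    return float(np.sqrt(np.mean((img.astype(float) - ref.astype(float)) ** 2)))

def psnr(img, ref, data_range=255.0):
    """Peak signal-to-noise ratio in dB, relative to the given dynamic range."""
    err = rmse(img, ref)
    return float("inf") if err == 0 else 20.0 * np.log10(data_range / err)

def uqi(img, ref):
    """Universal quality image index, global (single-window) form."""
    x, y = img.astype(float).ravel(), ref.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((x.var() + y.var()) * (mx ** 2 + my ** 2))
```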

Automatic Segmentation of the Mandible using Shape-Constrained Information in Cranio-Maxillo-Facial CBCT Images (두개악안면 CBCT 영상에서 형상제약 정보를 사용한 하악골 자동 분할)

  • Kim, Joojin;Lee, Min Jin;Hong, Helen
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.5
    • /
    • pp.19-27
    • /
    • 2017
  • In this paper, we propose an automatic segmentation method for the mandible that uses shape-constrained information in cranio-maxillo-facial CBCT images. The proposed method consists of two steps. First, mandible segmentation based on global shape information is performed with a statistical shape model generated from MDCT images. Second, the mandible segmentation is refined by considering the local shape information and intensity characteristics of the mandible. To evaluate its performance, the proposed method was assessed qualitatively and quantitatively against manual segmentations by an expert. Experimental results show that the Dice Similarity Coefficient of the proposed method was 95.64% in the mandible body region, which includes narrow areas of high curvature, and 90.97% in the condyle region, which has large positional variance.

Corneal Ulcer Region Detection With Semantic Segmentation Using Deep Learning

  • Im, Jinhyuk;Kim, Daewon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.9
    • /
    • pp.1-12
    • /
    • 2022
  • Traditional methods of measuring corneal ulcers rely on the subjective judgment of medical staff viewing photographs taken with special equipment, making it difficult to provide an objective basis for diagnosis. In this paper, we propose a method to detect the ulcer area on a pixel basis in corneal ulcer images using a semantic segmentation model. To this end, we performed experiments detecting the ulcer area with the DeepLab model, which shows the highest performance among semantic segmentation models. For the experiments, training and test data were selected, and the DeepLab model was evaluated with Xception and ResNet as backbone networks and their performances were compared. We used the Dice similarity coefficient and IoU as evaluation indicators. Experimental results show that when 'crop & resized' images are added to the dataset, the DeepLab model with ResNet101 as the backbone segments the ulcer area with an average Dice similarity coefficient of about 93%. This study shows that semantic segmentation models used for object detection can also produce significant results when segmenting objects with irregular shapes such as corneal ulcers. In future studies, we will extend the dataset and experiment with adaptive learning methods so that the approach can be applied in real medical diagnosis environments.

Comparative Analysis of Segmentation Methods in Psoriasis Area (건선 영역 분할기법 비교분석)

  • Yoo, Hyun-Jong;Lee, Ji-Won;Moon, Cho-I;Kim, Eun-Bin;Baek, Yoo-Sang;Jang, Sang-Hoon;Lee, OnSeok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.657-659
    • /
    • 2019
  • This paper aims to identify the segmentation technique that most effectively segments only psoriasis lesions in skin images. After segmenting the psoriasis regions using interactive graph cuts (IGC) and the level set method (LSM), we use the Jaccard Index (JI) and the Dice Similarity Coefficient (DSC) to propose the segmentation method that is most effective for psoriasis regions.
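
Since this entry compares the Jaccard Index with the DSC, it is worth noting that the two overlap measures are monotonically related and therefore rank segmentations identically:

```latex
\mathrm{JI} = \frac{\lvert A \cap B \rvert}{\lvert A \cup B \rvert}
            = \frac{\mathrm{DSC}}{2 - \mathrm{DSC}},
\qquad
\mathrm{DSC} = \frac{2\,\mathrm{JI}}{1 + \mathrm{JI}}
```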

Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning (딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구)

  • Lim, SangHeon;Kim, YoungJae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.3
    • /
    • pp.468-475
    • /
    • 2020
  • In this paper, we propose a deep learning-based system for three-dimensional visualization of medical images in augmented reality. In the proposed system, an artificial neural network model performs fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After applying a three-dimensional volume rendering method to the segmented images, the result is visualized on augmented reality devices. In the experiment, when nodules were present in the lung region, they could be easily distinguished with the naked eye, and the location and shape of the lesions were confirmed intuitively. The evaluation was performed by comparing the automated segmentation results on the test dataset to the manually segmented images. For the lung region, the segmentation model achieved a DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10%; for the pulmonary nodule region, it achieved a DSC of 91.88%, precision of 93.05%, and recall of 90.94%. If the proposed system is applied in medical fields such as clinical practice and medical education, it is expected to contribute to patient-specific organ modeling, lesion analysis, and surgical education and training.
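
The paper does not publish its rendering pipeline; as a minimal sketch of the general idea, assuming scikit-image is available, a binary 3D mask can be converted into a surface mesh for visualization with marching cubes (the voxel spacing values and the toy sphere below are placeholders):

```python
import numpy as np
from skimage import measure  # assumes scikit-image is installed

def mask_to_mesh(mask, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary 3D segmentation mask into a triangular surface mesh."""
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces

# Toy example: a solid sphere standing in for a segmented nodule.
z, y, x = np.ogrid[:64, :64, :64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

verts, faces = mask_to_mesh(mask, spacing=(1.0, 0.7, 0.7))
print(verts.shape, faces.shape)  # mesh to hand off to a renderer or AR device
```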

A Comparative Performance Analysis of Segmentation Models for Lumbar Key-points Extraction (요추 특징점 추출을 위한 영역 분할 모델의 성능 비교 분석)

  • Seunghee Yoo;Minho Choi;Jun-Su Jang
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.5
    • /
    • pp.354-361
    • /
    • 2023
  • Most spinal diseases are diagnosed based on the subjective judgment of a specialist, so numerous studies have been conducted to add objectivity by automating the diagnosis process with deep learning. In this paper, we propose a method that combines segmentation and feature extraction, two techniques frequently used for diagnosing spinal diseases. Four models, U-Net, U-Net++, DeepLabv3+, and M-Net, were trained and compared using 1000 X-ray images, and key-points were derived using the Douglas-Peucker algorithm. For evaluation, the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, recall, and area under the precision-recall curve were used as metrics, and U-Net++ showed the best performance on all of them, with an average DSC of 0.9724. For the average Euclidean distance between estimated key-points and ground truth, U-Net was the best, followed by U-Net++; however, the difference in average distance was about 0.1 pixels, which is not significant. The results suggest that it is possible to extract key-points based on segmentation and that this can be used to diagnose various spinal diseases, including spondylolisthesis, accurately and with consistent criteria.
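
As a self-contained sketch of the Douglas-Peucker simplification step mentioned above (illustrative only; the tolerance value and the 2D point format are assumptions), the algorithm keeps the point farthest from the chord between the endpoints whenever it deviates by more than the tolerance, and recurses on the two halves:

```python
import numpy as np

def _perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end (2D)."""
    if np.allclose(start, end):
        return float(np.linalg.norm(pt - start))
    cross = (end[0] - start[0]) * (start[1] - pt[1]) - (end[1] - start[1]) * (start[0] - pt[0])
    return abs(cross) / float(np.linalg.norm(end - start))

def douglas_peucker(points, epsilon):
    """Simplify a 2D polyline, keeping points that deviate more than epsilon."""
    points = np.asarray(points, dtype=float)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return np.vstack([left[:-1], right])  # avoid duplicating the split point
    return np.array([points[0], points[-1]])

# Example: simplify a noisy contour down to candidate key-points.
contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(contour, epsilon=1.0))
```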

Automated Segmentation of Left Ventricular Myocardium on Cardiac Computed Tomography Using Deep Learning

  • Hyun Jung Koo;June-Goo Lee;Ji Yeon Ko;Gaeun Lee;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.660-669
    • /
    • 2020
  • Objective: To evaluate the accuracy of deep learning-based automated segmentation of the left ventricular (LV) myocardium using cardiac CT. Materials and Methods: To develop a fully automated algorithm, 100 subjects with coronary artery disease were randomly selected as a development set (50 training / 20 validation / 30 internal test). An experienced cardiac radiologist generated the manual segmentations for the development set. The trained model was evaluated on a 1000-case validation set generated by an experienced technician. Visual assessment was performed to compare the manual and automatic segmentations. In the quantitative analysis, sensitivity and specificity were calculated from the number of pixels where the two three-dimensional masks of the manual and deep learning segmentations overlapped. Similarity indices, such as the Dice similarity coefficient (DSC), were used to evaluate the margin of each segmented mask. Results: The sensitivity and specificity of the automated segmentation were high for each segment (segments 1-16; 85.5-100.0%). The DSC was 88.3 ± 6.2%. Among 100 randomly selected cases, all manual segmentation and deep learning masks assessed visually were classified as very accurate to mostly accurate, with no inaccurate cases (manual vs. deep learning: very accurate, 31 vs. 53; accurate, 64 vs. 39; mostly accurate, 15 vs. 8). The number of very accurate cases was greater for the deep learning masks than for the manually segmented masks. Conclusion: We present deep learning-based automatic segmentation of the LV myocardium, with results comparable to manual segmentation and high sensitivity, specificity, and similarity scores.
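
A minimal sketch of the kind of voxel-wise overlap statistics described above (not the study's actual evaluation code; the function and variable names are illustrative):

```python
import numpy as np

def overlap_statistics(pred, ref):
    """Sensitivity, specificity, and DSC from two binary 3D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    sensitivity = tp / (tp + fn)       # fraction of reference voxels recovered
    specificity = tn / (tn + fp)       # fraction of background correctly excluded
    dsc = 2 * tp / (2 * tp + fp + fn)  # equivalent to 2|A ∩ B| / (|A| + |B|)
    return sensitivity, specificity, dsc
```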

Inter-fractional Target Displacement in the Prostate Image-Guided Radiotherapy using Cone Beam Computed Tomography (전립선암 영상유도 방사선 치료시 골반내장기의 체적변화에 따른 표적장기의 변화)

  • Dong, Kap Sang;Back, Chang Wook;Jeong, Yun Jeong;Bae, Jae Beom;Choi, Young Eun;Sung, Ki Hoon
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.161-169
    • /
    • 2016
  • Purpose: To quantify the inter-fractional variation in prostate displacement and its dosimetric effects in prostate cancer treatment. Materials and Methods: A total of 176 daily cone-beam CT (CBCT) sets acquired for 6 prostate cancer patients treated with volumetric-modulated arc therapy (VMAT) were retrospectively reviewed. For each patient, the planning CT (pCT) was registered to each daily CBCT by aligning the bony anatomy. The prostate, rectum, and bladder were delineated on the daily CBCT, and the contours of these organs in the pCT were copied to the daily CBCT. The concordance of prostate displacement, deformation, and size variation between the pCT and daily CBCT was evaluated using the Dice similarity coefficient (DSC). Results: The mean volume of the prostate was 37.2 cm³ in the initial pCT, and the variation was around ±5% during the entire course of treatment for all patients. The mean DSC was 89.9%, ranging from 70% to 100%, for prostate displacement. Although the volume change of the bladder and rectum per treatment fraction did not show any correlation with the DSC (r=-0.084, p=0.268 and r=-0.162, p=0.032, respectively), a decrease in the DSC was observed with increasing volume change of the bladder and rectum (r=-0.230, p=0.049 and r=-0.240, p=0.020, respectively). Conclusion: Consistency of the bladder and rectum volumes cannot guarantee the accuracy of the treatment. Our results suggest that patient setup with registration between the pCT and daily CBCT should consider aligning soft tissue.

Enhanced Lung Cancer Segmentation with Deep Supervision and Hybrid Lesion Focal Loss in Chest CT Images (흉부 CT 영상에서 심층 감독 및 하이브리드 병변 초점 손실 함수를 활용한 폐암 분할 개선)

  • Min Jin Lee;Yoon-Seon Oh;Helen Hong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.1
    • /
    • pp.11-17
    • /
    • 2024
  • Lung cancer segmentation in chest CT images is challenging due to the varying sizes of tumors and the presence of surrounding structures with similar intensity values. To address these issues, we propose a lung cancer segmentation network that incorporates deep supervision and uses UNet3+ as the backbone. Additionally, we propose a hybrid lesion focal loss function comprising three components: pixel-based, region-based, and shape-based, which allows the network to focus on tumor regions that are small relative to the background and to use shape information to handle ambiguous boundaries. We validate the proposed method through comparative experiments with UNet and UNet3+ and demonstrate that it achieves superior performance in terms of Dice Similarity Coefficient (DSC) for tumors of all sizes.
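
The exact hybrid lesion focal loss is defined in the paper itself; as an illustrative sketch of how a pixel-based (focal) term and a region-based (Dice) term are commonly combined (the shape-based boundary term is omitted here, and the weights are assumptions), in PyTorch:

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Region-based term: 1 - soft Dice, averaged over the batch."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Pixel-based term: binary focal loss, down-weighting easy pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()

def hybrid_loss(logits, target, w_pixel=1.0, w_region=1.0):
    """Weighted sum of the pixel-based and region-based terms."""
    return w_pixel * focal_loss(logits, target) + w_region * soft_dice_loss(logits, target)
```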

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry
    • /
    • v.54 no.1
    • /
    • pp.81-91
    • /
    • 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
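
As a small sketch of how the seven training sets (every non-empty combination of the three device groups) can be enumerated before training one network per combination (the file names and group labels below are hypothetical placeholders):

```python
from itertools import combinations

# Hypothetical per-device image lists; in practice these would be loaded from disk.
groups = {
    "PAN_A": [f"pan_a_{i:03d}.png" for i in range(700)],
    "PAN_B": [f"pan_b_{i:03d}.png" for i in range(700)],
    "PAN_C": [f"pan_c_{i:03d}.png" for i in range(700)],
}

training_sets = []
for r in range(1, len(groups) + 1):
    for combo in combinations(sorted(groups), r):
        images = [path for name in combo for path in groups[name]]
        training_sets.append(("+".join(combo), images))

# 7 entries: A, B, C, A+B, A+C, B+C, A+B+C — one U-Net would be trained per entry.
for name, images in training_sets:
    print(name, len(images))
```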