• Title/Summary/Keyword: Automated segmentation


Automated Facial Wrinkle Segmentation Scheme Using UNet++

  • Hyeonwoo Kim;Junsuk Lee;Jehyeok Rew;Eenjun Hwang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.8 / pp.2333-2345 / 2024
  • Facial wrinkles are widely used to evaluate skin condition or aging in various fields such as skin diagnosis, plastic surgery consultations, and cosmetic recommendations. In order to effectively process facial wrinkles in facial image analysis, accurate wrinkle segmentation is required to identify wrinkled regions. Existing deep learning-based methods have difficulty segmenting fine wrinkles due to insufficient wrinkle data and the imbalance between wrinkle and non-wrinkle data. Therefore, in this paper, we propose a new facial wrinkle segmentation method based on a UNet++ model. Specifically, we construct a new facial wrinkle dataset by manually annotating fine wrinkles across the entire face. We then extract only the skin region from the facial image using a facial landmark point extractor. Lastly, we train the UNet++ model using both dice loss and focal loss to alleviate the class imbalance problem. To validate the effectiveness of the proposed method, we conducted comprehensive experiments using our facial wrinkle dataset. The experimental results showed that the proposed method outperformed the latest wrinkle segmentation method by 9.77%p and 10.04%p in IoU and F1 score, respectively.
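
A minimal sketch of the Dice-plus-focal training loss mentioned in the abstract above, assuming a PyTorch binary segmentation setup where the network outputs raw logits; the function names and loss weights are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary segmentation (logits, target: N x 1 x H x W floats)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    """Binary focal loss; down-weights easy (mostly non-wrinkle) pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    prob = torch.sigmoid(logits)
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    alpha_t = alpha * target + (1.0 - alpha) * (1.0 - target)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()

def combined_loss(logits, target, w_dice=0.5, w_focal=0.5):
    # Weighted sum used to counter the wrinkle / non-wrinkle class imbalance;
    # the 0.5/0.5 weighting is a placeholder.
    return w_dice * dice_loss(logits, target) + w_focal * focal_loss(logits, target)
```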

Automated Detection and Segmentation of Bone Metastases on Spine MRI Using U-Net: A Multicenter Study

  • Dong Hyun Kim;Jiwoon Seo;Ji Hyun Lee;Eun-Tae Jeon;DongYoung Jeong;Hee Dong Chae;Eugene Lee;Ji Hee Kang;Yoon-Hee Choi;Hyo Jin Kim;Jee Won Chai
    • Korean Journal of Radiology / v.25 no.4 / pp.363-373 / 2024
  • Objective: To develop and evaluate a deep learning model for automated segmentation and detection of bone metastasis on spinal MRI. Materials and Methods: We included whole spine MRI scans of adult patients with bone metastasis: 662 MRI series from 302 patients (63.5 ± 11.5 years; male:female, 151:151) from three study centers obtained between January 2015 and August 2021 for training and internal testing (random split into 536 and 126 series, respectively) and 49 MRI series from 20 patients (65.9 ± 11.5 years; male:female, 11:9) from another center obtained between January 2018 and August 2020 for external testing. Three sagittal MRI sequences, including non-contrast T1-weighted image (T1), contrast-enhanced T1-weighted Dixon fat-only image (FO), and contrast-enhanced fat-suppressed T1-weighted image (CE), were used. Seven models were developed using 2D and 3D U-Nets with different sequence combinations (T1, FO, CE, T1 + FO, T1 + CE, FO + CE, and T1 + FO + CE). The segmentation performance was evaluated using Dice coefficient, pixel-wise recall, and pixel-wise precision. The detection performance was analyzed using per-lesion sensitivity and a free-response receiver operating characteristic curve. The performance of the model was compared with that of five radiologists using the external test set. Results: The 2D U-Net T1 + CE model exhibited superior segmentation performance in the external test compared to the other models, with a Dice coefficient of 0.699 and pixel-wise recall of 0.653. The T1 + CE model achieved per-lesion sensitivities of 0.828 (497/600) and 0.857 (150/175) for metastases in the internal and external tests, respectively. The radiologists demonstrated a mean per-lesion sensitivity of 0.746 and a mean per-lesion positive predictive value of 0.701 in the external test. Conclusion: The deep learning models proposed for automated segmentation and detection of bone metastases on spinal MRI demonstrated high diagnostic performance.
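
The per-lesion sensitivity reported above can be approximated by treating each connected component of the ground-truth mask as one lesion and counting it as detected when the predicted mask overlaps it. The sketch below follows that assumption; the overlap criterion is a placeholder, not the study's exact counting rule.

```python
import numpy as np
from scipy import ndimage

def per_lesion_sensitivity(gt_mask, pred_mask, min_overlap_voxels=1):
    """Fraction of ground-truth lesions (connected components) hit by the prediction."""
    labeled, n_lesions = ndimage.label(gt_mask > 0)
    if n_lesions == 0:
        return float("nan")
    detected = 0
    for lesion_id in range(1, n_lesions + 1):
        overlap = np.count_nonzero((labeled == lesion_id) & (pred_mask > 0))
        if overlap >= min_overlap_voxels:  # assumed detection rule
            detected += 1
    return detected / n_lesions
```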

Automated Ulna and Radius Segmentation model based on Deep Learning on DEXA (DEXA에서 딥러닝 기반의 척골 및 요골 자동 분할 모델)

  • Kim, Young Jae;Park, Sung Jin;Kim, Kyung Rae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1407-1416 / 2018
  • The purpose of this study was to train a Convolutional Neural Network-based model for ulna and radius segmentation and to verify the segmentation model. The data consisted of 840 training, 210 tuning, and 200 verification images. The segmentation model for the ulna and radius was based on U-Net (19 convolutional layers and 8 max-pooling layers) and trained with a batch size of 8, a learning rate of 0.0001, and 200 epochs. On the training data, the average sensitivity was 0.998, the specificity was 0.972, the accuracy was 0.979, and the Dice similarity coefficient was 0.968. On the validation data, the average sensitivity was 0.961, specificity was 0.978, accuracy was 0.972, and Dice similarity coefficient was 0.961. The deep convolutional neural network-based model showed good segmentation performance for the ulna and radius.
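
For reference, the pixel-wise metrics quoted above (sensitivity, specificity, accuracy, Dice) can be computed from binary masks as in this small NumPy sketch; it is a generic illustration rather than the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-9):
    """Pixel-wise sensitivity, specificity, accuracy, and Dice for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.count_nonzero(pred & gt)
    tn = np.count_nonzero(~pred & ~gt)
    fp = np.count_nonzero(pred & ~gt)
    fn = np.count_nonzero(~pred & gt)
    return {
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
    }
```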

Tongue Image Segmentation via Thresholding and Gray Projection

  • Liu, Weixia;Hu, Jinmei;Li, Zuoyong;Zhang, Zuchang;Ma, Zhongli;Zhang, Daoqiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.945-961 / 2019
  • Tongue diagnosis is one of the most important diagnostic methods in Traditional Chinese Medicine (TCM). Tongue image segmentation aims to extract the image object (i.e., the tongue body), which plays a key role in building an automated tongue diagnosis system. It remains challenging because of the personal diversity in tongue appearance, such as size, shape, and color. This paper proposes an innovative segmentation method that uses image thresholding, gray projection, and an active contour model (ACM). Specifically, an initial object region is first extracted by performing image thresholding in the HSI (Hue Saturation Intensity) color space, followed by morphological operations. Then, a gray projection technique is used to determine the upper bound of the tongue root for refining the initial object region. Finally, the contour of the refined object region is smoothed by the ACM. Experimental results on a dataset of 100 color tongue images showed that the proposed method obtained more accurate segmentation results than other available state-of-the-art methods.
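
A rough sketch of the first two stages described above: color-space thresholding with morphological clean-up, then a gray projection to locate the tongue-root boundary. HSV from scikit-image is used as a stand-in for the paper's HSI space, and the channel choices and thresholds are illustrative placeholders.

```python
import numpy as np
from skimage import color, morphology

def initial_tongue_mask(rgb, hue_lo=0.9, hue_hi=0.1, sat_min=0.2):
    """Rough tongue-body mask: keep reddish, sufficiently saturated pixels
    (assumed rule), then clean up with opening and small-object removal."""
    hsv = color.rgb2hsv(rgb)                    # channels in [0, 1]
    h, s = hsv[..., 0], hsv[..., 1]
    reddish = (h >= hue_lo) | (h <= hue_hi)     # hue wraps around red
    mask = reddish & (s >= sat_min)
    mask = morphology.binary_opening(mask, morphology.disk(5))
    return morphology.remove_small_objects(mask, min_size=2000)

def tongue_root_row(gray, mask):
    """Gray projection: mean intensity per row inside the mask; the row with
    the steepest intensity drop approximates the upper bound of the tongue root."""
    proj = np.array([gray[r][mask[r]].mean() if mask[r].any() else 0.0
                     for r in range(gray.shape[0])])
    return int(np.argmin(np.diff(proj)))
```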

Quality Inspection of Dented Capsule using Curve Fitting-based Image Segmentation

  • Kwon, Ki-Hyeon;Lee, Hyung-Bong
    • Journal of the Korea Society of Computer and Information / v.21 no.12 / pp.125-130 / 2016
  • Automatic quality inspection by computer vision can provide a solution for the pharmaceutical industry. Pharmaceutical capsules are easily affected by flaws such as dents, cracks, and holes. Solving this quality inspection problem requires computationally efficient image processing techniques such as thresholding, boundary edge detection, and segmentation; some automated systems are available, but they are very expensive to use. In this paper, we developed a dented-capsule image processing technique using edge-based image segmentation and TLS (Total Least Squares) curve fitting, and adopted a low-cost camera module for capsule image capture. To show the performance, we tested and evaluated the accuracy and the training and testing times of classification algorithms such as PCA (Principal Component Analysis), ICA (Independent Component Analysis), and SVM (Support Vector Machine). The results show that PCA and ICA have low accuracy, whereas SVM is accurate enough to classify dented capsules.
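
Total least squares fits a curve by minimizing orthogonal (not vertical) distances, which suits edge points sampled from a capsule boundary; a small line-fitting sketch follows, with the orthogonal residual used as a crude dent indicator. It illustrates the general TLS idea, not the authors' exact procedure.

```python
import numpy as np

def tls_line_fit(points):
    """Total least squares line through 2D edge points.
    Returns (centroid, unit direction); minimizes orthogonal distances."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # First right singular vector of the centered points = direction of
    # largest variance, i.e., the TLS line direction.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def orthogonal_residuals(points, centroid, direction):
    """Perpendicular distances of edge points from the fitted line;
    large residuals along a capsule boundary suggest a dent."""
    d = np.asarray(points, dtype=float) - centroid
    normal = np.array([-direction[1], direction[0]])
    return np.abs(d @ normal)
```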

Semi-automated Approach to Hippocampus Segmentation Using Snake from Brain MRI

  • Al Shidaifat, Ala'a Ddin;Al-Shdefat, Ramadan;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.17 no.5 / pp.566-572 / 2014
  • The hippocampus is known as one of the most important structures related to many neurological disorders, such as Alzheimer's disease. This paper presents a snake model to segment the hippocampus from brain MRI. The snake model, or active contour model, is widely used in medical image processing, especially in image segmentation, because it locks onto nearby edges and localizes them accurately. We applied the snake model to brain MRI and then compared our results with an active shape model approach. The results show that the hippocampus was successfully segmented by the snake model.
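
A minimal sketch of running a snake (active contour) on a single MRI slice with scikit-image's active_contour; the circular initialization around the hippocampus and the elasticity parameters are placeholders for whatever the authors actually used.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_segment(slice_2d, center_rc, radius, n_points=200):
    """Evolve a circular initial contour toward nearby edges on one MRI slice."""
    r0, c0 = center_rc
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([r0 + radius * np.sin(theta),
                            c0 + radius * np.cos(theta)])  # (row, col) pairs
    smoothed = gaussian(slice_2d, sigma=2, preserve_range=True)
    # alpha/beta/gamma control elasticity, rigidity, and step size; illustrative values.
    return active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
```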

Assembly performance evaluation method for prefabricated steel structures using deep learning and k-nearest neighbors

  • Hyuntae Bang;Byeongjun Yu;Haemin Jeon
    • Smart Structures and Systems / v.32 no.2 / pp.111-121 / 2023
  • This study proposes an automated assembly performance evaluation method for prefabricated steel structures (PSSs) using machine learning methods. Assembly component images were segmented using a modified version of the receptive field pyramid. By factorizing channel modulation and the receptive field exploration layers of the convolution pyramid, highly accurate segmentation results were obtained. After completing segmentation, the positions of the bolt holes were calculated using various image processing techniques, such as fuzzy-based edge detection, Hough line detection, and image perspective transformation. By calculating the distance ratio between bolt holes, the assembly performance of the PSS was estimated using the k-nearest neighbors (kNN) algorithm. The effectiveness of the proposed framework was validated using a 3D PSS printing model and a field test. The results indicated that this approach could recognize assembly components with an intersection over union (IoU) of 95% and evaluate assembly performance with an error of less than 5%.
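
A sketch of the final stage only: building scale-invariant features from distance ratios between detected bolt-hole centers and classifying assembly quality with k-nearest neighbors. The feature construction and labels are illustrative assumptions, and each assembly is assumed to expose the same number of bolt holes so the feature vectors have equal length.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def distance_ratio_features(hole_centers):
    """Pairwise bolt-hole distances normalized by the largest distance,
    giving a scale-invariant (ratio-based) feature vector."""
    pts = np.asarray(hole_centers, dtype=float)
    i, j = np.triu_indices(len(pts), k=1)
    dists = np.linalg.norm(pts[i] - pts[j], axis=1)
    return np.sort(dists / dists.max())

def fit_assembly_knn(center_sets, labels, k=3):
    """center_sets: list of bolt-hole center arrays, one per inspected assembly;
    labels: hypothetical quality classes (e.g., 0 = acceptable, 1 = misaligned)."""
    X = np.stack([distance_ratio_features(c) for c in center_sets])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)
```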

Automatic Segmentation of Femoral Cartilage in Knee MR Images using Multi-atlas-based Locally-weighted Voting (무릎 MR 영상에서 다중 아틀라스 기반 지역적 가중투표를 이용한 대퇴부 연골 자동 분할)

  • Kim, Hyeun A;Kim, Hyeonjin;Lee, Han Sang;Hong, Helen
    • Journal of KIISE / v.43 no.8 / pp.869-877 / 2016
  • In this paper, we propose an automated segmentation method for femoral cartilage in knee MR images using multi-atlas-based locally-weighted voting. The proposed method involves two steps. First, to exploit the shape information that the femoral cartilage is attached to the femur, the femur is segmented via volume- and object-based locally-weighted voting and narrow-band region growing. Second, the object-based affine transformation of the femur is applied to the registration of the femoral cartilage, and the femoral cartilage is segmented via multi-atlas shape-based locally-weighted voting. To evaluate the performance of the proposed method, we compared the segmentation results of the majority voting method, the intensity-based locally-weighted voting method, and the proposed method with manual segmentation results defined by an expert. In our experiments, the proposed method avoids leakage into neighboring regions with intensity similar to that of the femoral cartilage and shows improved segmentation accuracy.
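
The core of locally-weighted voting is that each registered atlas votes per voxel with a weight reflecting local intensity similarity to the target image. The sketch below uses an inverse patch-wise squared difference as that weight; it illustrates the general fusion scheme, not the paper's specific weighting.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locally_weighted_vote(target, atlas_images, atlas_labels, patch=5, eps=1e-6):
    """Fuse registered atlas labels with per-voxel weights based on local
    intensity similarity (inverse mean squared difference over a patch)."""
    votes = np.zeros(target.shape, dtype=float)
    weight_sum = np.zeros(target.shape, dtype=float)
    for img, lab in zip(atlas_images, atlas_labels):
        # Local squared difference between target and atlas, averaged over a patch window.
        local_ssd = uniform_filter((target - img) ** 2, size=patch)
        w = 1.0 / (local_ssd + eps)
        votes += w * (lab > 0)
        weight_sum += w
    # Voxel is labeled foreground if the weighted vote exceeds half of the total weight.
    return (votes / weight_sum) > 0.5
```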

ZoomISEG: Interactive Multi-Scale Fusion for Histopathology Whole Slide Image Segmentation (ZoomISEG: 조직 병리학 전체 슬라이드 영상 분할을 위한 대화형 다중스케일 융합)

  • Seonghui Min;Won-Ki Jeong
    • Journal of the Korea Computer Graphics Society / v.29 no.3 / pp.127-135 / 2023
  • Accurate segmentation of histopathology whole slide images (WSIs) is a crucial task for disease diagnosis and treatment planning. However, conventional automated segmentation algorithms may not always be applicable to WSI segmentation due to the large size of WSIs and variations in tissue appearance, staining, and imaging conditions. Recent advances in interactive segmentation, which combines human expertise with algorithms, have shown promise for improving efficiency and accuracy in WSI segmentation, but they also present challenging issues. In this paper, we propose a novel interactive segmentation method, ZoomISEG, that leverages multi-resolution WSIs. We demonstrate the efficacy and performance of the proposed method via comparison with conventional single-scale methods and an ablation study. The results confirm that the proposed method can reduce human interaction while achieving accuracy comparable to that of the brute-force approach using the highest-resolution data.
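
Interactive multi-scale WSI processing relies on reading regions from different pyramid levels on demand. The sketch below shows that access pattern with the openslide-python package (an assumption; the paper does not specify its I/O library), and the coarse-then-refine comment illustrates the multi-resolution idea rather than ZoomISEG's actual fusion logic.

```python
import numpy as np
import openslide

def read_region_at_level(slide_path, top_left, size, level):
    """Read an RGB region from a WSI pyramid level (level 0 = full resolution)."""
    slide = openslide.OpenSlide(slide_path)
    # OpenSlide expects the top-left corner in level-0 coordinates and the
    # region size in pixels at the requested level.
    region = slide.read_region(top_left, level, size).convert("RGB")
    slide.close()
    return np.asarray(region)

# Typical interactive pattern: segment coarsely on a low-resolution level,
# then re-read only user-indicated regions at a finer level for refinement.
```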

Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning (딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구)

  • Lim, SangHeon;Kim, YoungJae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.468-475 / 2020
  • In this paper, we propose a deep learning-based three-dimensional visualization system for medical images in augmented reality. In the proposed system, an artificial neural network model performs fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After applying a three-dimensional volume rendering method to the segmented images, the result is visualized on augmented reality devices. In the experiment, when nodules were present in the lung region, they could easily be distinguished with the naked eye, and the location and shape of the lesions were intuitively confirmed. The evaluation was performed by comparing the automated segmentation results on the test dataset with manually segmented images. Through this evaluation of the segmentation model, we obtained a lung-region DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10%, and a pulmonary nodule DSC of 91.88%, precision of 93.05%, and recall of 90.94%. If the proposed system is applied in medical fields such as clinical practice and medical education, it is expected to contribute to patient-specific organ modeling, lesion analysis, and surgical education and training.
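
One common way to turn a binary segmentation volume into a renderable 3D surface (e.g., for an AR viewer) is marching cubes; the scikit-image sketch below is a generic illustration, not the paper's rendering pipeline.

```python
import numpy as np
from skimage import measure

def mask_to_mesh(mask_volume, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh (vertices, faces, normals) from a binary
    segmentation volume with marching cubes; spacing is the voxel size in mm."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask_volume.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals
```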