• Title/Summary/Keyword: Layer Segmentation


MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang;Xue Sun;Yongpeng Yang;Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1706-1725 / 2024
  • The unique U-shaped structure of U-Net allows it to achieve good performance in image segmentation. It is a lightweight network with a small number of parameters, well suited to small image segmentation datasets. However, when the medical image to be segmented contains a large amount of detailed information, its segmentation results cannot fully meet practical requirements. To achieve higher accuracy in medical image segmentation, this paper proposes a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+). We adopt GoogLeNet in the encoder of the proposed MEDU-Net+ to capture more information, and present multi-scale feature extraction to fuse semantic information of different scales in the encoder and decoder. We also introduce layer-by-layer skip connections that link the information of each layer, so that there is no need to encode down to the last layer and pass the information back. The proposed MEDU-Net+ divides the network of unknown depth into deconvolution layers for each part, replacing the direct connection between the encoder and decoder in U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ and other classic medical image segmentation networks on three medical image datasets. The experimental results show that the proposed MEDU-Net+ clearly outperforms the other medical image segmentation networks.
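
The combined loss named in the abstract (generalized Dice plus focal loss) can be illustrated with a short PyTorch sketch. The weighting factor lambda_focal and the implementation details below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def generalized_dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (N, C, H, W); each class is weighted by the
    # inverse of its squared volume, as in the generalized Dice formulation.
    dims = (0, 2, 3)
    w = 1.0 / (target_onehot.sum(dim=dims) ** 2 + eps)
    intersect = (w * (probs * target_onehot).sum(dim=dims)).sum()
    union = (w * (probs + target_onehot).sum(dim=dims)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

def focal_loss(logits, target, gamma=2.0):
    # Standard multi-class focal loss built on per-pixel cross-entropy.
    ce = F.cross_entropy(logits, target, reduction="none")
    pt = torch.exp(-ce)
    return ((1.0 - pt) ** gamma * ce).mean()

def combined_loss(logits, target, num_classes, lambda_focal=1.0):
    # Hypothetical combination; the paper's exact weighting may differ.
    # logits: (N, C, H, W), target: (N, H, W) integer class map.
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    return generalized_dice_loss(probs, onehot) + lambda_focal * focal_loss(logits, target)
```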

A Segmentation-Based HMM and MLP Hybrid Classifier for English Legal Word Recognition (분할기반 은닉 마르코프 모델과 다층 퍼셉트론 결합 영문수표필기단어 인식시스템)

  • 김계경;김진호;박희주
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.3 / pp.200-207 / 2001
  • In this paper, we propose an HMM (hidden Markov model)-MLP (multi-layer perceptron) hybrid model for recognizing legal words on English bank checks. We adopt an explicit segmentation-based, word-level architecture to implement an HMM engine with non-scaled and non-normalized symbol vectors. We also introduce an MLP for implicit segmentation-based word recognition. The final recognition model is a hybrid combination of the HMM and the MLP with a new hybrid probability measure. The main contributions of this model are a novel design of segmentation-based variable-length HMMs and an efficient method of combining two heterogeneous recognition engines. Experiments were conducted using the legal word database of CENPARMI with encouraging results.
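
The abstract does not spell out the new hybrid probability measure, so the sketch below only illustrates the general idea of fusing an HMM log-likelihood with an MLP posterior through a weighted log-linear score; hybrid_score, alpha, and the model interfaces are hypothetical.

```python
import math

def hybrid_score(hmm_loglik, mlp_posterior, alpha=0.5, eps=1e-12):
    """Log-linear combination of the two engines' scores.

    The paper defines its own hybrid probability measure; this weighted
    sum is only an illustrative stand-in (alpha is hypothetical)."""
    return alpha * hmm_loglik + (1.0 - alpha) * math.log(mlp_posterior + eps)

def recognize(word_image, hmm_models, mlp_classify, alpha=0.5):
    # hmm_models: dict word -> model exposing .loglik(image)  (hypothetical interface)
    # mlp_classify: callable returning dict word -> posterior probability
    posteriors = mlp_classify(word_image)
    scores = {w: hybrid_score(m.loglik(word_image), posteriors.get(w, 0.0), alpha)
              for w, m in hmm_models.items()}
    return max(scores, key=scores.get)
```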


Accuracy evaluation of liver and tumor auto-segmentation in CT images using 2D CoordConv DeepLab V3+ model in radiotherapy

  • An, Na young;Kang, Young-nam
    • Journal of Biomedical Engineering Research / v.43 no.5 / pp.341-352 / 2022
  • Medical image segmentation is one of the most important tasks in radiation therapy. The liver is among the most difficult organs to segment because it takes various shapes and lies close to other organs, so automatic segmentation of the liver in computed tomography (CT) images is a difficult task. Tumors also have low contrast against surrounding tissue, and their shape, location, size, and number vary from patient to patient, so accurate tumor segmentation takes a long time. In this study, we propose an algorithm for automatically segmenting the liver and tumors. The liver and tumors were automatically segmented from CT images using a 2D CoordConv DeepLab V3+ model, which uses a CoordConv layer to help delineate tumor boundaries. For tumors, only cropped liver images were used to improve accuracy. Additionally, augmentation, preprocessing, the loss function, and hyperparameters were tuned to find optimal values and increase segmentation accuracy. We compared the CoordConv DeepLab V3+ model with a DeepLab V3+ model without the CoordConv layer to determine whether the layer affects segmentation accuracy. The data comprised the 131 cases of the liver tumor segmentation (LiTS) challenge data set (100 training, 16 validation, and 15 test sets). The trained model was additionally tested on 15 clinical cases from Seoul St. Mary's Hospital. The evaluation was compared with results from previously reported two-dimensional deep learning-based models. Without the CoordConv layer, Dice values reached 0.965 ± 0.01 for liver segmentation and 0.925 ± 0.04 for tumor segmentation on the LiTS data set, and 0.927 ± 0.02 for liver and 0.903 ± 0.05 for tumor segmentation on the clinical data set. With the CoordConv layer, Dice values reached 0.989 ± 0.02 for liver and 0.937 ± 0.07 for tumor segmentation on the LiTS data set, and 0.944 ± 0.02 for liver and 0.916 ± 0.18 for tumor segmentation on the clinical data set. The use of the CoordConv layer therefore improves segmentation accuracy. The highest recently published values were 0.960 for liver and 0.749 for tumor segmentation, whereas the algorithm proposed in this study achieved 0.989 and 0.937, respectively. The proposed algorithm can play a useful role in treatment planning by improving contouring accuracy and reducing the time needed for liver and tumor segmentation. Accurate identification of liver anatomy in medical imaging applications such as surgical planning, as well as radiotherapy, can also support clinical evaluation of the risks and benefits of liver interventions.
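
For readers unfamiliar with the CoordConv layer used above, the following is a minimal PyTorch sketch that concatenates normalized coordinate channels before a standard convolution; the kernel size and padding defaults are illustrative, not the study's settings.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Convolution preceded by concatenating normalized (x, y) coordinate
    channels, following the general CoordConv idea referenced above."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, **kwargs):
        super().__init__()
        # Two extra input channels hold the coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size,
                              padding=padding, **kwargs)

    def forward(self, x):
        n, _, h, w = x.shape
        # Coordinate maps normalized to [-1, 1], broadcast over the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```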

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.1 / pp.16-25 / 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automatic precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image, and a thresholding technique such as the Otsu method is then applied to distinguish vegetation parts from the background. An obvious demerit of thresholding-based segmentation is that each pixel is classified as vegetation or background solely from its own color feature, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input to the neural network for each pixel is the two-dimensional block of gray-level values surrounding that pixel. To generate a gray-scale image from a raw RGB color image, the well-known Excess Green minus Excess Red index was used. Experimental results using 80 field images of four vegetation species demonstrate the superiority of the neural network over existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean.
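
A minimal NumPy sketch of the Excess Green minus Excess Red index and of the per-pixel neighborhood input described above is given below; the window radius k and the edge padding are assumptions for illustration.

```python
import numpy as np

def exgr_index(rgb):
    """Excess Green minus Excess Red (ExG - ExR) gray-scale image.

    rgb: float array in [0, 1] of shape (H, W, 3); chromatic coordinates
    are used, as is common for these indices."""
    chroma = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-8)
    r, g, b = chroma[..., 0], chroma[..., 1], chroma[..., 2]
    exg = 2.0 * g - r - b
    exr = 1.4 * r - g
    return exg - exr

def neighborhood_patches(gray, k=3):
    """Collect a (2k+1)x(2k+1) gray-level neighborhood for every pixel,
    the kind of per-pixel MLP input described above (k is an assumed size)."""
    h, w = gray.shape
    padded = np.pad(gray, k, mode="edge")
    patches = np.empty((h * w, (2 * k + 1) ** 2), dtype=gray.dtype)
    idx = 0
    for i in range(h):
        for j in range(w):
            patches[idx] = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].ravel()
            idx += 1
    return patches
```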

A Deep Learning-Based Image Semantic Segmentation Algorithm

  • Shen, Chaoqun;Sun, Zhongliang
    • Journal of Information Processing Systems / v.19 no.1 / pp.98-108 / 2023
  • This paper designs a segmentation method based on fully convolutional networks (FCN) and an attention mechanism. The first five layers of the Visual Geometry Group (VGG) 16 network serve as the coding part of the semantic segmentation network, with convolutional layers used in place of pooling to reduce the loss of feature information. The up-sampling and deconvolution units of the FCN are then used as the decoding part. In the deconvolution process, a skip structure fuses information from different levels, and the attention mechanism is incorporated to reduce accuracy loss. Finally, the segmentation results are obtained through pixel-level classification. The results show that our method outperforms the comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIoU).
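
The abstract names an attention mechanism in the skip/deconvolution path but not its exact design; the PyTorch sketch below shows one plausible attention-gated skip fusion, purely as an illustration.

```python
import torch
import torch.nn as nn

class AttentionSkipFusion(nn.Module):
    """Fuses an encoder skip feature with a decoder feature after weighting
    the skip by a learned attention map. The exact attention design is not
    given in the abstract, so this gate is only an illustrative choice."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, decoder_feat, skip_feat):
        # Attention map is computed from both streams, then applied to the skip.
        attn = self.gate(torch.cat([decoder_feat, skip_feat], dim=1))
        return self.fuse(torch.cat([decoder_feat, attn * skip_feat], dim=1))
```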

A Multi-Layer Graphical Model for Constrained Spectral Segmentation

  • Kim, Tae Hoon;Lee, Kyoung Mu;Lee, Sang Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.07a / pp.437-438 / 2011
  • Spectral segmentation is a major trend in image segmentation. In particular, constrained spectral segmentation, which is guided by user-given inputs, remains a challenging task. Since it makes use of the spectrum of the affinity matrix of a given image, its overall quality depends mainly on how the graphical model is designed. In this work, we propose a sparse, multi-layer graphical model in which both the pixels and the over-segmented regions are graph nodes. The graph affinities are computed using the must-link and cannot-link constraints as well as the likelihood that each node has a specific label. They are then used to simultaneously cluster all pixels and regions into visually coherent groups across all layers in a single multi-layer Normalized Cuts framework. Although we incorporate only adjacent connections in the multi-layer graph, the foreground object can be efficiently extracted in the spectral framework. The experimental results demonstrate the relevance of our algorithm compared to existing popular algorithms.
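
As background for the spectral framework above, here is a minimal NumPy sketch of a constrained Normalized Cuts embedding; folding must-link and cannot-link pairs directly into the affinity matrix with an arbitrary boost value is a simplification, not the authors' multi-layer construction.

```python
import numpy as np

def constrained_ncut_embedding(W, must_link=(), cannot_link=(), boost=10.0, k=2):
    """Spectral embedding for Normalized Cuts on an affinity matrix W (n x n).

    must_link / cannot_link are (i, j) index pairs; here they simply raise or
    zero the corresponding affinities (the boost value is an assumption)."""
    W = W.copy()
    for i, j in must_link:
        W[i, j] = W[j, i] = boost
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    # The eigenvectors of the normalized Laplacian with the smallest eigenvalues
    # give the relaxed Ncut solution; cluster the rows (e.g. with k-means).
    _, vecs = np.linalg.eigh(L_sym)
    return vecs[:, :k]
```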


Layer Segmentation of Retinal OCT Images using Deep Convolutional Encoder-Decoder Network (딥 컨볼루셔널 인코더-디코더 네트워크를 이용한 망막 OCT 영상의 층 분할)

  • Kwon, Oh-Heum;Song, Min-Gyu;Song, Ha-Joo;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.22 no.11 / pp.1269-1279 / 2019
  • In medical image analysis, segmentation is considered a vital process since it partitions an image into coherent parts and extracts objects of interest. In this paper, we consider automatic segmentation of retinal OCT images to find six layer boundaries using convolutional neural networks. Segmenting retinal images by layer boundaries is very important in diagnosing and predicting the progress of eye diseases, including diabetic retinopathy, glaucoma, and age-related macular degeneration (AMD). We applied well-known CNN architectures for general image segmentation, namely SegNet, U-Net, and CNN-S, to this problem. We also propose a shortest path-based algorithm for finding the layer boundaries from the outputs of SegNet and U-Net. We analyzed their performance on a public OCT image data set. The experimental results show that SegNet combined with the proposed shortest path-based boundary finding algorithm outperforms the other two networks.
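
The shortest path-based boundary finding step can be illustrated with a generic dynamic-programming sketch over a per-column cost map (for example, one minus the network's boundary probability); the max_step smoothness constraint is an assumption, and this is not the authors' code.

```python
import numpy as np

def shortest_path_boundary(cost, max_step=1):
    """Left-to-right minimum-cost path through a 2D cost map (rows = depth,
    columns = A-scans), one row per column, with a vertical jump of at most
    max_step between neighboring columns."""
    h, w = cost.shape
    acc = np.full((h, w), np.inf)
    back = np.zeros((h, w), dtype=int)
    acc[:, 0] = cost[:, 0]
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(0, i - max_step), min(h, i + max_step + 1)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = np.zeros(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(w - 1, 0, -1):
        path[j - 1] = back[path[j], j]
    return path  # row index of the boundary in each column
```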

Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng;Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.60-79 / 2022
  • U-Net architecture-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, as the network deepens, the resolution gradually decreases and the loss of spatial information increases. The fusion of adjacent layers is not enough to make up for the lost spatial information, resulting in errors at the segmentation boundary that lower segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the upper coding unit, which integrates spatial and semantic information to ensure the segmentation quality. The robustness and generalization of our model are further promoted by combining the atrous spatial pyramid pooling (ASPP) module and a channel attention module (CAM). Extensive experiments on the common ISIC2016 and ISIC2017 datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
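
The channel attention module (CAM) is not specified in detail in the abstract; the PyTorch sketch below shows a common squeeze-and-excitation style variant as a stand-in, with an assumed reduction ratio.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention; an illustrative
    variant of a CAM, not the paper's exact module (reduction is assumed)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                  # global channel descriptor
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                             # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(x)
```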

Parallel Synthesis Algorithm for Layer-based Computer-generated Holograms Using Sparse-field Localization

  • Park, Jongha;Hahn, Joonku;Kim, Hwi
    • Current Optics and Photonics / v.5 no.6 / pp.672-679 / 2021
  • We propose a high-speed layer-based algorithm for synthesizing computer-generated holograms (CGHs), featuring sparsity-based image segmentation and computational parallelism. The sparsity-based image segmentation of layer-based three-dimensional scenes leads to considerable improvement in the efficiency of CGH computation. The efficiency enhancement of the proposed algorithm is ascribed to the field localization of the fast Fourier transform (FFT), and the consequent reduction of FFT computational complexity.
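
The core per-layer propagation of layer-based CGH synthesis can be sketched with a plain angular-spectrum FFT, shown below without the paper's sparse-field localization (which restricts the FFT to occupied sub-regions); the wavelength and pixel pitch in the usage comment are placeholders.

```python
import numpy as np

def propagate_layer(field, wavelength, pitch, distance):
    """Angular-spectrum propagation of one depth layer to the hologram plane
    using FFTs, the basic operation of layer-based CGH synthesis."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
    H = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A hologram is then the sum of all propagated layers, e.g.:
# hologram = sum(propagate_layer(layer, 633e-9, 8e-6, z) for layer, z in scene)
```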

Comparison of the Effect of Interpolation on the Mask R-CNN Model

  • Ahn, Young-Pill;Kim, Kwang Baek;Park, Hyun-Jun
    • Journal of information and communication convergence engineering / v.21 no.1 / pp.17-23 / 2023
  • Recently, several high-performance instance segmentation models have used the Mask R-CNN model as a baseline; it reached a historical peak in instance segmentation in 2017. There are numerous models derived from Mask R-CNN, so if the performance of Mask R-CNN is improved, the performance of the derived models is also expected to improve. Mask R-CNN uses interpolation to adjust the image size, and the input differs depending on the interpolation method. Therefore, in this study, the performance of Mask R-CNN was compared when various interpolation methods were applied to the transform layer. To train and evaluate the models, this study utilized the PennFudan and Balloon datasets, and the AP metric was used to evaluate model performance. As a result of the experiments, Mask R-CNN showed the best performance when bicubic interpolation was used in the transform layer.
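
The interpolation choice compared in this study can be reproduced in spirit with torch.nn.functional.interpolate, as in the sketch below; the resize_image helper and the target size are illustrative and not the Mask R-CNN transform implementation itself.

```python
import torch
import torch.nn.functional as F

def resize_image(image, target_size, mode="bicubic"):
    """Resizes an image tensor (C, H, W) with a selectable interpolation
    mode, the kind of choice compared above for the transform step.
    Supported modes here: 'nearest', 'bilinear', 'bicubic'."""
    batched = image.unsqueeze(0)  # F.interpolate expects (N, C, H, W)
    kwargs = {} if mode == "nearest" else {"align_corners": False}
    out = F.interpolate(batched, size=target_size, mode=mode, **kwargs)
    return out.squeeze(0)

# Example: compare modes on the same input before feeding the detector.
# for m in ("nearest", "bilinear", "bicubic"):
#     resized = resize_image(img, (800, 800), mode=m)
```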