• Title/Summary/Keyword: atrous spatial pyramid pooling

Search Results: 6

Skin Lesion Segmentation with Codec Structure Based Upper and Lower Layer Feature Fusion Mechanism

  • Yang, Cheng;Lu, GuanMing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.1 / pp.60-79 / 2022
  • U-Net architecture-based segmentation models have attained remarkable performance in numerous medical image segmentation tasks such as skin lesion segmentation. Nevertheless, resolution gradually decreases and the loss of spatial information grows as the network deepens. Fusing adjacent layers is not enough to make up for the lost spatial information, which leads to errors at the segmentation boundary and lowers segmentation accuracy. To tackle this issue, we propose a new deep learning-based segmentation model. In the decoding stage, the feature channels of each decoding unit are concatenated with all the feature channels of the corresponding upper coding unit; this integrates spatial and semantic information to preserve segmentation quality, and the robustness and generalization of the model are further promoted by combining the atrous spatial pyramid pooling (ASPP) module and a channel attention module (CAM). Extensive experiments on the public ISIC2016 and ISIC2017 datasets show that our model performs well and outperforms the compared segmentation models for skin lesion segmentation.
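
As a point of reference for the two modules named in the abstract, the sketch below shows a generic ASPP block and a squeeze-and-excitation-style channel attention module. It assumes PyTorch, uses illustrative dilation rates and channel sizes, and is not the authors' implementation.

```python
# Minimal sketch of an ASPP block and a squeeze-and-excitation-style channel
# attention module. PyTorch is assumed; channel sizes and dilation rates are
# illustrative and not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Parallel atrous convolutions at several rates plus image-level pooling."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch,
                          kernel_size=1 if r == 1 else 3,
                          padding=0 if r == 1 else r,
                          dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True))
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

class ChannelAttention(nn.Module):
    """Re-weights feature channels by a learned, globally pooled gate."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)
```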

ASPPMVSNet: A high-receptive-field multiview stereo network for dense three-dimensional reconstruction

  • Saleh Saeed;Sungjun Lee;Yongju Cho;Unsang Park
    • ETRI Journal / v.44 no.6 / pp.1034-1046 / 2022
  • Learning-based multiview stereo (MVS) methods for three-dimensional (3D) reconstruction generally use 3D volumes for depth inference. The quality of the reconstructed depth maps, and of the corresponding point clouds, is directly influenced by the spatial resolution of the 3D volume. Consequently, these methods produce point clouds with sparse local regions because of the lack of memory required to encode a high volume of information. Here, we apply the atrous spatial pyramid pooling (ASPP) module in MVS methods to obtain dense feature maps with multiscale, long-range, contextual information using high receptive fields. For a given 3D volume with the same spatial resolution as that in existing MVS methods, the dense feature maps from the ASPP module, encoded with richer information, can produce dense point clouds without a high memory footprint. Furthermore, we propose a 3D loss for training the MVS networks, which improves the predicted depth values by 24.44%. The ASPP module provides state-of-the-art qualitative results by constructing relatively dense point clouds, improving on the DTU MVS dataset benchmarks by 2.25% compared with previous MVS methods.
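
To make the receptive-field argument concrete, the following back-of-the-envelope sketch (not from the paper) shows how a single atrous branch enlarges the context seen on top of a small stack of ordinary 3x3 convolutions; the layer counts and rates are illustrative.

```python
# Back-of-the-envelope receptive-field arithmetic (stride-1 convolutions and an
# illustrative backbone depth assumed; these layer settings are not from the paper).
def receptive_field(layers):
    """layers: iterable of (kernel_size, dilation) pairs, stride 1 throughout."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d          # each layer adds (k - 1) * d pixels of context
    return rf

backbone = [(3, 1)] * 4            # a small stack of ordinary 3x3 convolutions
for rate in (1, 6, 12, 18):        # one atrous branch per dilation rate
    print(rate, receptive_field(backbone + [(3, rate)]))
# rate 1 -> 11, rate 6 -> 21, rate 12 -> 33, rate 18 -> 45
```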

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.18 no.11 / pp.447-454 / 2018
  • This paper proposes a depth image generation algorithm for stereo images using a deep learning model composed of convolutional neural networks (CNNs). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each parallax image, and a depth learning unit, which learns the disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each parallax image through the Xception module and the atrous spatial pyramid pooling (ASPP) module, which are composed of 2D CNN layers. Then, the feature maps are accumulated into a 3D volume according to the disparity, and the depth image is estimated after passing through the depth learning unit, which learns the depth estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
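
The shift-and-concatenate construction commonly used to accumulate per-disparity features into a 3D volume is sketched below; PyTorch is assumed, and the function name and shapes are illustrative rather than the authors' code.

```python
# Sketch of accumulating left/right 2D feature maps into a 3D feature volume by
# shifting the right features over candidate disparities. PyTorch is assumed and
# this follows the common shift-and-concatenate construction, not the authors' code.
import torch

def build_feature_volume(feat_left, feat_right, max_disp):
    """feat_left, feat_right: [B, C, H, W] feature maps -> [B, 2C, max_disp, H, W]."""
    b, c, h, w = feat_left.shape
    volume = feat_left.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d] = feat_left
            volume[:, c:, d] = feat_right
        else:
            # pair each left pixel with the right pixel d columns to its left
            volume[:, :c, d, :, d:] = feat_left[..., d:]
            volume[:, c:, d, :, d:] = feat_right[..., :-d]
    return volume  # fed to a 3D CNN that regresses the depth map
```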

COVID-19 Lung CT Image Recognition (COVID-19 폐 CT 이미지 인식)

  • Su, Jingjie;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.3 / pp.529-536 / 2022
  • Over the past two years, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has affected more and more people. This paper proposes a novel U-Net convolutional neural network to classify and segment COVID-19 lung CT images, which contains a Sub Coding Block (SCB), Atrous Spatial Pyramid Pooling (ASPP), and an Attention Gate (AG). Three other models, FCN, U-Net, and U-Net-SCB, are designed for comparison with the proposed model, and the best optimizer and atrous rate are chosen for the proposed model. The simulation results show that the proposed U-Net-MMFE achieves the best Dice segmentation coefficient of 94.79% on the COVID-19 CT scan image dataset, compared with the other segmentation models, when the atrous rate is 12 and the optimizer is Adam.
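
The Dice coefficient reported above can be computed as in the following minimal sketch; PyTorch is assumed and the smoothing constant is a common convention rather than a value from the paper.

```python
# Minimal Dice-coefficient sketch for scoring a predicted mask against the ground
# truth. PyTorch is assumed; the smoothing constant is a common convention, not
# a value taken from the paper.
import torch

def dice_coefficient(pred, target, eps=1e-6):
    """pred, target: binary masks of the same shape (values in {0, 1})."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```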

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima;Yushuang Xu;Minmin Du;Meng Gao;Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.861-880 / 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architectures drive improvements in segmentation performance. Existing convolutional neural networks, however, suffer from oversimplified learned knowledge and high model complexity. To address this issue, we propose a road scene semantic segmentation algorithm based on multi-task collaborative learning. Firstly, a depthwise separable atrous spatial pyramid pooling module is proposed to reduce model complexity. Secondly, a collaborative learning framework involving saliency detection is proposed, and a joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road and natural scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared with methods of excellent performance, the proposed method has significant advantages in the segmentation of fine targets and boundaries.
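
The joint loss defined with homoscedastic uncertainty is commonly implemented as in the sketch below; PyTorch is assumed, and the formulation follows Kendall et al.'s multi-task weighting, which may differ in detail from the authors'.

```python
# Sketch of a homoscedastic-uncertainty-weighted joint loss for the segmentation
# and saliency tasks. PyTorch is assumed; this follows the widely used multi-task
# weighting of Kendall et al. and may differ from the authors' exact formulation.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_tasks=2):
        super().__init__()
        # learn log(sigma^2) per task for numerical stability
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            total = total + torch.exp(-log_var) * loss + log_var
        return total
```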

Boundary and Reverse Attention Module for Lung Nodule Segmentation in CT Images (CT 영상에서 폐 결절 분할을 위한 경계 및 역 어텐션 기법)

  • Hwang, Gyeongyeon;Ji, Yewon;Yoon, Hakyoung;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.265-272 / 2022
  • As the risk of lung cancer has increased, early-stage detection and treatment of cancers have received much attention. Among various medical imaging approaches, computed tomography (CT) has been widely utilized to examine the size and growth rate of lung nodules. However, manual examination is a time-consuming task and causes physical and mental fatigue for medical professionals. Recently, many computer-aided diagnostic methods have been proposed to reduce this workload. In recent studies, encoder-decoder architectures have shown reliable performance in medical image segmentation and have been adopted to predict lesion candidates. However, localizing nodules in lung CT images is a challenging problem because of the extremely small sizes and unstructured shapes of nodules. To solve these problems, we utilize atrous spatial pyramid pooling (ASPP) in a general U-Net baseline model to minimize the loss of information and extract rich representations from various receptive fields. Moreover, we propose a mixed attention mechanism combining reverse attention, boundary attention, and the convolutional block attention module (CBAM) to improve the segmentation accuracy for small nodules of various shapes. The performance of the proposed model is compared with several previous attention mechanisms on the LIDC-IDRI dataset, and the experimental results demonstrate that the reverse, boundary, and CBAM (RB-CBAM) combination is effective in the segmentation of small nodules.
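
The reverse-attention idea can be sketched in a few lines; PyTorch is assumed, and this is an illustrative fragment rather than the authors' RB-CBAM implementation.

```python
# Sketch of a reverse-attention step: features are re-weighted by (1 - sigmoid) of
# a coarse side-output prediction so later layers focus on regions the current
# prediction misses. PyTorch is assumed; illustrative, not the authors' RB-CBAM code.
import torch

def reverse_attention(features, coarse_logits):
    """features: [B, C, H, W]; coarse_logits: [B, 1, H, W] side-output prediction."""
    reverse_mask = 1.0 - torch.sigmoid(coarse_logits)  # high where the nodule is missed
    return features * reverse_mask                     # broadcast over channels
```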