• Title/Summary/Keyword: Atrous Convolution

7 search results

Fingertip Detection through Atrous Convolution and Grad-CAM (Atrous Convolution과 Grad-CAM을 통한 손 끝 탐지)

  • Noh, Dae-Cheol;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.11-20 / 2019
  • With the development of deep learning technology, research is being actively carried out on user-friendly interfaces suitable for virtual reality and augmented reality applications. To support an interface that uses the user's hands, this paper proposes a deep-learning-based fingertip detection method that enables tracking of fingertip coordinates to select virtual objects, or to write or draw in the air. The approximate region of the fingertip is first cropped from the input image using Grad-CAM, and a convolutional neural network with atrous convolution is then applied to the cropped image to detect the fingertip location. This method is simpler and easier to implement than existing object detection algorithms and requires no pre-processing to annotate objects. To verify the method, we implemented an air-writing application and showed that, with a recognition rate of 81% and a processing time of 76 ms, users could write smoothly in the air without delay, making real-time use of the application possible.
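
The paper's detector itself is not reproduced here; the sketch below only illustrates the core primitive it names, an atrous (dilated) convolution, which enlarges the receptive field without adding parameters or reducing resolution. The two-stage crop-then-locate flow, layer sizes, and patch size are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """3x3 convolution with dilation: the receptive field grows with `rate`
    while the parameter count and output resolution stay fixed."""
    def __init__(self, in_ch, out_ch, rate):
        super().__init__()
        # padding = rate keeps H x W unchanged for a 3x3 kernel
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=rate, dilation=rate)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x))

# Hypothetical fingertip-locator head: stacked atrous blocks over a
# Grad-CAM-cropped patch, regressing a single (x, y) coordinate.
locator = nn.Sequential(
    AtrousBlock(3, 32, rate=1),
    AtrousBlock(32, 64, rate=2),
    AtrousBlock(64, 64, rate=4),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),               # (x, y) fingertip coordinate
)

patch = torch.randn(1, 3, 96, 96)   # stand-in for a Grad-CAM-cropped region
print(locator(patch).shape)         # torch.Size([1, 2])
```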

Pixel-based crack image segmentation in steel structures using atrous separable convolution neural network

  • Ta, Quoc-Bao;Pham, Quang-Quang;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Structural Monitoring and Maintenance / v.9 no.3 / pp.289-303 / 2022
  • In this study, the impact of assigned pixel labels on the accuracy of crack image identification in steel structures is examined using an atrous separable convolution neural network (ASCNN). Firstly, images containing fatigue cracks collected from steel structures are classified into four datasets by assigning different pixel labels based on image features. Secondly, the DeepLab v3+ algorithm is used to determine the optimal parameters of the ASCNN model by maximizing the average mean-intersection-over-union (mIoU) metric over the datasets. Thirdly, the ASCNN model is trained for various image sizes and hyper-parameters, such as the learning rule, learning rate, and number of epochs. The optimal parameters of the ASCNN model are determined based on the average mIoU metric. Finally, the trained ASCNN model is evaluated on the 10% of images withheld from training. The results show that the ASCNN model can segment cracks and other objects in the captured images with an average mIoU of 0.716.
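
The building block named in the title has a standard published form: DeepLabv3+'s atrous separable convolution, a depthwise 3x3 convolution with a dilation rate followed by a pointwise 1x1 convolution. A minimal sketch, with channel widths and the rate chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    """Depthwise 3x3 conv with dilation + pointwise 1x1 conv, the
    'atrous separable convolution' used throughout DeepLabv3+."""
    def __init__(self, in_ch, out_ch, rate):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=rate, dilation=rate,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 64, 128, 128)
block = AtrousSeparableConv(64, 128, rate=6)
print(block(x).shape)   # torch.Size([1, 128, 128, 128])
```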

Semantic crack-image identification framework for steel structures using atrous convolution-based Deeplabv3+ Network

  • Ta, Quoc-Bao;Dang, Ngoc-Loi;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Smart Structures and Systems / v.30 no.1 / pp.17-34 / 2022
  • For steel structures, fatigue cracks are critical damage induced by long-term cyclic loading and distortion effects. Vision-based crack detection can ensure structural integrity and performance through continuous monitoring and non-destructive assessment. A critical issue is distinguishing cracks from other features in captured images, which may contain complex backgrounds such as handwriting and marks made to record crack patterns and lengths during periodic visual inspections. This study presents a parametric study on image-based crack identification for orthotropic steel bridge decks using captured images with complicated backgrounds. Firstly, a framework for vision-based crack segmentation using the atrous convolution-based Deeplabv3+ network (ACDN) is designed. Secondly, features in crack images are labeled to build three databanks that account for objects in the backgrounds. Thirdly, evaluation metrics computed from the trained ACDN models are used to evaluate the effects of background obstacles on crack detection. Finally, various training parameters, including image sizes, hyper-parameters, and the number of training images, are optimized for the ACDN crack-detection model. The results demonstrate that the trained ACDN models can identify fatigue cracks and that crack-detection accuracy improves when the training parameters are optimized. This makes the vision-based technique applicable to the early detection of tiny fatigue cracks in steel structures.
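
Both crack studies above select and report models by mean intersection-over-union. A minimal sketch of how per-class IoU and the mIoU average can be computed from integer label maps; the two-class toy example is illustrative, not data from either paper:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """mIoU from integer label maps: IoU_c = TP_c / (TP_c + FP_c + FN_c),
    averaged over classes present in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

# toy 2-class example: background (0) vs. crack (1)
pred   = np.array([[0, 1, 1], [0, 0, 1]])
target = np.array([[0, 1, 0], [0, 1, 1]])
print(mean_iou(pred, target, num_classes=2))   # 0.5
```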

Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning (딥러닝 기반 거리 영상의 Semantic Segmentation을 위한 Atrous Residual U-Net)

  • Shin, SeokYong;Lee, SangHun;Han, HyunHo
    • Journal of Convergence for Information Technology / v.11 no.10 / pp.45-52 / 2021
  • In this paper, we propose an Atrous Residual U-Net (AR-UNet) to improve the segmentation accuracy of U-Net-based semantic segmentation. The U-Net is mainly used in fields such as medical image analysis, autonomous vehicles, and remote sensing. The conventional U-Net extracts insufficient features because of the small number of convolution layers in its encoder. The extracted features are essential for classifying object categories, and when they are insufficient, segmentation accuracy drops. To address this problem, we propose the AR-UNet, which uses residual learning and ASPP in the encoder. Residual learning improves feature-extraction ability and is effective in preventing the feature loss and vanishing-gradient problems caused by stacked convolutions. In addition, ASPP enables additional feature extraction without reducing the resolution of the feature map. Experiments on the Cityscapes dataset verified the effectiveness of the AR-UNet: it produced better segmentation results than the conventional U-Net. AR-UNet can thus contribute to many applications where accuracy is important.
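
ASPP, as used in the AR-UNet encoder, has a canonical form from the DeepLab family: parallel 3x3 atrous convolutions at several rates plus image-level pooling, concatenated and fused by a 1x1 convolution. A minimal sketch; the rates and channel widths are conventional defaults, not the AR-UNet's settings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel atrous convs at several
    rates + global pooling, concatenated and fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates
        ])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1),
        )
        self.fuse = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        # multi-rate context at full feature-map resolution
        return self.fuse(torch.cat(feats + [pooled], dim=1))

print(ASPP(256, 64)(torch.randn(1, 256, 32, 32)).shape)  # [1, 64, 32, 32]
```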

COVID-19 Lung CT Image Recognition (COVID-19 폐 CT 이미지 인식)

  • Su, Jingjie;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.3 / pp.529-536 / 2022
  • In the past two years, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has affected more and more people. This paper proposes a novel U-Net convolutional neural network to classify and segment COVID-19 lung CT images, which contains a Sub Coding Block (SCB), Atrous Spatial Pyramid Pooling (ASPP), and an Attention Gate (AG). Three other models, FCN, U-Net, and U-Net-SCB, are designed for comparison with the proposed model, and the best optimizer and atrous rate are chosen for it. The simulation results show that, when the atrous rate is 12 and the optimizer is Adam, the proposed U-Net-MMFE achieves the best Dice segmentation coefficient, 94.79%, on the COVID-19 CT scan image dataset compared with the other segmentation models.
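
Of the three components named above, the attention gate has a standard published form (the Attention U-Net of Oktay et al.); the SCB is specific to this paper and not reproduced. A minimal sketch of an additive attention gate, simplified to assume the skip features and gating signal share a spatial size, with channel widths chosen for illustration:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Attention gate (Attention U-Net style): a gating signal g from the
    decoder re-weights encoder skip features x before concatenation."""
    def __init__(self, x_ch, g_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(x_ch, inter_ch, 1, bias=False)
        self.phi_g = nn.Conv2d(g_ch, inter_ch, 1, bias=False)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, x, g):
        # additive attention: sigmoid(psi(relu(theta(x) + phi(g))))
        attn = torch.sigmoid(self.psi(torch.relu(self.theta_x(x)
                                                 + self.phi_g(g))))
        return x * attn        # gated skip connection

x = torch.randn(1, 64, 64, 64)   # encoder skip features
g = torch.randn(1, 64, 64, 64)   # decoder gating signal (same size here)
print(AttentionGate(64, 64, 32)(x, g).shape)   # [1, 64, 64, 64]
```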

MLSE-Net: Multi-level Semantic Enriched Network for Medical Image Segmentation

  • Di Gai;Heng Luo;Jing He;Pengxiang Su;Zheng Huang;Song Zhang;Zhijun Tu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2458-2482 / 2023
  • Medical image segmentation techniques based on convolutional neural networks suffer from parameter redundancy and unsatisfactory target localization during feature extraction, which leads to less accurate segmentation results for assisting doctors in diagnosis. In this paper, we propose a multi-level semantic-rich encoder-decoder network consisting of a Pooling-Conv-Former (PCFormer) module and a Cbam-Dilated-Transformer (CDT) module. The PCFormer module tackles the parameter explosion of the conventional transformer and compensates for feature loss in the down-sampling process. In the CDT module, the Cbam attention module highlights feature regions by implicitly blending the intersection of attention mechanisms, and the Dilated convolution-Concat (DCC) module is designed as a parallel concatenation of multiple atrous convolution blocks to explicitly enlarge the receptive field. In addition, a MultiHead Attention-DwConv-Transformer (MDTransformer) module is utilized to clearly distinguish the target region from the background. Extensive experiments on medical image segmentation with the Glas, SIIM-ACR, ISIC, and LGG datasets demonstrate that the proposed network outperforms existing advanced methods in both objective evaluation and subjective visual quality.
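
The CDT module builds on the published CBAM attention block. A minimal sketch of CBAM's channel-then-spatial attention; the reduction ratio and 7x7 spatial kernel are CBAM's conventional defaults, not values taken from this paper:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention from pooled
    descriptors, then spatial attention from a 7x7 convolution."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # channel attention: shared MLP over avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention: 7x7 conv over channel-wise avg and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(64)(torch.randn(1, 64, 32, 32)).shape)   # [1, 64, 32, 32]
```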

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima;Yushuang Xu;Minmin Du;Meng Gao;Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.861-880 / 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architecture drive improvements in model segmentation performance. Existing convolutional neural networks suffer from oversimplified learned knowledge and high model complexity. To address this issue, we propose a road-scene semantic segmentation algorithm based on multi-task collaborative learning. Firstly, a depthwise separable atrous spatial pyramid pooling module is proposed to reduce model complexity. Secondly, a collaborative learning framework involving saliency detection is proposed, and the joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road and natural scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared with methods of excellent performance, the proposed method has significant advantages in segmenting fine targets and boundaries.
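
A joint loss weighted by homoscedastic uncertainty typically follows Kendall et al. (2018): each task loss is scaled by a learned precision, with a log-variance term as a regularizer. A minimal sketch for the two tasks mentioned (segmentation and saliency); the simplified weighting form and the stand-in task losses are assumptions, not this paper's exact formulation:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Multi-task loss with homoscedastic uncertainty (Kendall et al. 2018,
    simplified): total = sum_i exp(-s_i) * L_i + s_i, where the learned
    parameter s_i = log(sigma_i^2) balances the tasks."""
    def __init__(self, num_tasks=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for s, loss in zip(self.log_vars, losses):
            total = total + torch.exp(-s) * loss + s
        return total

# hypothetical use: combine a segmentation loss and a saliency loss
criterion = UncertaintyWeightedLoss(num_tasks=2)
seg_loss = torch.tensor(1.3)    # stand-ins for real per-task losses
sal_loss = torch.tensor(0.7)
print(criterion([seg_loss, sal_loss]))
```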