• Title/Abstract/Keyword: Atrous Convolution

Search results: 7

Fingertip Detection through Atrous Convolution and Grad-CAM

  • 노대철;김태영
    • 한국컴퓨터그래픽스학회논문지, Vol. 25, No. 5, pp. 11-20, 2019
  • With advances in deep learning, research on user-friendly interfaces for virtual and augmented reality applications has become active. To support a hand-based interface, this paper proposes a deep-learning-based fingertip detection method that tracks fingertip coordinates so that users can select virtual objects or write and draw in mid-air. Grad-CAM first crops the approximate fingertip region from the input image, and a convolutional neural network using atrous convolution is then applied to the cropped image to locate the fingertip. The method requires no separate annotation preprocessing and is simpler and easier to implement than existing object detection algorithms. To validate it, an Air-Writing application was implemented; with an average recognition rate of 81% and a processing time of 76 ms, writing in mid-air was smooth and free of perceptible latency, demonstrating real-time applicability.
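
The atrous (dilated) convolution at the core of this pipeline is available directly in standard frameworks. A minimal PyTorch sketch of a single atrous layer (channel counts and dilation rate are illustrative, not the authors' network):

```python
import torch
import torch.nn as nn

# A 3x3 atrous convolution: dilation=2 spaces the kernel taps apart, giving an
# effective 5x5 receptive field while keeping only 3x3 = 9 weights per filter.
# Channel counts here are illustrative, not the paper's configuration.
atrous = nn.Conv2d(in_channels=3, out_channels=32,
                   kernel_size=3, dilation=2, padding=2)

x = torch.randn(1, 3, 64, 64)   # stand-in for a Grad-CAM-cropped fingertip patch
y = atrous(x)
print(y.shape)                  # torch.Size([1, 32, 64, 64]); padding preserves size
```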

Pixel-based crack image segmentation in steel structures using atrous separable convolution neural network

  • Ta, Quoc-Bao;Pham, Quang-Quang;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Structural Monitoring and Maintenance, Vol. 9, No. 3, pp. 289-303, 2022
  • In this study, the impact of assigned pixel labels on the accuracy of crack image identification for steel structures is examined using an atrous separable convolution neural network (ASCNN). Firstly, images containing fatigue cracks collected from steel structures are classified into four datasets by assigning different pixel labels based on image features. Secondly, the DeepLab v3+ algorithm is used to determine optimal parameters of the ASCNN model by maximizing the average mean-intersection-over-union (mIoU) metric over the datasets. Thirdly, the ASCNN model is trained for various image sizes and hyper-parameters, such as the learning rule, learning rate, and number of epochs, and its optimal parameters are selected based on the average mIoU metric. Finally, the trained ASCNN model is evaluated on the 10% of images withheld from training. The results show that the ASCNN model can segment cracks and other objects in the captured images with an average mIoU of 0.716.
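
The mIoU metric used above for model selection can be computed per class from integer label maps. A minimal sketch, assuming NumPy arrays of class indices (not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                    # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example with classes 0 (background) and 1 (crack):
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, 2))         # (1/2 + 2/3) / 2 = 0.583...
```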

Semantic crack-image identification framework for steel structures using atrous convolution-based Deeplabv3+ Network

  • Ta, Quoc-Bao;Dang, Ngoc-Loi;Kim, Yoon-Chul;Kam, Hyeon-Dong;Kim, Jeong-Tae
    • Smart Structures and Systems, Vol. 30, No. 1, pp. 17-34, 2022
  • For steel structures, fatigue cracks are critical damage induced by long-term cyclic loading and distortion effects. Vision-based crack detection can be a solution for ensuring structural integrity and performance through continuous monitoring and non-destructive assessment. A critical issue is distinguishing cracks from other features in captured images, which may contain complex backgrounds such as handwriting and marks made to record crack patterns and lengths during periodic visual inspections. This study presents a parametric study on image-based crack identification for orthotropic steel bridge decks using captured images with complicated backgrounds. Firstly, a framework for vision-based crack segmentation using the atrous convolution-based Deeplabv3+ network (ACDN) is designed. Secondly, features in crack images are labeled to build three databanks that account for objects in the backgrounds. Thirdly, evaluation metrics computed from the trained ACDN models are used to assess the effects of background obstacles on crack detection results. Finally, various training parameters, including image sizes, hyper-parameters, and the number of training images, are optimized for the ACDN crack detection model. The results demonstrate that fatigue cracks can be identified by the trained ACDN models and that crack-detection accuracy improves when the training parameters are optimized, supporting the applicability of the vision-based technique for early detection of tiny fatigue cracks in steel structures.
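
For readers who want a starting point, torchvision ships a DeepLabv3 implementation whose head is built on atrous convolutions. A hedged sketch of a two-class (background/crack) setup follows, with untrained weights and none of the ACDN-specific training choices:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes: background and crack. Weights are untrained here; the paper's
# ACDN training setup (image sizes, hyper-parameters) is not reproduced.
model = deeplabv3_resnet50(weights=None, num_classes=2)
model.eval()

image = torch.randn(1, 3, 513, 513)      # placeholder bridge-deck image
with torch.no_grad():
    logits = model(image)["out"]         # shape (1, 2, 513, 513)
mask = logits.argmax(dim=1)              # per-pixel class index: 0 or 1
print(mask.shape)                        # torch.Size([1, 513, 513])
```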

Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning

  • 신석용;이상훈;한현호
    • 융합정보논문지, Vol. 11, No. 10, pp. 45-52, 2021
  • This paper proposes the Atrous Residual U-Net (AR-UNet) to improve the accuracy of U-Net-based semantic segmentation. U-Net is widely used in fields such as medical image analysis, autonomous driving, and remote sensing. The original U-Net has few convolutional layers in its encoder, so the extracted features are insufficient; since these features are essential for classifying object categories, the shortage degrades segmentation accuracy. To address this, AR-UNet applies residual learning and ASPP in the encoder. Residual learning improves feature extraction and is effective in preventing the feature loss and vanishing gradients caused by successive convolutions, while ASPP extracts additional features without reducing the resolution of the feature maps. Experiments on the Cityscapes dataset verified the effectiveness of AR-UNet: it produced better segmentation results than the original U-Net and can therefore contribute to applications where accuracy is critical.
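
The ASPP component applied in the AR-UNet encoder, i.e., parallel atrous convolutions at several dilation rates fused by a 1x1 convolution, can be sketched as follows (rates and channel counts are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions at several rates, concatenated and fused.
    Captures multi-scale context without reducing feature-map resolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 64, 32, 32)
print(ASPP(64, 128)(x).shape)   # torch.Size([1, 128, 32, 32]); resolution preserved
```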

COVID-19 Lung CT Image Recognition

  • 수징제;김강철
    • 한국전자통신학회논문지, Vol. 17, No. 3, pp. 529-536, 2022
  • Over the past two years, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has affected more and more people. To segment and classify COVID-19 lung CT images, this paper proposes a new U-Net convolutional neural network with a mixed-mode feature extraction scheme composed of a sub-coding block (SCB), atrous spatial pyramid pooling (ASPP), and attention gates (AG). FCN, U-Net, and U-Net-SCB models are designed for comparison with the proposed model. On COVID-19 CT scan image data, the proposed U-Net-MMFE achieved an excellent Dice segmentation score of 94.79%, outperforming the other segmentation models, with an atrous rate of 12 and the Adam optimization algorithm.
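
The Dice score reported above measures overlap between predicted and ground-truth masks. A minimal sketch for binary masks (not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

# Toy example: 2 of the foreground pixels agree, 3 foreground pixels per mask
pred   = np.array([[1, 1], [1, 0]])
target = np.array([[1, 1], [0, 1]])
print(dice_score(pred, target))   # 2*2 / (3+3) = 0.666...
```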

MLSE-Net: Multi-level Semantic Enriched Network for Medical Image Segmentation

  • Di Gai;Heng Luo;Jing He;Pengxiang Su;Zheng Huang;Song Zhang;Zhijun Tu
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 9, pp. 2458-2482, 2023
  • Medical image segmentation techniques based on convolutional neural networks tend to suffer from parameter redundancy during feature extraction and unsatisfactory target localization, which results in segmentations too inaccurate to assist doctors in diagnosis. In this paper, we propose a multi-level semantic-rich encoding-decoding network consisting of a Pooling-Conv-Former (PCFormer) module and a Cbam-Dilated-Transformer (CDT) module. The PCFormer module tackles the parameter explosion of the conventional transformer and compensates for the feature loss incurred during down-sampling. In the CDT module, the Cbam attention module is adopted to implicitly highlight feature regions by blending attention mechanisms, and the Dilated convolution-Concat (DCC) module is designed as a parallel concatenation of multiple atrous convolution blocks to explicitly enlarge the receptive field. In addition, a MultiHead Attention-DwConv-Transformer (MDTransformer) module is utilized to clearly distinguish the target region from the background. Extensive experiments on the GlaS, SIIM-ACR, ISIC, and LGG medical image segmentation datasets demonstrate that the proposed network outperforms existing advanced methods in both objective evaluation and subjective visual quality.
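
The DCC module's parallel concatenation of atrous convolution blocks follows the same pattern as the ASPP sketch shown earlier. The CBAM-style attention used alongside it can be sketched generically as follows (a sketch of the published CBAM design, not the MLSE-Net variant):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM-style attention: channel attention followed by spatial attention."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP for channel attention
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))        # channel weights from avg pool
        mx  = self.mlp(x.amax(dim=(2, 3)))        # ...and from max pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s)) # spatial attention map

x = torch.randn(1, 64, 32, 32)
print(CBAM(64)(x).shape)                          # torch.Size([1, 64, 32, 32])
```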

Saliency-Assisted Collaborative Learning Network for Road Scene Semantic Segmentation

  • Haifeng Sima;Yushuang Xu;Minmin Du;Meng Gao;Jing Wang
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 17, No. 3, pp. 861-880, 2023
  • Semantic segmentation of road scenes is a key technology for autonomous driving, and improvements in convolutional neural network architecture drive improvements in model segmentation performance. Existing convolutional neural networks, however, tend to learn oversimplified knowledge while remaining structurally complex. To address this, we propose a road-scene semantic segmentation algorithm based on multi-task collaborative learning. Firstly, a depthwise separable convolution atrous spatial pyramid pooling is proposed to reduce model complexity. Secondly, a collaborative learning framework involving saliency detection is proposed, and a joint loss function is defined using homoscedastic uncertainty to fit the new learning model. Experiments are conducted on road-scene and natural-scene datasets. The proposed method achieves 70.94% and 64.90% mIoU on the Cityscapes and PASCAL VOC 2012 datasets, respectively. Qualitatively, compared to methods with excellent performance, the proposed method shows clear advantages in segmenting fine targets and boundaries.
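
The depthwise separable atrous convolution underlying the proposed lightweight ASPP factorizes a standard atrous convolution into a per-channel (depthwise) atrous convolution and a 1x1 pointwise convolution. A minimal sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableAtrous(nn.Module):
    """Atrous depthwise convolution followed by a pointwise (1x1) convolution.
    Far fewer parameters than a standard atrous convolution of the same shape."""
    def __init__(self, in_ch, out_ch, rate):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=rate, dilation=rate, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 32, 32)
print(DepthwiseSeparableAtrous(64, 128, rate=6)(x).shape)  # (1, 128, 32, 32)
```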