• Title/Summary/Keyword: Feature pyramid network


A Study on Lightweight Model with Attention Process for Efficient Object Detection (효율적인 객체 검출을 위해 Attention Process를 적용한 경량화 모델에 대한 연구)

  • Park, Chan-Soo;Lee, Sang-Hun;Han, Hyun-Ho
    • Journal of Digital Convergence / v.19 no.5 / pp.307-313 / 2021
  • In this paper, a lightweight network with fewer parameters than existing object detection methods is proposed. Current detection models have greatly increased network complexity to improve accuracy. The proposed network therefore uses EfficientNet as its feature-extraction network, and the subsequent layers are arranged in a pyramid structure so that both low-level detailed features and high-level semantic features can be used. An attention process is applied between the pyramid levels to suppress noise that is unnecessary for prediction. All convolution operations in the network are replaced with depth-wise and point-wise convolutions to minimize computation. The proposed network was trained and evaluated on the PASCAL VOC dataset. In the experiments, the fused features proved robust for various objects after the refinement process. Compared with CNN-based detection models, detection accuracy improves with a small amount of computation. Adjusting the anchor ratios according to object size remains future work.
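The depth-wise plus point-wise substitution described in this abstract is a standard way to cut parameters; a minimal PyTorch sketch (channel sizes and layer names are illustrative, not taken from the paper) could look like this:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Replaces a standard 3x3 convolution with a depth-wise 3x3 convolution
    followed by a point-wise 1x1 convolution to reduce parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a 256-channel pyramid feature map
x = torch.randn(1, 256, 32, 32)
y = DepthwiseSeparableConv(256, 256)(x)
print(y.shape)  # torch.Size([1, 256, 32, 32])
```

At this channel width, the depth-wise/point-wise pair needs roughly an order of magnitude fewer multiply-adds than a full 3x3 convolution, which is the source of the lightweighting the paper describes.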

One-step deep learning-based method for pixel-level detection of fine cracks in steel girder images

  • Li, Zhihang;Huang, Mengqi;Ji, Pengxuan;Zhu, Huamei;Zhang, Qianbing
    • Smart Structures and Systems / v.29 no.1 / pp.153-166 / 2022
  • Identifying fine cracks in steel bridge facilities is a challenging task in structural health monitoring (SHM). This study proposed an end-to-end crack image segmentation framework based on a one-step Convolutional Neural Network (CNN) for pixel-level object recognition with high accuracy. To address the particular challenges of detecting small objects against complex backgrounds, effort went into selecting a loss function suited to the imbalanced samples and into modifying modules to improve generalization on complicated images. Specifically, loss functions were compared among the Binary Cross Entropy (BCE), Focal, Tversky and Dice losses, with the last three specialized for biased sample distributions. Structural modifications with dilated convolution, Spatial Pyramid Pooling (SPP) and a Feature Pyramid Network (FPN) were also made to form a new backbone termed CrackDet. Models with various loss functions and feature-extraction modules were trained on crack images and tested on full-scale images collected from steel box girders. The CNN model incorporating the classic U-Net as its backbone and Dice loss as its loss function achieved the highest mean Intersection-over-Union (mIoU) of 0.7571 on full-scale images. In contrast, the best performance on cropped crack images, an mIoU of 0.7670, was achieved by combining CrackDet with Dice loss.
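The Dice loss that performed best on full-scale images is straightforward to write down; the following is a generic PyTorch sketch for binary crack masks, not the authors' exact implementation:

```python
import torch

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss for binary (crack / background) segmentation.
    logits:  raw network output, shape (N, 1, H, W)
    targets: binary ground-truth mask, same shape."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()

# Example usage with a sparse mask, as is typical for fine cracks
logits = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.95).float()
print(dice_loss(logits, mask).item())
```

Because the loss is computed from the overlap ratio rather than per-pixel error, the few crack pixels are not drowned out by the background class, which is why Dice-style losses suit the imbalanced samples mentioned above.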

A Target Detection Algorithm based on Single Shot Detector (Single Shot Detector 기반 타깃 검출 알고리즘)

  • Feng, Yuanlin;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.358-361 / 2021
  • To improve the accuracy of small-target detection more effectively, this paper proposes an improved Single Shot Detector (SSD) target detection and recognition method based on CSPDarknet53, which introduces a lightweight ECA attention mechanism and a Feature Pyramid Network (FPN). First, the original SSD backbone network is replaced with CSPDarknet53 to enhance the learning ability of the network. Then, a lightweight ECA attention mechanism is added to the basic convolution block to optimize the network. Finally, FPN is used to gradually fuse the multi-scale feature maps used for detection in the SSD, from the deep layers to the shallow layers of the network, to improve localization and classification accuracy. Experiments show that the proposed algorithm achieves better detection accuracy, especially for small targets.
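For reference, ECA-style channel attention (as introduced in ECA-Net, which this paper adopts) can be sketched in a few lines of PyTorch; the kernel size and feature-map shape below are illustrative rather than the paper's settings:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling followed by a
    1-D convolution across channels, with no dimensionality reduction."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                    # (N, C) global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1-D conv over the channel axis
        w = self.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # re-weight each channel

x = torch.randn(1, 256, 38, 38)   # e.g., one SSD detection feature map
print(ECA()(x).shape)
```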

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • The Journal of the Korea Contents Association / v.18 no.11 / pp.447-454 / 2018
  • This paper proposes a depth-image generation algorithm for stereo images using a deep learning model composed of a CNN (convolutional neural network). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each view of the stereo pair, and a depth learning unit, which learns the disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each view through the Xception module and the ASPP (Atrous Spatial Pyramid Pooling) module, which are composed of 2D CNN layers. Then, the per-view feature maps are stacked into a 3D volume along the disparity axis, and the depth image is estimated after passing through the depth learning unit, which learns depth-estimation weights with a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
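Stacking per-view feature maps into a 3D volume before the 3D CNN is closely related to the cost-volume construction used in many stereo networks; the PyTorch sketch below illustrates that idea only, with disparity range and tensor shapes chosen for the example rather than taken from the paper:

```python
import torch

def build_cost_volume(feat_left, feat_right, max_disp):
    """Stack left/right feature maps over candidate disparities to form a
    feature volume of shape (N, 2C, max_disp, H, W), which can then be
    processed by 3-D convolutions in a depth-learning stage."""
    n, c, h, w = feat_left.shape
    volume = feat_left.new_zeros(n, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d] = feat_left
            volume[:, c:, d] = feat_right
        else:
            volume[:, :c, d, :, d:] = feat_left[:, :, :, d:]
            volume[:, c:, d, :, d:] = feat_right[:, :, :, :-d]
    return volume

vol = build_cost_volume(torch.randn(1, 32, 48, 64), torch.randn(1, 32, 48, 64), max_disp=24)
print(vol.shape)  # torch.Size([1, 64, 24, 48, 64])
```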

Single Shot Detector for Detecting Clickable Object in Mobile Device Screen (모바일 디바이스 화면의 클릭 가능한 객체 탐지를 위한 싱글 샷 디텍터)

  • Jo, Min-Seok;Chun, Hye-won;Han, Seong-Soo;Jeong, Chang-Sung
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.29-34 / 2022
  • We propose a novel network architecture and build a dataset for recognizing clickable objects on mobile device screens. The data were collected from clickable objects on mobile device screens of various resolutions, and a total of 24,937 annotations were subdivided into seven categories: text, edit text, image, button, region, status bar, and navigation bar. We use the Deconvolutional Single Shot Detector as a baseline, with a backbone network containing Squeeze-and-Excitation blocks, the Single Shot Detector layer structure to derive inference results, and a Feature Pyramid Network structure. We also extract features efficiently by changing the network's input aspect ratio from the existing 1:1 to 1:2, similar to a mobile device screen. In experiments on the dataset we built, mean average precision improved by up to 101% compared to the baseline.
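The Squeeze-and-Excitation blocks added to the backbone follow the standard SE design; a minimal PyTorch sketch, with an illustrative channel count and an input whose 1:2 width-to-height ratio mimics a portrait phone screen, is shown below:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: squeeze spatial information with global
    average pooling, then excite (re-weight) channels with a small MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))   # (N, C) channel weights
        return x * w[:, :, None, None]

# Input with a 1:2 width:height ratio, similar to a portrait phone screen
x = torch.randn(1, 64, 320, 160)
print(SEBlock(64)(x).shape)
```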

ASPPMVSNet: A high-receptive-field multiview stereo network for dense three-dimensional reconstruction

  • Saleh Saeed;Sungjun Lee;Yongju Cho;Unsang Park
    • ETRI Journal / v.44 no.6 / pp.1034-1046 / 2022
  • The learning-based multiview stereo (MVS) methods for three-dimensional (3D) reconstruction generally use 3D volumes for depth inference. The quality of the reconstructed depth maps and the corresponding point clouds is directly influenced by the spatial resolution of the 3D volume. Consequently, these methods produce point clouds with sparse local regions because of the lack of the memory required to encode a high volume of information. Here, we apply the atrous spatial pyramid pooling (ASPP) module in MVS methods to obtain dense feature maps with multiscale, long-range, contextual information using high receptive fields. For a given 3D volume with the same spatial resolution as that in the MVS methods, the dense feature maps from the ASPP module encoded with superior information can produce dense point clouds without a high memory footprint. Furthermore, we propose a 3D loss for training the MVS networks, which improves the predicted depth values by 24.44%. The ASPP module provides state-of-the-art qualitative results by constructing relatively dense point clouds, which improves the DTU MVS dataset benchmarks by 2.25% compared with those achieved in the previous MVS methods.
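The ASPP module discussed above combines parallel dilated (atrous) convolutions with different rates to enlarge the receptive field without losing resolution; the following minimal PyTorch sketch uses the common DeepLab-style rates, which are assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions with
    different rates capture multiscale, long-range context while keeping
    the spatial resolution of the feature map unchanged."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 32, 64, 80)
print(ASPP(32, 32)(x).shape)  # spatial size preserved: (1, 32, 64, 80)
```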

Enhancement of MSFC-Based Multi-Scale Features Compression Network with Bottom-UP MSFF in VCM (VCM 의 바텀-업 MSFF 를 이용한 MSFC 기반 멀티-스케일 특징 압축 네트워크 개선)

  • Dong-Ha Kim;Gyu-Woong Han;Jun-Seok Cha;Jae-Gon Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.116-118 / 2022
  • MPEG-VCM (Video Coding for Machines) standardization is proceeding in two tracks: Track 1, which compresses features extracted from the input image/video, and Track 2, which compresses the input image/video directly. This paper presents an improved version of an MSFC-based compression model that efficiently compresses the multi-scale features extracted from the FPN (Feature Pyramid Network) of Detectron2, the vision-task network used in Track 1. Whereas the existing compression model reduces resolution and compresses a single-scale feature map, the proposed model is equipped with a bottom-up MSFF that builds the single-scale feature map by merging the low-resolution feature maps into the high-resolution feature map in a bottom-up structure. The proposed method improves BD-rate by 1 to 2.7% in terms of BPP-mAP performance over the existing model, and shows a BD-rate gain of up to -85.94% compared with the VCM image anchor.
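As an illustration of the idea only (the paper's actual MSFF uses learned fusion layers and its own bottom-up ordering), merging the lower-resolution FPN levels into the highest-resolution one to obtain a single-scale map might be sketched as follows, with layer shapes chosen arbitrarily:

```python
import torch
import torch.nn.functional as F

def fuse_to_single_scale(p2, p3, p4, p5):
    """Illustrative fusion of multi-scale FPN maps {p2..p5} into one
    single-scale map before encoding: resize every lower-resolution
    level to the p2 grid and sum them."""
    target = p2.shape[-2:]                       # highest-resolution level
    fused = p2.clone()
    for p in (p3, p4, p5):                       # lower-resolution levels
        fused = fused + F.interpolate(p, size=target, mode="bilinear",
                                      align_corners=False)
    return fused

p2 = torch.randn(1, 256, 200, 304)
p3, p4, p5 = (torch.randn(1, 256, 200 // s, 304 // s) for s in (2, 4, 8))
print(fuse_to_single_scale(p2, p3, p4, p5).shape)  # (1, 256, 200, 304)
```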


Research and Optimization of Face Detection Algorithm Based on MTCNN Model in Complex Environment (복잡한 환경에서 MTCNN 모델 기반 얼굴 검출 알고리즘 개선 연구)

  • Fu, Yumei;Kim, Minyoung;Jang, Jong-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.50-56 / 2020
  • With the rapid development of deep neural network theory and applied research, the effectiveness of face detection has improved. However, because of the computational complexity of deep neural networks and the high complexity of the detection environment, detecting faces quickly and accurately remains the main problem. This paper is based on the relatively simple MTCNN model, using the public FDDB (Face Detection Data Set and Benchmark), LFW (Labeled Faces in the Wild) and FaceScrub datasets as training samples. While organizing and introducing the MTCNN (Multi-Task Cascaded Convolutional Neural Network) model, it explores how to improve training speed and performance at the same time. In this paper, dynamic image pyramid technology is used in place of the traditional image pyramid to segment samples, and the OHEM (online hard example mining) function of the MTCNN model is removed during training, so as to improve training speed.
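For context, the traditional image pyramid that the paper replaces with a dynamic variant is simply a geometric series of rescaled images fed to MTCNN's first-stage network; the OpenCV sketch below uses the commonly cited 0.709 scale factor and 12-pixel minimum, and the file name is hypothetical:

```python
import cv2

def image_pyramid(img, min_size=12, factor=0.709):
    """Build a scale pyramid for sliding-window face proposals: repeatedly
    shrink the image by `factor` until the shorter side would fall below
    the detector's minimum face size."""
    pyramid, scale = [], 1.0
    h, w = img.shape[:2]
    while min(h, w) * scale >= min_size:
        resized = cv2.resize(img, (int(w * scale), int(h * scale)))
        pyramid.append((scale, resized))
        scale *= factor
    return pyramid

img = cv2.imread("face.jpg")          # illustrative path
if img is not None:
    print(len(image_pyramid(img)), "pyramid levels")
```

A dynamic pyramid, by contrast, adapts the set of scales to the input instead of always generating the full geometric series, which is where the training-speed gain comes from.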

Compression of Multiscale Features of FPN for VCM (VCM 을 위한 FPN 다중 스케일 특징 압축)

  • Kim, Dong-Ha;Yoon, Yong-Uk;Lee, Jooyoung;Jeong, Se-Yoon;Kim, Jae-Gon;Jeong, Dae-Gwon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.143-145 / 2022
  • MPEG-VCM (Video Coding for Machines) standardization is proceeding in two tracks: Track 1, which compresses features of the input video, and Track 2, which compresses the input video directly. This paper proposes an MSFC (Multi-Scale Feature Compression) structure that uses VVC to compress the multi-scale feature maps extracted from the Detectron2 FPN (Feature Pyramid Network), corresponding to VCM Track 1. Starting from the existing structure that combines the multi-scale features for encoding and decoding, the proposed MSFC additionally reduces the resolution of the feature map before compression. The proposed method shows better BPP-mAP performance than the image anchor of VCM Track 2, with a BD-rate gain of up to -84.98%.
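Before a feature map can be handed to a standard video codec such as VVC, it has to be arranged as integer-valued frames; the sketch below tiles channels into one plane and quantizes to 10 bits purely as an illustration, and the normalization and packing used in the actual MSFC pipeline may differ:

```python
import torch

def feature_to_10bit_frame(feat):
    """Tile the C channels of a feature map into one monochrome plane and
    quantize it to 10-bit integers, as a stand-in for codec pre-processing."""
    n, c, h, w = feat.shape
    cols = int(c ** 0.5)
    rows = (c + cols - 1) // cols
    plane = feat.new_zeros(n, rows * h, cols * w)
    for i in range(c):
        r, col = divmod(i, cols)
        plane[:, r * h:(r + 1) * h, col * w:(col + 1) * w] = feat[:, i]
    lo, hi = plane.min(), plane.max()
    return torch.round((plane - lo) / (hi - lo + 1e-8) * 1023).to(torch.int16)

frame = feature_to_10bit_frame(torch.randn(1, 16, 32, 32))
print(frame.shape, frame.max().item())  # (1, 128, 128), values <= 1023
```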


DP-LinkNet: A convolutional network for historical document image binarization

  • Xiong, Wei;Jia, Xiuhong;Yang, Dichun;Ai, Meihui;Li, Lirong;Wang, Song
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1778-1797 / 2021
  • Document image binarization is an important pre-processing step in document analysis and archiving. The state-of-the-art models for document image binarization are variants of encoder-decoder architectures, such as FCN (fully convolutional network) and U-Net. Despite their success, they still suffer from three limitations: (1) reduced feature map resolution due to consecutive strided pooling or convolutions, (2) multiple scales of target objects, and (3) reduced localization accuracy due to the built-in invariance of deep convolutional neural networks (DCNNs). To overcome these three challenges, we propose an improved semantic segmentation model, referred to as DP-LinkNet, which adopts the D-LinkNet architecture as its backbone, with the proposed hybrid dilated convolution (HDC) and spatial pyramid pooling (SPP) modules between the encoder and the decoder. Extensive experiments are conducted on recent document image binarization competition (DIBCO) and handwritten document image binarization competition (H-DIBCO) benchmark datasets. Results show that our proposed DP-LinkNet outperforms other state-of-the-art techniques by a large margin. Our implementation and the pre-trained models are available at https://github.com/beargolden/DP-LinkNet.
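The hybrid dilated convolution (HDC) module mentioned above stacks dilated convolutions whose rates share no common factor, so the enlarged receptive field samples the input densely instead of producing a gridding pattern; the small PyTorch sketch below uses the common HDC rates 1, 2, 5, which may not match DP-LinkNet exactly:

```python
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    """Hybrid Dilated Convolution: a stack of 3x3 convolutions with
    co-prime dilation rates that widen the receptive field without
    reducing the feature-map resolution."""
    def __init__(self, channels, rates=(1, 2, 5)):
        super().__init__()
        self.layers = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ) for r in rates
        ])

    def forward(self, x):
        return self.layers(x)

x = torch.randn(1, 64, 128, 128)
print(HDCBlock(64)(x).shape)  # spatial size preserved
```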