• Title/Summary/Keyword: Fully convolutional network

Search Results: 38

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang; Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1552-1559 / 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and serve as representative core models. CNNs extract and learn features from large training datasets. In general, a CNN stacks convolution layers and fully connected layers, and the convolution layer is its core. The kernel size used for feature extraction and the number of kernels, which determines the depth of the feature map, set the number of learnable weight parameters of the CNN. These parameters are the main cause of the network's computational complexity and memory usage; the most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier Convolutional Neural Network that performs the convolution layer's operation in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). On the MNIST dataset, accuracy was similar to that of a standard CNN while operation speed improved by 7.2%. In experiments using 1024x1024 images and kernels of various sizes, an average speedup of 19% was achieved.
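
The speedup described above rests on the convolution theorem: convolution in the spatial domain becomes element-wise multiplication in the Fourier domain, so for large images and kernels the FFT route is cheaper. A minimal NumPy sketch of that identity (illustrative sizes, not the authors' implementation):

```python
import numpy as np
from scipy.signal import convolve2d

def fft_conv2d(image, kernel):
    # Zero-pad both operands to the full linear-convolution size so the
    # circular convolution implied by the FFT matches linear convolution.
    fh = image.shape[0] + kernel.shape[0] - 1
    fw = image.shape[1] + kernel.shape[1] - 1
    spec = np.fft.rfft2(image, s=(fh, fw)) * np.fft.rfft2(kernel, s=(fh, fw))
    return np.fft.irfft2(spec, s=(fh, fw))

rng = np.random.default_rng(0)
x, k = rng.standard_normal((256, 256)), rng.standard_normal((15, 15))
assert np.allclose(fft_conv2d(x, k), convolve2d(x, k, mode="full"))
```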

A Study on Improving Speed of Interesting Region Detection Based on Fully Convolutional Network (Fully Convolutional Network 기반 관심 영역 검출 기법의 속도 개선 연구)

  • Hwang, Hyun-Su; Jung, Jin-woo; Kim, Yong-Hwan; Choe, Yoon-Sik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.322-325 / 2018
  • Region-of-interest detection in images is a technique that has long been used in image processing and computer vision applications. In particular, spurred by the rapid recent progress of deep neural network research, studies on region-of-interest detection using deep neural networks are being actively conducted. The Fully Convolutional Network (FCN) is a deep neural network architecture originally proposed for semantic segmentation via dense prediction. Applied to region-of-interest detection, an FCN can still achieve performance comparable to existing detection techniques. However, an FCN uses many convolution layers, and the number of weights grows steeply as a result, so the time complexity of detection is very high. This paper therefore tackles the FCN's detection-time complexity from the perspective of the convolution-layer weights and proposes a method that adjusts them to speed up region-of-interest detection. By appropriately adjusting the convolution-layer weights, detection speed was improved by up to about 20.5% on the MSRA10K dataset without significantly degrading detection accuracy.
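
The abstract traces detection latency to the number of convolution-layer weights. As a generic illustration of that relationship (not the authors' specific weight-adjustment scheme), the sketch below shrinks a block's channel width by a made-up ratio and times both variants:

```python
import time
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

full = nn.Sequential(block(3, 64), block(64, 64))   # original width
slim = nn.Sequential(block(3, 48), block(48, 48))   # hypothetical reduction

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    for name, net in (("full", full), ("slim", slim)):
        t0 = time.perf_counter()
        for _ in range(10):
            net(x)
        print(f"{name}: {(time.perf_counter() - t0) / 10 * 1e3:.1f} ms/pass")
```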


Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity (딥러닝 기반 3차원 라이다의 반사율 세기 신호를 이용한 흑백 영상 생성 기법)

  • Kim, Hyun-Koo; Yoo, Kook-Yeol; Park, Ju H.; Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.1 / pp.1-9 / 2019
  • In this paper, we propose a method for generating a 2D gray image from 3D LiDAR reflection intensity. The proposed method uses a Fully Convolutional Network (FCN) to generate the gray image from the 2D reflection intensity map projected from the LiDAR 3D intensity data. Both the encoder and the decoder of the FCN are configured with several convolution blocks in a symmetric fashion. Each convolution block consists of a convolution layer with a 3×3 filter, a batch normalization layer, and an activation function. The performance of the proposed architecture is empirically evaluated by varying the depth of the convolution blocks. The well-known KITTI dataset, covering various scenarios, is used for training and performance evaluation. The simulation results show that the proposed method achieves improvements of 8.56 dB in peak signal-to-noise ratio and 0.33 in structural similarity index measure over conventional interpolation methods such as inverse distance weighting and nearest neighbor. The proposed method could serve as an assistance tool in night-time driving systems for autonomous vehicles.
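
The per-block recipe in the abstract (3×3 convolution, batch normalization, activation) maps directly to code. Below is a sketch of a symmetric encoder-decoder assembled from such blocks; the depth and channel widths are assumptions, not the paper's tuned values:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # One block as described in the abstract: 3x3 conv, batch norm, activation
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SymmetricFCN(nn.Module):
    def __init__(self, widths=(16, 32, 64)):         # illustrative widths
        super().__init__()
        enc, prev = [], 1                            # 1-channel intensity map
        for w in widths:
            enc += [conv_block(prev, w), nn.MaxPool2d(2)]
            prev = w
        dec = []
        for w in reversed(widths):
            dec += [nn.Upsample(scale_factor=2), conv_block(prev, w)]
            prev = w
        self.body = nn.Sequential(*enc, *dec, nn.Conv2d(prev, 1, 1))

    def forward(self, x):
        return self.body(x)                          # 1-channel gray image

out = SymmetricFCN()(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```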

Improved Sliding Shapes for Instance Segmentation of Amodal 3D Object

  • Lin, Jinhua; Yao, Yu; Wang, Yanjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5555-5567 / 2018
  • State-of-the-art instance segmentation networks are successful at generating 2D segmentation masks for the region proposals with the highest classification scores, yet the 3D object segmentation task has been limited to geocentric embeddings or the Sliding Shapes detector. To this end, we propose an amodal 3D instance segmentation network called A3IS-CNN, which extends the Deep Sliding Shapes detector to amodal 3D instance segmentation by adding a new 3D ConvNet branch called the A3IS-branch. The A3IS-branch, which takes a 3D amodal ROI as input and outputs 3D semantic instances, is a fully convolutional network (FCN) that shares convolutional layers with the existing 3D RPN, which takes a 3D scene as input and outputs 3D amodal proposals. Because the two branches share computation, our 3D instance segmentation network adds only a small overhead of 0.25 fps to Deep Sliding Shapes while providing both accurate detection and point-to-point segmentation of instances. Experiments show that our network improves running time by 10% to 50% over the state-of-the-art network and outperforms state-of-the-art 3D detectors by at least 16.1 AP.
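
The efficiency claim hinges on the two branches reusing one set of convolutional features. A schematic sketch of that shared-backbone pattern (layer sizes and head outputs are assumptions, not the A3IS-CNN architecture):

```python
import torch
import torch.nn as nn

class TwoBranch3DNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One 3D backbone, computed once and shared by both branches
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.proposal_head = nn.Conv3d(32, 6, 1)  # e.g. 3D box parameters
        self.segment_head = nn.Conv3d(32, 2, 1)   # per-voxel mask logits

    def forward(self, volume):
        features = self.backbone(volume)          # shared computation
        return self.proposal_head(features), self.segment_head(features)

proposals, masks = TwoBranch3DNet()(torch.randn(1, 1, 32, 32, 32))
print(proposals.shape, masks.shape)
```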

Design of new CNN structure with internal FC layer (내부 FC층을 갖는 새로운 CNN 구조의 설계)

  • Park, Hee-mun; Park, Sung-chan; Hwang, Kwang-bok; Choi, Young-kiu; Park, Jin-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.466-467 / 2018
  • Recently, artificial intelligence has been applied to various fields such as image recognition, speech recognition, and natural language processing, and interest in deep learning technology is increasing. The Convolutional Neural Network (CNN), one of the most representative deep learning algorithms, has strong advantages in image recognition and classification and is widely used in many fields. In this paper, we propose a new network structure that transforms the general CNN structure, which typically consists of convolution, ReLU, and pooling layers. Specifically, we construct a new network by adding a fully connected layer inside the general CNN structure. This modification is intended to improve learning and accuracy on the convolved images by incorporating the generalization capability that is an advantage of fully connected neural networks.
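
A sketch of the proposed modification under stated assumptions: a fully connected layer inserted between two convolution stages so that globally mixed features are convolved again. All sizes are illustrative, chosen for 28×28 inputs:

```python
import torch
import torch.nn as nn

class InternalFCNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))   # 28 -> 14
        # The "internal" FC layer: mixes all spatial positions globally
        self.internal_fc = nn.Linear(8 * 14 * 14, 8 * 14 * 14)
        self.conv2 = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1),
                                   nn.ReLU(), nn.MaxPool2d(2))   # 14 -> 7
        self.head = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        b, c, h, w = x.shape
        x = self.internal_fc(x.flatten(1)).view(b, c, h, w)
        x = self.conv2(x)                # convolve the mixed features again
        return self.head(x.flatten(1))

logits = InternalFCNet()(torch.randn(2, 1, 28, 28))
print(logits.shape)  # torch.Size([2, 10])
```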


Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping; Qin, Zheng; Wang, Guolong; Zhang, Huidi; Huang, Kai; Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or in a transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model has the advantages of efficiency and robustness while demonstrating state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance under both subjective visual perception and objective assessment metrics.
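
A minimal sketch of the described pipeline, assuming made-up channel counts and depths: two convolutional streams, a fusion layer, and deconvolutional layers that restore image resolution:

```python
import torch
import torch.nn as nn

def stream():
    # One convolutional stream; both streams share the same layout
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    )

class TwoStreamFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.stream_a, self.stream_b = stream(), stream()
        self.fuse = nn.Conv2d(64, 32, 1)          # fusion layer
        self.deconv = nn.Sequential(              # back to image resolution
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, focus_a, focus_b):
        fa, fb = self.stream_a(focus_a), self.stream_b(focus_b)
        return self.deconv(self.fuse(torch.cat([fa, fb], dim=1)))

fused = TwoStreamFusion()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```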

Implementation of handwritten digit recognition CNN structure using GPGPU and Combined Layer (GPGPU와 Combined Layer를 이용한 필기체 숫자인식 CNN구조 구현)

  • Lee, Sangil; Nam, Kihun; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.165-169 / 2017
  • The CNN (Convolutional Neural Network) is one of the machine learning algorithms that shows superior performance in image recognition and classification. A CNN is simple in structure but computationally heavy and time-consuming. In this paper, we therefore parallelize the convolution, pooling, and fully connected layers, which consume most of the processing time in a CNN, using the SIMT (Single Instruction Multiple Thread) structure of GPGPU (General-Purpose computing on Graphics Processing Units). We also improve performance by reducing the number of memory accesses: the output of the convolution layer is consumed directly by the pooling layer instead of being stored first. We verify the approach on the MNIST dataset and confirm that the proposed CNN structure performs 12.38% better than the existing structure.
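
The memory-access saving comes from consuming each convolution output immediately in the pooling stage instead of materializing the full feature map. A scalar NumPy sketch of that combined conv-plus-maxpool idea (not the paper's GPGPU kernel):

```python
import numpy as np

def combined_conv_maxpool(x, k, pool=2):
    # Each pooled output is produced straight from the inputs: the pool*pool
    # convolution responses in a pooling window are computed and reduced
    # immediately, so the full convolution feature map is never stored.
    # (Cross-correlation, as CNN "convolution" layers actually compute.)
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1   # valid conv size
    out = np.empty((oh // pool, ow // pool))
    for pi in range(out.shape[0]):
        for pj in range(out.shape[1]):
            out[pi, pj] = max(
                np.sum(x[pi * pool + di: pi * pool + di + kh,
                         pj * pool + dj: pj * pool + dj + kw] * k)
                for di in range(pool) for dj in range(pool))
    return out

x = np.random.default_rng(1).standard_normal((8, 8))
k = np.random.default_rng(2).standard_normal((3, 3))
print(combined_conv_maxpool(x, k).shape)  # (3, 3)
```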

Deconvolution Pixel Layer Based Semantic Segmentation for Street View Images (디컨볼루션 픽셀층 기반의 도로 이미지의 의미론적 분할)

  • Wahid, Abdul; Lee, Hyo Jong
    • Annual Conference of KIPS / 2019.05a / pp.515-518 / 2019
  • Semantic segmentation has remained a challenging problem in the field of computer vision. Given the immense power of Convolutional Neural Network (CNN) models, many complex problems in computer vision have been solved, and with their help we have witnessed prolific results over time. Semantic segmentation is the challenge of classifying the pixels of an image into categories. We propose a convolutional neural network model that uses a fully convolutional network with deconvolutional pixel layers. The goal is to create a hierarchy of features in which the fully convolutional model does the primary learning and the deconvolutional model then visually segments the target image. The proposed approach creates direct links among adjacent pixels in the resulting feature maps and preserves spatial features such as corners and edges, adding accuracy to the resulting outputs. We test our algorithm on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) street view dataset. Our method achieves an mIoU accuracy of 92.04%.
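
For reference, the reported figure is mean intersection-over-union. A small sketch of how mIoU is typically computed from integer label maps:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    # Per-class IoU = |pred==c AND target==c| / |pred==c OR target==c|,
    # averaged over the classes present in either map.
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 2]])
target = np.array([[0, 1], [1, 2]])
print(mean_iou(pred, target, n_classes=3))  # (1/2 + 1/2 + 1) / 3 ≈ 0.667
```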

A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks (Word2Vec과 앙상블 합성곱 신경망을 활용한 영화추천 시스템의 정확도 개선에 관한 연구)

  • Kang, Boo-Sik
    • Journal of Digital Convergence / v.17 no.1 / pp.123-130 / 2019
  • One of the most commonly used web recommendation techniques is collaborative filtering, and many studies on it have suggested ways to improve accuracy. This study proposes a movie recommendation method using Word2Vec and an ensemble of convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. These sentences are fed into Word2Vec to obtain user vectors and movie vectors. The user vectors are input to a user convolution model and the movie vectors to a movie convolution model, and the two convolution models are linked to a fully connected neural network whose output layer produces forecasts of the user's movie ratings. Experimental results showed that the accuracy of the proposed technique improved on both the conventional collaborative filtering technique and the Word2Vec-plus-deep-neural-network technique proposed in a similar study.
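
A sketch of the first stage under one plausible reading of the abstract, assuming the gensim 4 API and toy rating data: sentences of movies (per user) embed movies, and sentences of users (per movie) embed users:

```python
from gensim.models import Word2Vec

# Made-up (user, movie) rating pairs for illustration only
ratings = [("u1", "m1"), ("u1", "m2"), ("u2", "m2"), ("u2", "m3")]

user_sentences, movie_sentences = {}, {}
for user, movie in ratings:
    user_sentences.setdefault(user, []).append(movie)    # movies per user
    movie_sentences.setdefault(movie, []).append(user)   # users per movie

# Movies that co-occur in a user's history land near each other, and
# users who rate the same movies land near each other.
movie_emb = Word2Vec(list(user_sentences.values()),
                     vector_size=32, min_count=1, seed=0)
user_emb = Word2Vec(list(movie_sentences.values()),
                    vector_size=32, min_count=1, seed=0)

print(movie_emb.wv["m2"].shape, user_emb.wv["u1"].shape)  # (32,) (32,)
```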

Multimode-fiber Speckle Image Reconstruction Based on Multiscale Convolution and a Multidimensional Attention Mechanism

  • Kai Liu; Leihong Zhang; Runchu Xu; Dawei Zhang; Haima Yang; Quan Sun
    • Current Optics and Photonics / v.8 no.5 / pp.463-471 / 2024
  • Multimode fibers (MMFs) possess high information throughput and a small core diameter, making them highly promising for applications such as endoscopy and communication. However, modal dispersion hinders the direct use of MMFs for image transmission. By training neural networks on time-series waveforms collected from MMFs, it is possible to reconstruct images, transforming blurred speckle patterns into recognizable images. This paper proposes a fully convolutional neural network model, MSMDFNet, for image restoration in MMFs. The network employs an encoder-decoder architecture and integrates multiscale convolutional modules in the decoding layers to enlarge the receptive field for feature extraction. Additionally, attention mechanisms are incorporated along both the spatial and channel dimensions to improve the network's feature-perception capabilities. The algorithm demonstrates excellent performance on MNIST and Fashion-MNIST datasets collected through MMFs, showing significant improvements in metrics such as SSIM.
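
A sketch of two ingredients named in the abstract, multiscale convolution plus channel attention; the kernel sizes and the SE-style gating are assumptions, not the MSMDFNet design (the paper also uses spatial attention, omitted here):

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Parallel branches with different kernel sizes widen the
        # effective receptive field at one layer.
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, k, padding=k // 2) for k in (1, 3, 5))
        self.merge = nn.Conv2d(3 * ch, ch, 1)
        self.attn = nn.Sequential(                 # channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid())

    def forward(self, x):
        multi = self.merge(torch.cat([b(x) for b in self.branches], dim=1))
        return multi * self.attn(multi)            # reweight channels

y = MultiScaleBlock()(torch.randn(1, 32, 28, 28))
print(y.shape)  # torch.Size([1, 32, 28, 28])
```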