• Title/Summary/Keyword: convolution

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang; Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1552-1559 / 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets. A CNN generally has a structure in which convolution layers and fully connected layers are stacked, and its core is the convolution layer. The size of the kernels used for feature extraction, together with their number, which determines the depth of the feature map, fixes the number of learnable weight parameters of the CNN. These parameters are the main cause of the computational complexity and memory usage of the entire neural network; the most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier convolution neural network that performs the operation of the convolution layer in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). On the MNIST dataset, accuracy was similar to that of a general CNN, while the operation speed was 7.2% faster. In experiments using 1024x1024 images and kernels of various sizes, an average speedup of 19% was achieved.
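The speedup reported above comes from the convolution theorem: pointwise multiplication in the Fourier domain replaces spatial convolution. A minimal NumPy sketch of the idea (our illustration, not the paper's implementation; the zero-padding and crop sizes are the standard ones for linear convolution):

```python
import numpy as np

def conv2d_direct(image, kernel):
    """Direct 'valid' 2-D convolution (kernel flipped, as in signal processing)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

def conv2d_fft(image, kernel):
    """Same 'valid' convolution via the convolution theorem: FFT, multiply, inverse FFT."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    fh, fw = ih + kh - 1, iw + kw - 1      # size of the full linear convolution
    F = np.fft.rfft2(image, s=(fh, fw)) * np.fft.rfft2(kernel, s=(fh, fw))
    full = np.fft.irfft2(F, s=(fh, fw))
    return full[kh - 1:ih, kw - 1:iw]      # crop the 'full' result to the 'valid' region
```

For an n x n image and k x k kernel, the direct loop costs O(n^2 k^2) while the FFT route costs O(n^2 log n), which is why the gain grows with kernel size, consistent with the larger speedup the paper observed on 1024x1024 images.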

TRANSFORMS AND CONVOLUTIONS ON FUNCTION SPACE

  • Chang, Seung-Jun; Choi, Jae-Gil
    • Communications of the Korean Mathematical Society / v.24 no.3 / pp.397-413 / 2009
  • In this paper, for functionals of a generalized Brownian motion process, we show that the generalized Fourier-Feynman transform of the convolution product is a product of multiple transforms and that the conditional generalized Fourier-Feynman transform of the conditional convolution product is a product of multiple conditional transforms. This allows us to compute the (conditional) transform of the (conditional) convolution product without computing the (conditional) convolution product.
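The result generalizes the classical Fourier convolution theorem, which in its ordinary form reads

\[
(f * g)(x) = \int_{\mathbb{R}} f(y)\, g(x - y)\, dy,
\qquad
\widehat{f * g}(\xi) = \hat{f}(\xi)\, \hat{g}(\xi),
\]

so the transform of a convolution product is a product of transforms; the paper establishes the analogous identity for the (conditional) generalized Fourier-Feynman transform of functionals on function space.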

SHIFTING AND MODULATION FOR THE CONVOLUTION PRODUCT OF FUNCTIONALS IN A GENERALIZED FRESNEL CLASS

  • Kim, Byoung Soo; Park, Yeon Hee
    • Korean Journal of Mathematics / v.26 no.3 / pp.387-403 / 2018
  • Shifting, scaling and modulation properties for the convolution product of the Fourier-Feynman transform of functionals in a generalized Fresnel class ${\mathcal{F}}_{A1,A2}$ are given. These properties allow us to obtain the convolution product of new functionals from that of functionals whose convolution product is already known.

VISUALIZATION OF DISCRETE CONVOLUTION STRUCTURE USING TECHNOLOGY

  • Song, Keehong
    • Korean Journal of Mathematics / v.14 no.1 / pp.35-46 / 2006
  • The concept of convolution is fundamental in a wide variety of disciplines and applications, including probability, image processing, physics, and many more. The visualization of convolution in the continuous case is generally well established. On the other hand, the convolution structure embedded in the discrete case is often subtle, and its visualization is non-trivial. This paper develops CAS techniques for visualizing the logical structure in the concept of a discrete convolution.
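The discrete structure in question is the sum (a*b)[n] = sum over k of a[k]·b[n-k]. A small Python sketch (our illustration, not the paper's CAS code) that also shows the probability application mentioned in the abstract:

```python
def discrete_convolution(a, b):
    """Full discrete convolution: (a*b)[n] = sum over k of a[k] * b[n-k]."""
    n_out = len(a) + len(b) - 1
    out = [0.0] * n_out
    for n in range(n_out):
        for k in range(len(a)):
            if 0 <= n - k < len(b):  # only index b where it is defined
                out[n] += a[k] * b[n - k]
    return out

# Probability application: convolving two fair-die distributions yields
# the distribution of the sum of two dice.
die = [1 / 6] * 6
two_dice = discrete_convolution(die, die)
```

The bounds check on `n - k` is exactly the subtle indexing structure that makes the discrete case harder to see than the continuous integral.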

YEH CONVOLUTION OF WHITE NOISE FUNCTIONALS

  • Ji, Un Cig; Kim, Young Yi; Park, Yoon Jung
    • Journal of applied mathematics & informatics / v.31 no.5_6 / pp.825-834 / 2013
  • In this paper, we study the Yeh convolution of white noise functionals. We first introduce the notion of the Yeh convolution of test white noise functionals and prove a duality property of the Yeh convolution. By applying the dual object of the Yeh convolution, we study the Yeh convolution of generalized white noise functionals, which is a non-trivial extension. Finally, we study relations between the Yeh convolution and the Fourier-Gauss and Fourier-Mehler transforms.

New Approach to Optimize the Size of Convolution Mask in Convolutional Neural Networks

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information / v.21 no.1 / pp.1-8 / 2016
  • A convolutional neural network (CNN) consists of several pairs of convolution and subsampling layers, so it has more hidden layers than a multi-layer perceptron. With the increased number of layers, the size of the convolution mask ultimately determines the total number of weights in the CNN, because the mask is shared across input images. It is also an important factor that makes or breaks the CNN's learning. This paper therefore proposes a method for choosing the convolution mask size and the number of layers needed to train a CNN successfully. Through face recognition experiments with a large number of training examples, we found that the best convolution mask sizes are 5x5 and 7x7, regardless of the number of layers. In addition, a CNN with two pairs of convolution and subsampling layers performed best, just as a multi-layer perceptron with two hidden layers does.
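Why the mask size dominates the weight count can be seen by tallying the shared weights per layer. A short sketch (the channel counts 1 -> 6 -> 12 are our illustration, not values from the paper):

```python
def conv_weight_count(mask_size, channel_plan):
    """Learnable convolution weights for stacked convolution layers sharing a
    square mask of side `mask_size` (biases ignored). `channel_plan` lists the
    channel count entering and leaving each layer, e.g. [1, 6, 12]."""
    return sum(mask_size * mask_size * c_in * c_out
               for c_in, c_out in zip(channel_plan[:-1], channel_plan[1:]))

# Two convolution/subsampling pairs (the depth the paper found best), with
# illustrative channel counts (our assumption):
counts = {m: conv_weight_count(m, [1, 6, 12]) for m in (3, 5, 7, 9)}
```

The weight count grows quadratically with the mask side, which is why the mask size, not the layer count, dominates the total once the mask is shared across the input.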

Further Optimize MobileNetV2 with Channel-wise Squeeze and Excitation (채널간 압축과 해제를 통한 MobileNetV2 최적화)

  • Park, Jinho; Kim, Wonjun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.154-156 / 2021
  • Depth-wise separable convolution is well known as a powerful and effective alternative to standard convolution in environments with limited computing resources [1]. MobileNetV2 introduces the inverted residual block, which overcomes the loss incurred by depth-wise separable convolution, namely the lost opportunity to combine data across channels into new features, by placing point-wise convolutions (1×1 convolutions) at both ends of the depth-wise separable convolution [1]. However, 1×1 convolution is dependent on the number of channels, so as the network grows deeper it ends up becoming a bottleneck to building an efficient, lightweight network. This paper resolves the bottleneck by partially replacing the 1×1 convolution with a channel-wise squeeze and excitation (CSE) block.
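The channel dependence of the 1×1 convolution shows up directly in a cost count. A sketch comparing multiply-accumulate (MAC) costs (the layer sizes are illustrative assumptions, not values from the paper):

```python
def standard_conv_macs(k, c_in, c_out, h, w):
    """Multiply-accumulates for a standard k x k convolution on an h x w feature map."""
    return k * k * c_in * c_out * h * w

def depthwise_separable_macs(k, c_in, c_out, h, w):
    """Depth-wise k x k convolution per channel, then a 1x1 point-wise convolution."""
    depthwise = k * k * c_in * h * w
    pointwise = c_in * c_out * h * w  # grows with channel count: the bottleneck CSE targets
    return depthwise + pointwise

# Illustrative layer (our assumption): 3x3 kernel, 64 -> 64 channels, 56x56 map.
k, c_in, c_out, h, w = 3, 64, 64, 56, 56
ratio = depthwise_separable_macs(k, c_in, c_out, h, w) / standard_conv_macs(k, c_in, c_out, h, w)
# ratio equals 1/c_out + 1/k**2, so for wide layers the point-wise term dominates.
```

As the channel counts grow, `1/c_out` shrinks but the point-wise term's share of the separable cost approaches 100%, which is the bottleneck the CSE block is designed to relieve.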

Visualization of Convolution Operation Using Scalable Vector Graphics (SVG를 이용한 컨벌루션 연산의 시각화)

  • Kim, Yeong-Mi; Kang, Eui-Sung
    • The Journal of Korean Association of Computer Education / v.10 no.1 / pp.97-105 / 2007
  • In this paper, a visualization of the convolution operation implemented with scalable vector graphics (SVG) is presented. Convolution is one of the essential basic concepts in signal and image processing. However, it is difficult for students to understand the operation intuitively, since it is mainly presented through mathematical representation. We present SVG-based visualizations of the convolution operation and its applications, and analyze their effects through interviews. The results suggest that the proposed visualization can be applied effectively to learning the convolution operation and its applications.

Sound Field Effect Implementation Using Fast Algorithm (고속 알고리즘을 이용한 음장 효과 구현)

  • Son Sung Young; Seo Joung Il; Hahn Minsoo
    • MALSORI / no.47 / pp.85-96 / 2003
  • It is difficult to implement a sound field effect in real time using linear convolution in the time domain, because linear convolution requires many multiplication operations. In this paper, three ways to reduce the number of multiplications are introduced. First, linear convolution in the time domain is replaced with circular convolution in the frequency domain, so that the convolution is performed as multiplication. Second, one frame is divided into several frames, which reduces the multiplications required to transform from the time domain to the frequency domain. Finally, the QFT is used in place of the FFT. Together, the three methods greatly reduce the number of multiplications, and this reduction makes real-time implementation possible.
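The first two ideas, frequency-domain multiplication and frame splitting, correspond to the standard overlap-add method. A NumPy sketch under that reading (using the FFT where the paper uses the QFT; the frame length is an assumption):

```python
import numpy as np

def filter_overlap_add(signal, impulse_response, frame_len=256):
    """Linear convolution via frame splitting plus frequency-domain multiplication
    (overlap-add). Each frame is zero-padded so its circular convolution equals
    the linear one, and the overlapping frame tails are summed."""
    m = len(impulse_response)
    n_fft = frame_len + m - 1                      # padding avoids circular wraparound
    H = np.fft.rfft(impulse_response, n=n_fft)     # filter spectrum, computed once
    out = np.zeros(len(signal) + m - 1)
    for start in range(0, len(signal), frame_len):
        frame = signal[start:start + frame_len]
        y = np.fft.irfft(np.fft.rfft(frame, n=n_fft) * H, n=n_fft)
        seg = y[:len(frame) + m - 1]               # linear convolution of this frame
        out[start:start + len(seg)] += seg         # overlap-add the tail
    return out
```

Per output sample, this replaces the O(m) multiplications of direct convolution with O(log n_fft) work, which is the reduction that makes the real-time sound field effect feasible.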

Robust Multi-Hump Convolution Input Shaper for Variation of Parameter (파라메터 변화에 강인한 Multi-Hump Convolution 입력성형기 설계)

  • Park, Un-Hwan; Lee, Jae-Won
    • Journal of the Korean Society for Precision Engineering / v.18 no.5 / pp.112-119 / 2001
  • A variety of input shapers have been proposed to reduce the residual vibration of flexible structures. The multi-hump input shaper is known to be robust to parameter variations. However, the existing approach must solve increasingly complicated nonlinear simultaneous equations when additional constraints are imposed to improve the robustness of the shaper. In this paper, by proposing a graphical approach that uses the convolution of shapers, a multi-hump convolution input shaper can be designed even when constraints are added for further robustness. For a mass-damper-spring model, better performance is obtained with the proposed multi-hump convolution input shaper.
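The convolution-of-shapers idea rests on a simple property: convolving two impulse sequences multiplies amplitudes pairwise and adds impulse times. A sketch using the textbook zero-vibration (ZV) shaper (standard form, not the paper's derivation; the frequency and damping values are illustrative):

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Textbook zero-vibration (ZV) shaper for natural frequency wn [rad/s] and
    damping ratio zeta: two impulses, returned as (amplitudes, times)."""
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta ** 2))
    td = np.pi / (wn * np.sqrt(1 - zeta ** 2))   # half the damped period
    return np.array([1, K]) / (1 + K), np.array([0.0, td])

def convolve_shapers(s1, s2):
    """Convolution of two impulse-sequence shapers: amplitudes multiply pairwise,
    impulse times add pairwise. Unit amplitude sums are preserved."""
    (a1, t1), (a2, t2) = s1, s2
    return np.outer(a1, a2).ravel(), np.add.outer(t1, t2).ravel()

# Convolving a ZV shaper with itself: the two middle impulses coincide in time,
# merging into the 3-impulse ZVD shaper, which is more robust to frequency error.
zvd = convolve_shapers(zv_shaper(10.0, 0.05), zv_shaper(10.0, 0.05))
```

Because the amplitude sums multiply, convolving unit-sum shapers always yields a unit-sum shaper, which is what lets the multi-hump shaper be built graphically instead of by solving the full nonlinear constraint equations.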
