• Title/Summary/Keyword: convolution methods

Search results: 269

Guess-then-Reduce Methods for Convolution Modular Lattices (순환 법 격자에 대한 추정 후 축소 기법)

  • Han Daewan;Hong Jin;Yeom Yongjin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.15 no.2
    • /
    • pp.95-103
    • /
    • 2005
  • Convolution modular lattices appeared in the analysis of the NTRU public key cryptosystem. We present three guess-then-reduce methods on convolution modular lattices and apply them to practical parameters of NTRU. At present, our methods do not significantly affect the security of these parameter sets. However, they have room for improvement and can be used to estimate more closely the security of systems related to convolution modular lattices.
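
The lattice underlying this line of analysis is the standard convolution modular lattice built from the NTRU public key. A minimal sketch of that basis construction follows; the guess-then-reduce steps of the paper are not reproduced, and the toy parameters are illustrative only.

```python
import numpy as np

def ntru_convolution_lattice(h, q):
    """Standard 2N x 2N convolution modular lattice basis [[I, H], [0, q*I]],
    where H is the circulant matrix of the public-key polynomial h
    (illustrative sketch, not the paper's guess-then-reduce algorithm)."""
    N = len(h)
    H = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(N):
            # Row i of H holds the coefficients of x^i * h(x) mod (x^N - 1).
            H[i, (i + j) % N] = h[j]
    top = np.hstack([np.eye(N, dtype=int), H])
    bottom = np.hstack([np.zeros((N, N), dtype=int), q * np.eye(N, dtype=int)])
    return np.vstack([top, bottom])

# Toy (insecure) parameters, for shape only.
basis = ntru_convolution_lattice(h=[3, 1, 4, 1, 5], q=32)
print(basis.shape)  # (10, 10)
```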

Real-Time Sound Field Effect Implementation Using Block Filtering and QFT (Block Filtering과 QFT를 이용한 실시간 음장 효과구현)

  • Sohn Sung-Yong;Seo Jeongil;Hahn Minsoo
    • MALSORI
    • /
    • no.51
    • /
    • pp.85-98
    • /
    • 2004
  • It is almost impossible to generate the sound field effect (SFE) in real time with time-domain linear convolution because of its large multiplication requirement. To solve this, three methods for reducing the number of multiplication operations are introduced in this paper. Firstly, the time-domain linear convolution is replaced with a frequency-domain circular convolution; in other words, the linear convolution result can be derived from that of the circular convolution. This technique reduces the number of multiplication operations remarkably. Secondly, a subframe concept is introduced, i.e., one original frame is divided into several subframes. The FFT is then executed for each subframe and, as a result, the number of multiplication operations can be reduced further. Finally, the QFT is used instead of the FFT. By combining the above three methods into our final SFE generation algorithm, the number of computations is reduced sufficiently and real-time SFE generation becomes possible on a general PC.

  • PDF
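
The core trick in the entry above — recovering a linear convolution from zero-padded frequency-domain circular convolutions computed per subframe — can be sketched with an overlap-add loop. The QFT stage of the paper is not shown; frame length and FFT size below are illustrative.

```python
import numpy as np

def block_fft_convolve(signal, impulse_response, frame_len=1024):
    """Overlap-add convolution: each frame is zero-padded so the
    frequency-domain circular convolution equals the linear one."""
    m = len(impulse_response)
    n_fft = 1
    while n_fft < frame_len + m - 1:   # next power of two that avoids wrap-around
        n_fft *= 2
    H = np.fft.rfft(impulse_response, n_fft)
    out = np.zeros(len(signal) + m - 1)
    for start in range(0, len(signal), frame_len):
        frame = signal[start:start + frame_len]
        y = np.fft.irfft(np.fft.rfft(frame, n_fft) * H, n_fft)
        n = min(n_fft, len(out) - start)
        out[start:start + n] += y[:n]
    return out

# Sanity check against direct time-domain convolution.
x, h = np.random.randn(5000), np.random.randn(300)
print(np.allclose(block_fft_convolve(x, h), np.convolve(x, h)))  # True
```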

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • The existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. Firstly, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features. A spatial convolution neural network extracts the spatial information features of each static expression image, while the dynamic information feature is extracted from the optical flow of multiple expression images by a temporal convolution neural network. Then, the spatiotemporal features learned by the two deep convolution neural networks are fused by multiplication. Finally, the fused features are fed into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of the other methods compared.
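
The multiplicative fusion step described above is an element-wise product of the two stream features before the SVM. A minimal sketch follows, with random vectors standing in for the spatial-CNN and temporal-CNN features; the dimensions and the SVM kernel are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-ins for features from the two streams (spatial CNN on static
# frames, temporal CNN on optical flow); real features would come from
# the trained networks.
n_clips, feat_dim, n_classes = 200, 128, 6
spatial_feats = np.random.rand(n_clips, feat_dim)
temporal_feats = np.random.rand(n_clips, feat_dim)
labels = np.random.randint(0, n_classes, size=n_clips)

# Multiplicative feature fusion: element-wise product of the two streams.
fused = spatial_feats * temporal_feats

clf = SVC(kernel="rbf")
clf.fit(fused, labels)
print(clf.predict(fused[:5]))
```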

Discrete singular convolution for buckling analyses of plates and columns

  • Civalek, Omer;Yavas, Altug
    • Structural Engineering and Mechanics
    • /
    • v.29 no.3
    • /
    • pp.279-288
    • /
    • 2008
  • In the present study, the discrete singular convolution (DSC) method is developed for the buckling analysis of columns and thin plates having different geometries. The regularized Shannon delta (RSD) kernel is selected as the singular convolution kernel to illustrate the present algorithm. In the proposed approach, the derivatives in both the governing equations and the boundary conditions are discretized by the DSC method. The results obtained by the DSC method are compared with those obtained by other numerical and analytical methods.
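
The DSC idea is to approximate a function (and, in the paper, the derivatives appearing in the buckling equations) as a truncated convolution sum with a regularized Shannon delta kernel. A minimal sketch of the kernel and the function-approximation sum follows; the grid spacing, regularization parameter, and truncation width are illustrative, and the derivative kernels used for the governing equations are not shown.

```python
import numpy as np

def rsd_kernel(x, dx, r=3.2):
    """Regularized Shannon delta kernel: sinc regularized by a Gaussian.
    dx is the grid spacing and sigma = r * dx (r is a bandwidth parameter)."""
    sigma = r * dx
    return np.sinc(x / dx) * np.exp(-x**2 / (2.0 * sigma**2))

# DSC approximation of a smooth function from grid samples,
# f(x) ~ sum_k delta(x - x_k) f(x_k), truncated to 2M+1 neighbours.
dx, M = 0.05, 32
grid = np.arange(-2.0, 2.0 + dx, dx)
samples = np.sin(np.pi * grid)

x0 = 0.613                                   # off-grid evaluation point
k0 = int(round((x0 - grid[0]) / dx))
idx = np.arange(max(0, k0 - M), min(len(grid), k0 + M + 1))
approx = np.sum(rsd_kernel(x0 - grid[idx], dx) * samples[idx])
print(approx, np.sin(np.pi * x0))            # the two values nearly coincide
```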

Traffic Flow Prediction Model Based on Spatio-Temporal Dilated Graph Convolution

  • Sun, Xiufang;Li, Jianbo;Lv, Zhiqiang;Dong, Chuanhao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3598-3614
    • /
    • 2020
  • With the increase of motor vehicles and tourism demand, traffic problems such as congestion, safety accidents, and insufficient allocation of traffic resources gradually appear. Facing these challenges, a Spatio-Temporal Dilated Graph Convolutional Network (STDGCN) model is proposed to help extract highly nonlinear and complex characteristics and accurately predict future traffic flow. In particular, we model the traffic network as an undirected graph, on which graph convolutions are built to extract spatial features. Furthermore, a dilated convolution is deployed into the graph convolution for capturing multi-scale contextual messages. The proposed STDGCN integrates the dilated convolution into the graph convolution, which realizes the extraction of the spatial and temporal characteristics of traffic flow data as well as features of road occupancy. To observe the performance of the proposed model, we compare it with four rival models using four evaluation indicators. The experimental results show STDGCN's effectiveness: the prediction accuracy is improved by 17% in comparison with traditional prediction methods on various real-world traffic datasets.
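
The combination described above — a graph convolution over the road network followed by a dilated convolution along the time axis — can be sketched as a single spatio-temporal block. The layer shapes, activation choices, and einsum-based aggregation below are assumptions, not the paper's exact STDGCN.

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    """Illustrative block: graph convolution over road-sensor nodes,
    then a dilated 1-D convolution over time."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.theta = nn.Linear(in_ch, out_ch)            # graph-conv feature transform
        self.tconv = nn.Conv1d(out_ch, out_ch, kernel_size=3,
                               dilation=dilation, padding=dilation)

    def forward(self, x, a_hat):
        # x: (batch, time, nodes, features); a_hat: normalized adjacency (nodes, nodes)
        x = torch.einsum("nm,btmf->btnf", a_hat, x)       # neighbourhood aggregation
        x = torch.relu(self.theta(x))
        b, t, n, f = x.shape
        x = x.permute(0, 2, 3, 1).reshape(b * n, f, t)    # fold nodes into the batch
        x = torch.relu(self.tconv(x))                     # dilated temporal convolution
        return x.reshape(b, n, f, t).permute(0, 3, 1, 2)

block = STBlock(in_ch=1, out_ch=16)
x = torch.randn(8, 12, 207, 1)                            # 12 time steps, 207 sensors
a_hat = torch.eye(207)                                    # placeholder adjacency
print(block(x, a_hat).shape)                              # torch.Size([8, 12, 207, 16])
```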

Fast Convolution Method using Psycho-acoustic Filters in Sound Reverberator (잔향 생성기에서 심리 음향 필터를 이용한 고속 컨벌루션 방법)

  • Shin, Min-Cheol;Wang, Se-Myung
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2007.11a
    • /
    • pp.1037-1041
    • /
    • 2007
  • With the advent of sound field simulators, many sound fields have been reproduced by obtaining the impulse responses of specific acoustic spaces such as famous concert halls and opera houses. This sound field reproduction is done by linear convolution between the input sound signal and the impulse response of a certain acoustic space. However, the conventional finite-impulse-response based linear convolution makes real-time implementation of a sound field generator impossible due to the large computational burden. This paper introduces a fast convolution method that exploits perceptual redundancy in the processed signals, namely the input audio signal and the room impulse response. Temporal and spectral psycho-acoustic filters considering masking effects are implemented in the proposed convolution structure, which reduces the computational burden enough for real-time implementation of a sound field generator. The conventional convolutions are compared with the proposed one in terms of computational burden and sound quality. In the proposed method, a considerable reduction in the computational burden was achieved with acceptable changes in sound quality.

  • PDF
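
The saving in this entry (and in the journal version listed next) comes from dropping perceptually masked content before the spectral multiplication of a block convolution. The sketch below prunes input-spectrum bins against a crude level threshold; the actual temporal and spectral masking models of the papers are not reproduced, and the threshold rule is an assumption.

```python
import numpy as np

def masked_block_convolve(frame, rir_spectrum, n_fft, threshold_db=-60.0):
    """One block of FFT convolution where input-spectrum bins below a crude
    masking threshold are zeroed before the spectral multiplication."""
    spec = np.fft.rfft(frame, n_fft)
    mag = np.abs(spec)
    floor = mag.max() * 10.0 ** (threshold_db / 20.0)
    spec[mag < floor] = 0.0              # masked bins contribute nothing
    kept = np.count_nonzero(spec)        # fewer bins -> fewer multiplications
    return np.fft.irfft(spec * rir_spectrum, n_fft), kept

n_fft = 4096
rir = np.random.randn(2048) * np.exp(-np.arange(2048) / 300.0)   # toy room response
rir_spec = np.fft.rfft(rir, n_fft)
t = np.arange(2048) / 44100.0
frame = np.sin(2 * np.pi * 440 * t) + 1e-3 * np.random.randn(2048)
out, kept = masked_block_convolve(frame, rir_spec, n_fft)
print(out.shape, kept, "of", n_fft // 2 + 1, "bins kept")
```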

Fast Convolution Method Using Real-time Masking Effects in Sound Reverberator (잔향 생성기에서 실시간 마스킹 효과를 이용한 고속 컨벌루션 방법)

  • Shin, Min-Cheol;Wang, Se-Myung
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.18 no.2
    • /
    • pp.231-237
    • /
    • 2008
  • With the advent of sound field simulators, many sound fields have been reproduced by obtaining the impulse responses of specific acoustic spaces such as famous concert halls and opera houses. This sound field reproduction is done by linear convolution between the input sound signal and the impulse response of a certain acoustic space. However, the conventional finite-impulse-response based linear convolution makes real-time implementation of a sound field generator impossible due to the large computational burden. This paper introduces a fast convolution method that exploits perceptual redundancy in the processed signals, namely the input audio signal and the room impulse response. Temporal and spectral real-time masking blocks are implemented in the proposed convolution structure, which reduces the computational burden enough for real-time implementation of a sound field generator. The conventional convolutions are compared with the proposed one in terms of computational burden and sound quality. In the proposed method, a considerable reduction in the computational burden was achieved with acceptable changes in sound quality.

Modified Cubic Convolution Interpolation for Low Computational Complexity

  • Jun, Young-Hyun;Yun, Jong-Ho;Choi, Myung-Ryul
    • Korean Information Display Society: Conference Proceedings (한국정보디스플레이학회:학술대회논문집)
    • /
    • 2006.08a
    • /
    • pp.1259-1262
    • /
    • 2006
  • In this paper, we propose a modified cubic convolution interpolation for the enlargement or reduction of digital images using a pixel difference value. The proposed method has low complexity: the number of weight multiplications needed to compute one pixel of the scaled image is seven, whereas cubic convolution interpolation needs sixteen. We use the linear function of the cubic convolution and the pixel difference value to select between interpolation methods. The proposed method is compared with the conventional one in terms of computational complexity and image quality. The simulation results show that the proposed method has less computational complexity than cubic convolution interpolation.

  • PDF
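
For reference, conventional cubic convolution interpolation weights the four nearest samples along each axis with Keys' kernel, which is where the sixteen weight multiplications per output pixel come from in 2-D scaling. A minimal 1-D sketch of that conventional baseline follows; the paper's seven-multiplication modification is not reproduced.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys' cubic convolution interpolation kernel; a = -0.5 is the usual choice."""
    x = np.abs(x)
    out = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    out[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    out[m2] = a * x[m2]**3 - 5*a * x[m2]**2 + 8*a * x[m2] - 4*a
    return out

def cubic_interpolate_1d(samples, t):
    """Interpolate a 1-D signal at fractional position t from its four
    nearest samples (the per-axis building block of 2-D image scaling)."""
    i = int(np.floor(t))
    positions = np.arange(i - 1, i + 3)
    idx = np.clip(positions, 0, len(samples) - 1)
    weights = keys_kernel(t - positions.astype(float))
    return np.dot(weights, samples[idx])

sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])   # samples of x^2
print(cubic_interpolate_1d(sig, 2.5))               # 6.25, exact for a quadratic
```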

Depth Map Extraction from the Single Image Using Pix2Pix Model (Pix2Pix 모델을 활용한 단일 영상의 깊이맵 추출)

  • Gang, Su Myung;Lee, Joon Jae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.5
    • /
    • pp.547-557
    • /
    • 2019
  • To extract a depth map from a single image, a number of CNN-based deep learning methods have been applied in recent research. In this study, the GAN structure of Pix2Pix is maintained; this model converges well because it has both a generator and a discriminator. However, the convolutions in this model take a long time to compute, so we change the convolution form in the generator to depthwise convolution to improve the speed while preserving the result. Thus, the seven down-sizing convolutional hidden layers in the generator U-Net are changed to depthwise convolutions. This type of convolution decreases the number of parameters and also speeds up computation. The proposed model shows depth map prediction results similar to those of the existing structure, and the inference time is decreased by 64%.
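
Replacing a standard down-sizing convolution with a depthwise form cuts the parameter count sharply while keeping the output shape. A depthwise-separable substitution is sketched below; the channel counts and kernel size follow common Pix2Pix-style settings and are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

in_ch, out_ch, k = 128, 256, 4

# Standard down-sizing convolution, as in the original Pix2Pix generator.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=2, padding=1)

# Depthwise-separable replacement: a per-channel spatial convolution
# followed by a 1x1 pointwise convolution.
depthwise = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, stride=2, padding=1, groups=in_ch),
    nn.Conv2d(in_ch, out_ch, kernel_size=1),
)

def count_params(m):
    return sum(p.numel() for p in m.parameters())

x = torch.randn(1, in_ch, 64, 64)
print(standard(x).shape, depthwise(x).shape)             # identical output shapes
print(count_params(standard), count_params(depthwise))   # ~524k vs ~35k parameters
```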

A Proposal of Shuffle Graph Convolutional Network for Skeleton-based Action Recognition

  • Jang, Sungjun;Bae, Han Byeol;Lee, HeanSung;Lee, Sangyoun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.14 no.4
    • /
    • pp.314-322
    • /
    • 2021
  • Skeleton-based action recognition has attracted considerable attention in human action recognition. Recent methods for skeleton-based action recognition employ spatiotemporal graph convolutional networks (GCNs) and achieve remarkable performance. However, most of them incur heavy computational complexity to achieve robust action recognition. To solve this problem, we propose a shuffle graph convolutional network (SGCN), a lightweight graph convolutional network that uses pointwise group convolution rather than pointwise convolution to reduce computational cost. Our SGCN is composed of spatial and temporal GCNs. The spatial shuffle GCN contains pointwise group convolution and a part shuffle module which enhances local and global information between correlated joints. In addition, the temporal shuffle GCN contains depthwise convolution to maintain a large receptive field. Our model achieves comparable performance with the lowest computational cost and exceeds the baseline by 0.3% and 1.2% on the NTU RGB+D and NTU RGB+D 120 datasets, respectively.
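
The two ingredients named in the abstract — pointwise group convolution and a shuffle that mixes information back across the groups — can be sketched in a few lines. The group count, channel sizes, and tensor layout below are illustrative, not the paper's exact SGCN configuration.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups so information mixes between the
    groups of the preceding group convolution."""
    b, c, *rest = x.shape
    return x.reshape(b, groups, c // groups, *rest).transpose(1, 2).reshape(b, c, *rest)

# Pointwise *group* convolution: a 1x1 convolution split into groups, which
# divides the multiply count by the number of groups compared with a plain
# pointwise convolution.
groups, in_ch, out_ch = 4, 64, 64
pw_group = nn.Conv2d(in_ch, out_ch, kernel_size=1, groups=groups)
pw_plain = nn.Conv2d(in_ch, out_ch, kernel_size=1)

x = torch.randn(2, in_ch, 25, 300)             # e.g. (batch, channels, joints, frames)
y = channel_shuffle(pw_group(x), groups)
print(y.shape)                                  # torch.Size([2, 64, 25, 300])
print(sum(p.numel() for p in pw_group.parameters()),
      sum(p.numel() for p in pw_plain.parameters()))   # 1088 vs 4160 parameters
```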