• Title/Summary/Keyword: Parallel convolutional neural network


Efficient Implementation of Convolutional Neural Network Using CUDA (CUDA를 이용한 Convolutional Neural Network의 효율적인 구현)

  • Ki, Cheol-Min; Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.6 / pp.1143-1148 / 2017
  • Artificial Intelligence and Deep Learning are currently attracting wide attention, and these technologies are being applied to many fields. Among the many algorithms in Artificial Intelligence, the Convolutional Neural Network is a particularly effective one. A Convolutional Neural Network adds convolution layers to a multi-layer neural network. When a Convolutional Neural Network is used on a small amount of data, or when the layer structure is simple, speed is not a concern. However, training takes a long time when the training data are large and the layer structure is complicated; in these cases GPU-based parallel processing is frequently needed. In this paper, we implement a Convolutional Neural Network using CUDA and show that its training is faster and more efficient than training with several other frameworks or programs. (An illustrative sketch of the parallelism such an implementation exploits is given below.)
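The abstract does not reproduce the CUDA kernels themselves, so the NumPy sketch below is only an assumption-based illustration of the parallelism such an implementation exploits: every output element of a convolution layer is an independent dot product, so a CUDA kernel can assign one thread per output element. The function names and toy shapes are hypothetical, not the authors' code.

```python
# Hypothetical illustration (not the paper's code): the output elements of a
# convolution layer are mutually independent, which is what a CUDA implementation
# exploits by assigning one GPU thread per output element.
import numpy as np

def conv2d_output_element(x, w, b, oc, oy, ox):
    """Work for a single output element (what one CUDA thread would compute)."""
    kh, kw = w.shape[2], w.shape[3]
    patch = x[:, oy:oy + kh, ox:ox + kw]        # (in_ch, kh, kw) receptive field
    return float(np.sum(patch * w[oc]) + b[oc])

def conv2d_forward(x, w, b):
    """Valid convolution; the three loops are embarrassingly parallel."""
    in_ch, H, W = x.shape
    out_ch, _, kh, kw = w.shape
    y = np.empty((out_ch, H - kh + 1, W - kw + 1))
    for oc in range(out_ch):                    # each iteration is independent ...
        for oy in range(y.shape[1]):
            for ox in range(y.shape[2]):        # ... so a GPU can run them all at once
                y[oc, oy, ox] = conv2d_output_element(x, w, b, oc, oy, ox)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))              # toy input: 3 channels, 8x8
w = rng.standard_normal((4, 3, 3, 3))           # 4 filters of size 3x3
b = rng.standard_normal(4)
print(conv2d_forward(x, w, b).shape)            # (4, 6, 6)
```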

An Implementation of a Convolutional Accelerator based on a GPGPU for a Deep Learning (Deep Learning을 위한 GPGPU 기반 Convolution 가속기 구현)

  • Jeon, Hee-Kyeong; Lee, Kwang-yeob; Kim, Chi-yong
    • Journal of IKEEE / v.20 no.3 / pp.303-306 / 2016
  • In this paper, we propose a method to accelerate convolutional neural networks by utilizing a GPGPU. A convolutional neural network is a type of neural network that learns features of images, and it is well suited to image processing tasks that require learning from large amounts of data. The convolutional layers of a conventional CNN require a large number of multiplications, which makes real-time operation difficult in an embedded environment. In this paper, we reduce the number of multiplications through the Winograd convolution algorithm and process the convolutions in parallel on a SIMT-based GPGPU. Experiments conducted with ModelSim and TestDrive show that processing time improved by about 17% compared to conventional convolution. (A minimal sketch of the Winograd idea follows below.)
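The paper's GPGPU/SIMT mapping is not given in the abstract, so the sketch below only illustrates the Winograd minimal-filtering idea it builds on, in its simplest 1-D form F(2,3): two outputs of a 3-tap correlation are computed from four inputs with 4 multiplications instead of 6. Function names and the random test values are placeholders.

```python
# Hypothetical sketch of 1-D Winograd F(2,3) minimal filtering: 2 outputs of a
# 3-tap correlation from 4 inputs using 4 multiplications instead of 6.
import numpy as np

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 correlation outputs."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])   # only m1..m4 are multiplications

def direct_f23(d, g):
    """Reference: plain sliding-window correlation (6 multiplications)."""
    return np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                     d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])

rng = np.random.default_rng(1)
d, g = rng.standard_normal(4), rng.standard_normal(3)
assert np.allclose(winograd_f23(d, g), direct_f23(d, g))
print(winograd_f23(d, g))
```

In practice the filter-side terms such as (g0 + g1 + g2)/2 are precomputed once per filter, so only the data-dependent multiplications remain in the inner loop; the 2-D F(2x2, 3x3) variant commonly used for CNN layers reduces 36 multiplications per output tile to 16.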

Development and Speed Comparison of Convolutional Neural Network Using CUDA (CUDA를 이용한 Convolutional Neural Network의 구현 및 속도 비교)

  • Ki, Cheol-min; Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.335-338 / 2017
  • Artificial Intelligence and Deep Learning are currently attracting wide attention, and these technologies are being applied to many fields. Among the many algorithms in Artificial Intelligence, the Convolutional Neural Network is a particularly effective one. A Convolutional Neural Network adds convolution layers, which extract features through convolution operations, to a general neural network. When a Convolutional Neural Network is used on a small amount of data, or when the layer structure is simple, speed is not a concern. However, training takes a long time when the training data are large and the layer structure is complicated, so GPU-based parallel processing is widely used. In this paper, we implement a Convolutional Neural Network using CUDA and show that its training is faster and more efficient than a CPU-based implementation.

Graph Convolutional - Network Architecture Search : Network architecture search Using Graph Convolution Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Su-Youn Choi; Jong-Youel Park
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes a neural architecture search model that uses graph convolutional neural networks. Because deep learning models are trained as black boxes, it is difficult to verify whether a hand-designed model has a structure with optimized performance. A neural architecture search model consists of a controller network that creates a model and the convolutional neural network that is generated. Conventional architecture search models use a recurrent neural network as the controller; in this paper we propose GC-NAS, which uses graph convolutional neural networks instead to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth, and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel based on the depth information. Because depth information is reflected, the search space is wider, and because the search is conducted in parallel with depth information, the purpose of each part of the search space is clear, so the proposed model is judged to be theoretically superior in structure to existing architecture search models. GC-NAS is expected to address the high-dimensional time axis and the limited spatial search range of recurrent neural networks in existing architecture search models through its graph convolutional neural network block and graph generation algorithm. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search. (A minimal sketch of the underlying graph-convolution operation is given below.)
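The abstract does not specify the Layer Extraction Block or Hyper Parameter Prediction Block in code, so the sketch below only shows the basic graph-convolution operation that such blocks build on, H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W), with a toy 4-node graph; the graph, feature sizes, and function name are assumptions.

```python
# Minimal graph-convolution layer over a toy 4-node graph (placeholder sizes).
import numpy as np

def graph_conv(A, H, W):
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))     # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

A = np.array([[0, 1, 0, 0],                      # adjacency of a 4-node chain
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(2)
H = rng.standard_normal((4, 8))                  # node features (e.g., per-layer encodings)
W = rng.standard_normal((8, 16))                 # learnable weights
print(graph_conv(A, H, W).shape)                 # (4, 16)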

A Method for accelerating training of Convolutional Neural Network (합성곱 신경망의 학습 가속화를 위한 방법)

  • Choi, Se Jin; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.171-175 / 2017
  • Training a convolutional neural network (CNN) entails many iterative computations, so methods that accelerate training through parallel processing on a GPGPU are being actively researched. In this paper, the operations of the feature-extraction part and the classification part are divided across GPGPU blocks and threads and processed in parallel. The convolution and pooling operations of the feature-extraction part are processed in parallel at once rather than sequentially. As a result, the proposed method improved training time by about 314%. (One plausible form of this convolution-pooling fusion is sketched below.)
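The abstract does not say exactly how convolution and pooling are combined, so the NumPy sketch below is only one plausible fusion under stated assumptions (single channel, 3x3 filter, 2x2 max pooling, hypothetical function name): each pooled output value is computed directly as the maximum of the convolution results inside its pool window, which is an independent unit of work that a GPGPU thread could own.

```python
# Hypothetical fused convolution + max-pooling per pooled output element.
import numpy as np

def fused_conv_maxpool_element(x, w, py, px, pool=2):
    """One pooled output value: conv is evaluated only inside this pool window."""
    kh, kw = w.shape
    best = -np.inf
    for dy in range(pool):
        for dx in range(pool):
            oy, ox = py * pool + dy, px * pool + dx
            best = max(best, np.sum(x[oy:oy + kh, ox:ox + kw] * w))
    return best

rng = np.random.default_rng(3)
x = rng.standard_normal((10, 10))                # toy single-channel feature map
w = rng.standard_normal((3, 3))                  # toy 3x3 filter
out = np.array([[fused_conv_maxpool_element(x, w, py, px)
                 for px in range(4)] for py in range(4)])
print(out.shape)                                 # (4, 4)
```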

A Parallel Deep Convolutional Neural Network for Alzheimer's disease classification on PET/CT brain images

  • Baydargil, Husnu Baris; Park, Jangsik; Kang, Do-Young; Kang, Hyun; Cho, Kook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3583-3597 / 2020
  • In this paper, a parallel deep learning model that combines a convolutional neural network and a dilated convolutional neural network is proposed to classify Alzheimer's disease with high accuracy in PET/CT images. The model consists of two pipelines: a conventional CNN pipeline and a dilated-convolution pipeline. An input image is sent through both pipelines, and at the end of both pipelines the extracted features are concatenated and used for classification. The complementary abilities of the two networks provide better overall accuracy on the dataset than a single conventional CNN. Moreover, instead of performing binary classification, the proposed model performs three-class classification among Alzheimer's disease, mild cognitive impairment, and normal control. Using data received from Dong-a University, the model detects Alzheimer's disease with an accuracy of up to 95.51%. (A rough sketch of the two-pipeline structure is given below.)
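The following PyTorch sketch shows only the two-pipeline idea described above; the layer counts, channel widths, dilation rate, and input size are assumptions, not the authors' configuration. A standard-convolution branch and a dilated-convolution branch process the same image, their pooled features are concatenated, and a classifier predicts one of three classes (AD / MCI / normal control).

```python
import torch
import torch.nn as nn

def branch(dilation):
    pad = dilation  # keeps spatial size for a 3x3 kernel
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=pad, dilation=dilation), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=pad, dilation=dilation), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())

class ParallelDilatedCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv_branch = branch(dilation=1)     # conventional convolutions
        self.dilated_branch = branch(dilation=2)  # dilated convolutions (wider context)
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, x):
        feats = torch.cat([self.conv_branch(x), self.dilated_branch(x)], dim=1)
        return self.classifier(feats)

model = ParallelDilatedCNN()
logits = model(torch.randn(2, 1, 128, 128))       # toy grayscale image slices
print(logits.shape)                               # torch.Size([2, 3])
```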

Performance comparison of lung sound classification using various convolutional neural networks (다양한 합성곱 신경망 방식을 이용한 폐음 분류 방식의 성능 비교)

  • Kim, Gee Yeun; Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.38 no.5 / pp.568-573 / 2019
  • In the diagnosis of pulmonary diseases, auscultation is simpler than other methods, and lung sounds can be used both to identify patients with pulmonary diseases and to predict the type of disease. Therefore, in this paper, we identify patients with pulmonary diseases and classify lung sounds according to their acoustic characteristics using various convolutional neural networks, and we compare the classification performance of each network. First, lung sounds over the affected areas of the chest are collected with a single-channel lung sound recording device, spectral features are extracted from the collected time-domain sounds, and these features are fed to each neural network. As classification methods, we use a general, a parallel, and a residual convolutional neural network, and we compare their lung sound classification performance through experiments. (Minimal sketches of the parallel and residual block types are given below.)
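The abstract does not give the network configurations, so the PyTorch sketch below only contrasts the two non-plain block types it names, under assumptions: 1-D convolutions over sequences of spectral feature vectors, toy channel widths, and arbitrary kernel sizes for the parallel branches.

```python
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    """Two branches with different kernel sizes, concatenated along channels."""
    def __init__(self, ch):
        super().__init__()
        self.b3 = nn.Sequential(nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv1d(ch, ch, 5, padding=2), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b3(x), self.b5(x)], dim=1)

class ResidualBlock(nn.Module):
    """Convolution output is added back to the input (skip connection)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv1d(ch, ch, 3, padding=1))

    def forward(self, x):
        return torch.relu(x + self.body(x))

x = torch.randn(4, 16, 64)                        # (batch, feature channels, frames)
print(ParallelBlock(16)(x).shape, ResidualBlock(16)(x).shape)
# torch.Size([4, 32, 64]) torch.Size([4, 16, 64])
```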

A New Face Morphing Method using Texture Feature-based Control Point Selection Algorithm and Parallel Deep Convolutional Neural Network (텍스처 특징 기반 제어점 선택 알고리즘과 병렬 심층 컨볼루션 신경망을 이용한 새로운 얼굴 모핑 방법)

  • Park, Jin Hyeok; Khan, Rafiul Hasan; Lim, Seon-Ja; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.176-188 / 2022
  • In this paper, we propose a compact method for anthropomorphism that uses deep convolutional neural networks (DCNN) to detect the similarity between a human face and an animal face, and we apply texture-feature-based morphing between them. We also present a basic texture-feature-based system for morphing between human faces only. The anthropomorphism process starts with an animal-face classifier, built with a parallel DCNN, that determines the animal face most similar to a given human face. The significance of our network is that it contains four sets of convolutional functions that run in parallel, allowing it to extract more features than a linear DCNN. Once the similarity is established, our texture-feature-based automatic morphing system recognizes the facial features of the human face and selects the control points automatically, rather than relying on traditional manually assisted morphing. Simulation results show that the proposed DCNN surpasses its competitors with a 92.0% accuracy rate, that the most similar animal classes are found, and that the texture-based morphing completes the process automatically, ensuring a smooth transition from one image to another. (The four-branch structure is sketched below under stated assumptions.)
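The paper states only that four convolutional branches run in parallel; how the branches differ is not given, so the PyTorch sketch below assumes an Inception-style choice of four kernel sizes and concatenates the branch features before the animal-class classifier. The channel widths, input size, and class count are placeholders.

```python
import torch
import torch.nn as nn

class FourBranchDCNN(nn.Module):
    def __init__(self, num_animal_classes=10):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 8, k, padding=k // 2), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for k in (1, 3, 5, 7)])               # four parallel feature extractors
        self.classifier = nn.Linear(4 * 8, num_animal_classes)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.classifier(feats)

print(FourBranchDCNN()(torch.randn(2, 3, 96, 96)).shape)   # torch.Size([2, 10])
```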

Text Classification Using Parallel Word-level and Character-level Embeddings in Convolutional Neural Networks

  • Geonu Kim; Jungyeon Jang; Juwon Lee; Kitae Kim; Woonyoung Yeo; Jong Woo Kim
    • Asia Pacific Journal of Information Systems / v.29 no.4 / pp.771-788 / 2019
  • Deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) show superior performance in text classification over traditional approaches such as Support Vector Machines (SVMs) and Naïve Bayes. When CNNs are used for text classification, word embedding or character embedding is the step that transforms words or characters into fixed-size vectors before feeding them into the convolutional layers. In this paper, we propose a parallel word-level and character-level embedding approach for CNN text classification, which captures word-level and character-level patterns concurrently. To show its usefulness, we perform experiments with two English and three Korean text datasets. The results show that character-level embedding works better for Korean while word-level embedding performs well for English, and that the proposed approach outperforms traditional CNNs with word-level or character-level embedding alone on both Korean and English documents. A more detailed investigation shows that the proposed approach tends to perform better than the traditional embedding approaches when the amount of data is relatively small. (A minimal sketch of the parallel embedding branches is given below.)
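The PyTorch sketch below shows only the general shape of the parallel embedding idea; the vocabulary sizes, embedding dimensions, kernel sizes, and sequence lengths are placeholders, not the paper's settings. A word-level branch and a character-level branch are convolved separately, max-pooled over time, concatenated, and classified.

```python
import torch
import torch.nn as nn

class WordCharCNN(nn.Module):
    def __init__(self, word_vocab=5000, char_vocab=100, num_classes=2):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, 64)
        self.char_emb = nn.Embedding(char_vocab, 32)
        self.word_conv = nn.Conv1d(64, 50, kernel_size=3, padding=1)
        self.char_conv = nn.Conv1d(32, 50, kernel_size=5, padding=2)
        self.fc = nn.Linear(100, num_classes)

    def forward(self, word_ids, char_ids):
        # embeddings: (batch, seq, dim) -> (batch, dim, seq) for Conv1d
        w = torch.relu(self.word_conv(self.word_emb(word_ids).transpose(1, 2)))
        c = torch.relu(self.char_conv(self.char_emb(char_ids).transpose(1, 2)))
        w = w.max(dim=2).values                   # max-over-time pooling per branch
        c = c.max(dim=2).values
        return self.fc(torch.cat([w, c], dim=1))  # parallel branches joined here

words = torch.randint(0, 5000, (4, 30))           # toy batch: 30 words per document
chars = torch.randint(0, 100, (4, 200))           # 200 characters per document
print(WordCharCNN()(words, chars).shape)          # torch.Size([4, 2])
```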

Parallel Multi-task Cascade Convolution Neural Network Optimization Algorithm for Real-time Dynamic Face Recognition

  • Jiang, Bin; Ren, Qiang; Dai, Fei; Zhou, Tian; Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.10 / pp.4117-4135 / 2020
  • Because of variations in viewing angle, illumination, and scene, real-time dynamic face detection and recognition is a significant challenge in unconstrained environments. In this study, we exploit the intrinsic correlation between detection and calibration and use a multi-task cascaded convolutional neural network (MTCNN) to improve the efficiency of face recognition. The output of each core network is mapped in parallel to a compact Euclidean space in which distance represents the similarity of facial features, so a target face can be identified as quickly as possible without waiting for all network iterations to finish, and the correlation between recognition results is still obtained well after changes in the angle of the target face and in illumination. In the actual application scenario, we use a multi-camera real-time monitoring system that performs face matching and recognition on successive frames acquired from different angles. The effectiveness of the method was verified by several real-time monitoring experiments, with good results. (The embedding-space matching step is sketched below under stated assumptions.)
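The abstract describes mapping each face to a compact Euclidean space in which distance encodes similarity; the NumPy sketch below shows only that matching step, with random placeholder embeddings, a hypothetical distance threshold, and made-up identity names. A probe face is assigned to the nearest enrolled identity if the distance is small enough.

```python
import numpy as np

def match_face(probe, gallery, names, threshold=1.1):
    """probe: (d,) embedding; gallery: (n, d) enrolled embeddings."""
    probe = probe / np.linalg.norm(probe)                 # L2-normalize
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    dists = np.linalg.norm(gallery - probe, axis=1)       # Euclidean distances
    best = int(np.argmin(dists))
    if dists[best] < threshold:                           # close enough -> identified
        return names[best], float(dists[best])
    return "unknown", float(dists[best])

rng = np.random.default_rng(4)
gallery = rng.standard_normal((3, 128))                   # 3 enrolled identities
names = ["id_A", "id_B", "id_C"]
probe = gallery[1] + 0.05 * rng.standard_normal(128)      # noisy view of id_B
print(match_face(probe, gallery, names))                  # ('id_B', small distance)
```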