• Title/Summary/Keyword: Improved Convolutional Neural Network

An Implementation of a Convolutional Accelerator based on a GPGPU for a Deep Learning (Deep Learning을 위한 GPGPU 기반 Convolution 가속기 구현)

  • Jeon, Hee-Kyeong; Lee, Kwang-yeob; Kim, Chi-yong
    • Journal of IKEEE / v.20 no.3 / pp.303-306 / 2016
  • In this paper, we propose a method to accelerate a convolutional neural network by utilizing a GPGPU. A convolutional neural network is a type of neural network that learns features from images and is well suited to image processing tasks that require learning from large amounts of data. The convolutional layer of a conventional CNN requires a large number of multiplications, which makes real-time operation difficult in embedded environments. In this paper, we reduce the number of multiplications through the Winograd convolution algorithm and process the convolution in parallel on a SIMT-based GPGPU. The experiment was conducted using ModelSim and TestDrive, and the results showed that the processing time was improved by about 17% compared to conventional convolution.
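
The speedup reported above comes from the Winograd algorithm trading multiplications for additions. As a minimal illustration (not the paper's hardware implementation), the sketch below applies the standard F(2,3) Winograd transforms to compute two 1-D convolution outputs with four element-wise multiplications instead of six; numpy is assumed.

```python
import numpy as np

# Standard F(2,3) Winograd transform matrices (two outputs, 3-tap filter).
BT = np.array([[1, 0, -1,  0],
               [0, 1,  1,  0],
               [0, -1, 1,  0],
               [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 1-D valid convolution using 4 element-wise multiplies."""
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input samples
g = np.array([0.5, 1.0, -1.0])       # 3-tap filter
print(winograd_f23(d, g))            # matches np.correlate(d, g, 'valid')
```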

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan; Park, Sange-yun; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.23 no.8 / pp.977-985 / 2020
  • Aiming at the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure combining the batch normalization algorithm with the GoogLeNet model. Applying the batch normalization idea from image classification to action recognition, the algorithm is improved by normalizing the network's input training samples per mini-batch. RGB images serve as the spatial input to the convolutional network, and stacked optical flow serves as the temporal input; the spatial and temporal networks are then fused to obtain the final action recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages for action recognition.
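
Since the reported improvement hinges on normalizing each mini-batch of training inputs, a minimal numpy sketch of the batch normalization step is shown below; the feature shape, scale gamma, shift beta, and epsilon are illustrative values rather than the paper's settings.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch feature-wise, then scale and shift.
    x has shape (batch, features); gamma and beta have shape (features,)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 64)                       # a mini-batch of 32 samples
y = batch_norm(x, gamma=np.ones(64), beta=np.zeros(64))
```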

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han; Liu, Shuang; Xu, Qingzhen; Liu, Shouqiang; Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, it can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains separate spatial and temporal models on the video and fuses them at the output. The multi-segment two-stream model extracts spatial and temporal features from the video, fuses them, and then determines the action category. This paper adopts Google's Xception model with transfer learning, using weights pretrained on ImageNet as the initialization. This largely overcomes the underfitting caused by insufficient video behavior data, effectively reduces the influence of confounding factors in the video, improves accuracy, and shortens training time. In addition, to compensate for the limited dataset, the Kinetics-400 dataset was used for pre-training, which further improved the model's accuracy. Overall, the expected goal is achieved and the design of the original two-stream model is improved.
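
A minimal Keras sketch of the transfer-learning setup described above: an ImageNet-pretrained Xception backbone is frozen and a new classification head is attached. The input size, class count, and optimizer are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 101  # hypothetical, e.g. UCF101

# Xception pretrained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.Xception(
    weights="imagenet",
    include_top=False,
    input_shape=(299, 299, 3),
    pooling="avg",
)
base.trainable = False

# New classification head for the action-recognition classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```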

IPC-CNN: A Robust Solution for Precise Brain Tumor Segmentation Using Improved Privacy-Preserving Collaborative Convolutional Neural Network

  • Abdul Raheem; Zhen Yang; Haiyang Yu; Muhammad Yaqub; Fahad Sabah; Shahzad Ahmed; Malik Abdul Manan; Imran Shabir Chuhan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.9 / pp.2589-2604 / 2024
  • Brain tumors, characterized by uncontrollable cellular growths, are a significant global health challenge. Navigating the complexities of tumor identification due to their varied dimensions and positions, our research introduces enhanced methods for precise detection. Utilizing advanced learning techniques, we've improved early identification by preprocessing clinical dataset-derived images, augmenting them via a Generative Adversarial Network, and applying an Improved Privacy-Preserving Collaborative Convolutional Neural Network (IPC-CNN) for segmentation. Recognizing the critical importance of data security in today's digital era, our framework emphasizes the preservation of patient privacy. We evaluated the performance of our proposed model on the Figshare and BRATS 2018 datasets. By facilitating a collaborative model training environment across multiple healthcare institutions, we harness the power of distributed computing to securely aggregate model updates, ensuring individual data protection while leveraging collective expertise. Our IPC-CNN model achieved an accuracy of 99.40%, marking a notable advancement in brain tumor classification and offering invaluable insights for both the medical imaging and machine learning communities.
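
The collaborative training described above aggregates model updates from several institutions. Below is a minimal sketch of plain federated averaging of layer weights; the secure-aggregation and privacy mechanisms of IPC-CNN are not reproduced, and the sample-size weighting is an assumption.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Average each layer's weights across sites, weighted by the number of
    local training samples, to form the next global model."""
    total = float(sum(site_sizes))
    n_layers = len(site_weights[0])
    return [sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
            for i in range(n_layers)]

# Two hypothetical sites with toy one-layer "models".
site_a = [np.array([1.0, 2.0])]
site_b = [np.array([3.0, 4.0])]
print(federated_average([site_a, site_b], site_sizes=[100, 300]))
# -> [array([2.5, 3.5])]
```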

Implementation of Artificial Intelligence Computer Go Program Using a Convolutional Neural Network and Monte Carlo Tree Search (Convolutional Neural Network와 Monte Carlo Tree Search를 이용한 인공지능 바둑 프로그램의 구현)

  • Ki, Cheol-min; Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.405-408 / 2016
  • Games such as Go, chess, and Janggi have long contributed to people's intellectual development, and many algorithms have been developed so that computer programs can play them on their own. A chess program capable of beating a human was developed in the 1990s, but Go has far too many possible positions, so defeating a professional Go player was long considered impossible. However, with the use of MCTS (Monte Carlo Tree Search) and a CNN (Convolutional Neural Network), the performance of Go algorithms has greatly improved. In this paper, a Go program is developed using a CNN and MCTS. The CNN, trained on Go game records, searches for the best move, while MCTS estimates the win probability by running game simulations. In addition, we plan to extract pattern information from existing game records and use it to improve speed and performance. This method showed better performance than a general Go algorithm, and with sufficient computing power its performance is expected to improve further.
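
One common way to combine a CNN policy with MCTS statistics during tree search is an AlphaGo-style PUCT selection rule; the sketch below shows that rule as an illustration only, since the abstract does not give the paper's exact formula. Each child node is assumed to store a visit count N, total value W, and CNN prior P.

```python
import math

def puct_select(children, c_puct=1.5):
    """Select the move maximizing Q(s, a) + U(s, a): the CNN supplies the
    prior P, and MCTS simulations supply the visit count N and total value W."""
    total = sum(ch["N"] for ch in children.values())
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0          # mean simulation value
        u = c_puct * ch["P"] * math.sqrt(total + 1) / (1 + ch["N"])
        return q + u
    return max(children, key=lambda move: score(children[move]))

children = {"D4": {"N": 10, "W": 6.0, "P": 0.4},
            "Q16": {"N": 3, "W": 2.0, "P": 0.3}}
print(puct_select(children))
```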

Implementation to eye motion tracking system using OpenCV and convolutional neural network (OpenCV 와 Convolutional neural network를 이용한 눈동자 모션인식 시스템 구현)

  • Lee, Seung Jun; Heo, Seung Won; Lee, Hee Bin; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.379-380 / 2018
  • The previously presented "Implementation of a pupil motion recognition system using a convolutional neural network" is improved. Face and eye regions are detected using OpenCV, and the neural network is then built with Numpy, which is used both to configure the network and to perform its computations. The system is implemented on the DE1-SoC board.
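
A minimal OpenCV sketch of the detection stage described above, using the Haar cascades bundled with OpenCV to crop eye regions that could then be passed to the Numpy-based network; the specific cascades and parameters are illustrative assumptions.

```python
import cv2

# Haar cascades shipped with OpenCV; the authors' exact detectors are not
# stated in the abstract, so these are illustrative choices.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eye_regions(frame):
    """Return eye-region crops that could be fed to the pupil classifier."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    crops = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]                # search eyes inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            crops.append(roi[ey:ey + eh, ex:ex + ew])
    return crops
```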

A Method for accelerating training of Convolutional Neural Network (합성곱 신경망의 학습 가속화를 위한 방법)

  • Choi, Se Jin; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.171-175 / 2017
  • Training a convolutional neural network (CNN) entails many iterative computations, so methods of accelerating training through parallel processing on GPGPU hardware are being actively researched. In this paper, the operations of the feature extraction unit and the classification unit are divided among GPGPU blocks and threads and processed in parallel. The convolution and pooling operations of the feature extraction unit are computed all at once rather than sequentially. As a result, the proposed method improved the training time by about 314%.
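
The paper maps convolution onto GPGPU blocks and threads. As a CPU-level analogue of evaluating every output position at once rather than sequentially, the sketch below rewrites a valid 2-D convolution as a single matrix multiply (an im2col formulation, shown for illustration and not taken from the paper).

```python
import numpy as np

def conv2d_im2col(x, k):
    """Evaluate every output position of a valid 2-D convolution with one
    matrix multiply instead of a sequential loop over positions."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # Gather each kh x kw patch into one row of the "column" matrix.
    cols = np.array([x[i:i + kh, j:j + kw].ravel()
                     for i in range(oh) for j in range(ow)])
    return (cols @ k.ravel()).reshape(oh, ow)

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))
print(conv2d_im2col(x, k))   # 2 x 2 map of 3 x 3 window sums
```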

Deep Neural Network-based Jellyfish Distribution Recognition System Using a UAV (무인기를 이용한 심층 신경망 기반 해파리 분포 인식 시스템)

  • Koo, Jungmo; Myung, Hyun
    • The Journal of Korea Robotics Society / v.12 no.4 / pp.432-440 / 2017
  • In this paper, we propose a jellyfish distribution recognition and monitoring system using a UAV (unmanned aerial vehicle). The UAV was designed to satisfy the requirements for flight in an ocean environment. The target jellyfish, Aurelia aurita, is recognized through a convolutional neural network and its distribution is calculated. A modified deep neural network architecture was developed to provide reliable recognition accuracy and fast operation; by using a lightweight network architecture, recognition is about 400 times faster than GoogLeNet. We also introduce a method for selecting candidates to be used as inputs to the proposed network. The recognition accuracy for jellyfish is improved by removing the probability of the meaningless class from the probability vector of the evaluated input image and re-evaluating it by normalization. The jellyfish distribution is calculated from the recognized unit jellyfish images, and the distribution level is defined using the novel concept of a distribution map buffer.
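
The re-evaluation step described above can be illustrated with a short numpy sketch; the class index and probability values are made up for illustration.

```python
import numpy as np

def renormalize_without_class(probs, drop_idx):
    """Zero out the meaningless class and re-normalize so the rest sum to 1."""
    p = np.asarray(probs, dtype=float).copy()
    p[drop_idx] = 0.0
    return p / p.sum()

print(renormalize_without_class([0.50, 0.30, 0.20], drop_idx=0))
# -> [0.  0.6 0.4]
```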

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices

  • Gerber, Christian; Chung, Mokdong
    • Journal of Information Processing Systems / v.12 no.1 / pp.100-108 / 2016
  • In this paper, we propose a method to achieve improved number plate detection on mobile devices by applying a multiple convolutional neural network (CNN) approach. First, we perform supervised CNN-verified car detection, and then we pass the detected car regions to the next supervised CNN verifier for number plate detection. In the final step, the detected number plate regions are verified through optical character recognition by another CNN verifier. Since mobile devices are limited in computational power, we propose a fast method to recognize number plates, which we expect to be used in the field of intelligent transportation systems.
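
The three-stage cascade reads naturally as a pipeline. The sketch below shows only its control flow, with hypothetical callables car_cnn, plate_cnn, and ocr_cnn standing in for the paper's CNN verifiers; their interfaces are assumptions.

```python
def detect_number_plates(image, car_cnn, plate_cnn, ocr_cnn):
    """Three-stage cascade: car regions -> plate regions -> recognized text.
    Each callable is a hypothetical stand-in returning (box, crop) pairs or,
    for ocr_cnn, the verified plate string."""
    results = []
    for car_box, car_crop in car_cnn(image):             # stage 1: car detection
        for plate_box, plate_crop in plate_cnn(car_crop):  # stage 2: plate detection
            text = ocr_cnn(plate_crop)                    # stage 3: OCR verification
            results.append((car_box, plate_box, text))
    return results
```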

Multi-resolution DenseNet based acoustic models for reverberant speech recognition (잔향 환경 음성인식을 위한 다중 해상도 DenseNet 기반 음향 모델)

  • Park, Sunchan; Jeong, Yongwon; Kim, Hyung Soon
    • Phonetics and Speech Sciences / v.10 no.1 / pp.33-38 / 2018
  • Although deep neural network-based acoustic models have greatly improved the performance of automatic speech recognition (ASR), reverberation still degrades the performance of distant speech recognition in indoor environments. In this paper, we adopt the DenseNet, which has shown great performance results in image classification tasks, to improve the performance of reverberant speech recognition. The DenseNet enables the deep convolutional neural network (CNN) to be effectively trained by concatenating feature maps in each convolutional layer. In addition, we extend the concept of multi-resolution CNN to multi-resolution DenseNet for robust speech recognition in reverberant environments. We evaluate the performance of reverberant speech recognition on the single-channel ASR task in reverberant voice enhancement and recognition benchmark (REVERB) challenge 2014. According to the experimental results, the DenseNet-based acoustic models show better performance than do the conventional CNN-based ones, and the multi-resolution DenseNet provides additional performance improvement.
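
A minimal Keras sketch of the DenseNet connectivity these acoustic models rely on, in which each convolutional layer's output is concatenated with all preceding feature maps; the layer count, growth rate, and input shape are illustrative, and the multi-resolution variant is not shown.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=12):
    """DenseNet connectivity: each layer's output is concatenated with all
    previously produced feature maps instead of replacing them."""
    for _ in range(num_layers):
        h = layers.BatchNormalization()(x)
        h = layers.ReLU()(h)
        h = layers.Conv2D(growth_rate, 3, padding="same")(h)
        x = layers.Concatenate()([x, h])   # channel count grows by growth_rate
    return x

inputs = tf.keras.Input(shape=(40, 100, 16))   # e.g., a time-frequency feature map
outputs = dense_block(inputs)
model = tf.keras.Model(inputs, outputs)        # 16 + 4*12 = 64 output channels
```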