• Title/Summary/Keyword: deep convolution neural network


Applicability Evaluation for Discharge Model Using Curve Number and Convolution Neural Network

  • Song, Chul Min; Lee, Kwang Hyun
    • Ecology and Resilient Infrastructure, v.7 no.2, pp.114-125, 2020
  • Although various artificial neural networks have been developed, most discharge models in previous studies have been built with deep neural networks. This study aimed to develop a discharge model using a convolution neural network (CNN), an architecture originally used for classification problems, and to evaluate its applicability. Ordinary photographs used as CNN input could not clearly represent the characteristics of the study area or the precipitation, so the model had to use numerically generated images instead. To solve this problem, the curve number (CN) of the NRCS was used to generate the images fed to the model, and the generated images showed good applicability as input data. This also constitutes a new application of the CN, which had previously been used only for discharge prediction. The CNN was trained and generalized stably. Comparison between the actual and predicted values gave an R² of 0.79, which is relatively high, and the model also performed well in terms of the Pearson correlation coefficient (0.84), the Nash-Sutcliffe efficiency (NSE, 0.63), and the root mean square error (24.54 m³/s).
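
A minimal numerical sketch (not the authors' code; the sample discharge values are invented purely for illustration) of the goodness-of-fit measures quoted above, computed for an observed versus CNN-predicted discharge series:

```python
import numpy as np

def evaluate_discharge(observed, predicted):
    """Return the fit statistics reported above for a pair of discharge series."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    resid = obs - pred
    r = np.corrcoef(obs, pred)[0, 1]                                   # Pearson correlation coefficient
    r2 = r ** 2                                                         # one common definition of R^2
    nse = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)    # Nash-Sutcliffe efficiency
    rmse = np.sqrt(np.mean(resid ** 2))                                 # root mean square error (m^3/s)
    return {"R2": r2, "Pearson": r, "NSE": nse, "RMSE": rmse}

# Example with made-up discharge values (m^3/s)
print(evaluate_discharge([120.0, 80.0, 45.0, 200.0], [110.0, 95.0, 50.0, 180.0]))
```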

A Parallel Deep Convolutional Neural Network for Alzheimer's disease classification on PET/CT brain images

  • Baydargil, Husnu Baris; Park, Jangsik; Kang, Do-Young; Kang, Hyun; Cho, Kook
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.9, pp.3583-3597, 2020
  • In this paper, a parallel deep learning model combining a convolutional neural network and a dilated convolutional neural network is proposed to classify Alzheimer's disease with high accuracy in PET/CT images. The model consists of two pipelines: a conventional CNN pipeline and a dilated convolution pipeline. An input image is sent through both pipelines, and the features extracted at the end of each are concatenated and used for classification. The complementary abilities of the two networks provide better overall accuracy on the dataset than a single conventional CNN. Moreover, instead of performing binary classification, the proposed model performs three-class classification into Alzheimer's disease, mild cognitive impairment, and normal control. Using data received from Dong-a University, the model detects Alzheimer's disease with an accuracy of up to 95.51%.
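
A minimal PyTorch sketch of the two-pipeline idea described above (layer counts and widths are illustrative assumptions, not the authors' configuration): one standard convolutional branch and one dilated branch whose features are concatenated before a three-class classifier (AD / MCI / NC).

```python
import torch
import torch.nn as nn

class ParallelCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        # Conventional CNN pipeline
        self.plain = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Dilated convolution pipeline (larger receptive field at the same resolution)
        self.dilated = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=4, dilation=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        a = self.plain(x).flatten(1)
        b = self.dilated(x).flatten(1)
        return self.classifier(torch.cat([a, b], dim=1))  # concatenated features

logits = ParallelCNN()(torch.randn(2, 1, 128, 128))  # batch of 2 single-channel brain slices
print(logits.shape)  # torch.Size([2, 3])
```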

Design of a Deep Neural Network Model for Image Caption Generation

  • Kim, Dongha; Kim, Incheol
    • KIPS Transactions on Software and Data Engineering, v.6 no.4, pp.203-210, 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. The model is a kind of multi-modal recurrent neural network. It consists of several distinct layers: a convolution neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning the structure of caption sentences, and a multi-modal layer for combining visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a unique structure in which the output of the convolution neural network layer is fed not only into the initial state of the recurrent neural network layer but also into the multi-modal layer, so that the visual information extracted from the image is used at every recurrent step when generating the textual caption. Through comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrate that the proposed multi-modal recurrent neural network model performs well in terms of caption accuracy and model transfer.
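
A rough PyTorch sketch (dimensions and layer names are illustrative assumptions) of the structure described above: CNN image features initialize the LSTM state and are also fed to a multi-modal layer together with the LSTM output at every decoding step.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=512, embed_dim=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)        # word embedding layer
        self.init_h = nn.Linear(feat_dim, hidden)                # CNN feature -> initial LSTM hidden state
        self.init_c = nn.Linear(feat_dim, hidden)                # CNN feature -> initial LSTM cell state
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.multimodal = nn.Linear(hidden + feat_dim, hidden)   # combines language and visual information
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, cnn_feat, captions):
        h0 = self.init_h(cnn_feat).unsqueeze(0)    # (1, B, hidden)
        c0 = self.init_c(cnn_feat).unsqueeze(0)
        emb = self.embed(captions)                  # (B, T, embed_dim)
        hs, _ = self.lstm(emb, (h0, c0))            # (B, T, hidden)
        # Re-inject the image feature at every time step via the multi-modal layer
        feat = cnn_feat.unsqueeze(1).expand(-1, hs.size(1), -1)
        fused = torch.tanh(self.multimodal(torch.cat([hs, feat], dim=-1)))
        return self.out(fused)                      # per-step vocabulary logits

logits = CaptionNet()(torch.randn(2, 512), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```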

Pixel-based crack image segmentation in steel structures using atrous separable convolution neural network

  • Ta, Quoc-Bao; Pham, Quang-Quang; Kim, Yoon-Chul; Kam, Hyeon-Dong; Kim, Jeong-Tae
    • Structural Monitoring and Maintenance, v.9 no.3, pp.289-303, 2022
  • In this study, the impact of assigned pixel labels on the accuracy of crack image identification for steel structures is examined using an atrous separable convolution neural network (ASCNN). First, images containing fatigue cracks collected from steel structures are classified into four datasets by assigning different pixel labels based on image features. Second, the DeepLab v3+ algorithm is used to determine the optimal parameters of the ASCNN model by maximizing the average mean-intersection-over-union (mIoU) metric over the datasets. Third, the ASCNN model is trained for various image sizes and hyper-parameters, such as the learning rule, learning rate, and number of epochs, and its optimal parameters are selected based on the average mIoU metric. Finally, the trained ASCNN model is evaluated on the 10% of images withheld from training. The results show that the ASCNN model can segment cracks and other objects in the captured images with an average mIoU of 0.716.
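
A minimal PyTorch sketch of an atrous separable convolution, the building block named above and used in DeepLab v3+: a depthwise convolution with an atrous (dilation) rate followed by a 1×1 pointwise convolution. The channel counts and rate below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def atrous_separable_conv(in_ch, out_ch, rate):
    return nn.Sequential(
        # Depthwise: one 3x3 filter per channel, dilated to enlarge the receptive field
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=rate, dilation=rate,
                  groups=in_ch, bias=False),
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        # Pointwise: 1x1 convolution that mixes channels
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

block = atrous_separable_conv(64, 128, rate=6)
print(block(torch.randn(1, 64, 65, 65)).shape)  # torch.Size([1, 128, 65, 65]), resolution preserved
```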

The training of convolution neural network for advanced driver assistant system

  • Nam, Kihun; Jeon, Heekyeong
    • International Journal of Advanced Culture Technology, v.4 no.4, pp.23-29, 2016
  • In this paper, a training technique for an in-vehicle CNN processor is proposed. Conventional CNN processors store the weights learned through training and reuse them, but when the image is distorted by weather conditions, accuracy decreases. The usual remedy is to enhance the input image before classification, but this has the drawback of increasing processor size. To solve this problem, CNN performance is improved in this paper by training directly on distorted images. As a result, the proposed method achieved approximately 38% higher accuracy than the conventional method.
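
A hedged sketch of the training-time alternative described above: rather than enhancing distorted inputs before classification, the network is trained on images that are randomly degraded. The specific distortions (brightness/contrast jitter and Gaussian blur standing in for weather effects) and the image size are illustrative assumptions, not the paper's configuration.

```python
from PIL import Image
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ColorJitter(brightness=0.5, contrast=0.5),       # lighting/fog-like changes
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),    # rain/defocus-like blur
    transforms.ToTensor(),
])

# Applied to each road-scene image during training, so the learned weights
# already account for weather-induced distortion.
x = train_transform(Image.new("RGB", (320, 240)))
print(x.shape)  # torch.Size([3, 64, 64])
```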

Pose Estimation with Binarized Multi-Scale Module

  • Choi, Yong-Gyun; Lee, Sukho
    • International Journal of Advanced Smart Convergence, v.7 no.2, pp.95-100, 2018
  • In this paper, we propose a binarized multi-scale module to accelerate a pose-estimation deep neural network. Recently, deep learning has also been applied to fine-grained tasks such as pose estimation. One of the best-performing pose estimation methods uses two neural networks, one computing the heat maps of the body parts and the other computing the part affinity fields between body parts. However, convolution with large kernel filters is time-consuming in this model. To accelerate it, we propose replacing the large kernel filters with binarized multi-scale modules. The large receptive field is captured by the multi-scale structure, which also prevents the drop in accuracy that binarization would otherwise cause. The computational cost and the number of parameters become small, resulting in higher speed.
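
A simplified PyTorch sketch of the idea above (channel widths and branch depths are illustrative assumptions, not the authors' design): small 3×3 convolutions with weights binarized to ±1 are stacked in parallel branches of increasing depth, so their concatenation covers the receptive field of one large kernel at far lower cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Conv2d):
    def forward(self, x):
        # Binarize the stored full-precision weights to +1/-1 in the forward pass
        # (a complete BNN would also use a straight-through estimator for training)
        w = torch.sign(self.weight)
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

class BinarizedMultiScaleModule(nn.Module):
    def __init__(self, in_ch=64, branch_ch=32):
        super().__init__()
        def branch(n):  # n stacked 3x3 binarized convs -> receptive field of 2n+1
            layers, c = [], in_ch
            for _ in range(n):
                layers += [BinaryConv2d(c, branch_ch, 3, padding=1),
                           nn.BatchNorm2d(branch_ch), nn.ReLU()]
                c = branch_ch
            return nn.Sequential(*layers)
        self.b1, self.b2, self.b3 = branch(1), branch(2), branch(3)

    def forward(self, x):
        # Concatenating the branches mimics the receptive field of a single large kernel
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)

m = BinarizedMultiScaleModule()
print(m(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 96, 32, 32])
```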

Design and Implementation of Human and Object Classification System Using FMCW Radar Sensor

  • Sim, Yunsung; Song, Seungjun; Jang, Seonyoung; Jung, Yunho
    • Journal of IKEEE, v.26 no.3, pp.364-372, 2022
  • This paper presents the design and implementation of a human and object classification system using a frequency modulated continuous wave (FMCW) radar sensor. Such a system requires radar signal processing for multi-target detection and deep learning for classifying humans and objects. Since deep learning demands a great deal of computation and data processing, a lightweight model is essential. Therefore, a binary neural network (BNN) structure was adopted, performing the convolution neural network (CNN) computations with binarized values. In addition, for real-time operation, a hardware accelerator was implemented and verified on an FPGA platform. The performance evaluation and verification results confirm a multi-target classification accuracy of 90.5%, a 96.87% reduction in memory usage compared to a CNN, and a run time of 5 ms.
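
A back-of-the-envelope check (an interpretation of the reported number, not a statement from the paper) of why binarization saves almost all weight memory: each weight shrinks from a 32-bit float to a single bit.

```python
# Per-weight storage before and after binarization
bits_float32, bits_binary = 32, 1
reduction = 1 - bits_binary / bits_float32
print(f"per-weight memory reduction: {reduction:.3%}")  # 96.875%, in line with the reported 96.87%
```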

Machine Tool State Monitoring Using Hierarchical Convolution Neural Network

  • Lee, Kyeong-Min
    • Journal of the Institute of Convergence Signal Processing, v.23 no.2, pp.84-90, 2022
  • Machine tool state monitoring is the process of automatically detecting the state of a machine. In the manufacturing process, machining efficiency and product quality are affected by the condition of the tool; worn or broken tools can cause serious problems in process performance and lower product quality. It is therefore necessary to develop a system that detects tool wear and damage during the process so that the tool can be replaced in a timely manner. This paper proposes a method for diagnosing five tool states using a deep learning-based hierarchical convolutional neural network so that tools can be changed at the right time. The one-dimensional acoustic signal generated when the machine cuts the workpiece is converted into a two-dimensional power spectral density image and used as input to the convolutional neural network. The learning model diagnoses the five tool states through three hierarchical steps. The proposed method showed high accuracy compared to the conventional method and could also be utilized in a smart-factory fault diagnosis system that monitors various machine tools through real-time connectivity.
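
A hedged sketch of the preprocessing step described above, turning the one-dimensional cutting-sound signal into a two-dimensional power-spectral-density image for the CNN; the sampling rate, window length, and the synthetic test signal are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 44_100                                       # assumed microphone sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
acoustic = np.sin(2 * np.pi * 3_000 * t) + 0.3 * np.random.randn(t.size)  # stand-in cutting sound

# Power spectral density over time (spectrogram), then log-scaled as an image
f, frames, psd = signal.spectrogram(acoustic, fs=fs, nperseg=1024, noverlap=512)
psd_image = 10 * np.log10(psd + 1e-12)            # 2-D array: frequency bins x time frames
print(psd_image.shape)                             # e.g. (513, 85), resized and fed to the CNN
```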

Evaluation of Classification and Accuracy in Chest X-ray Images using Deep Learning with Convolution Neural Network

  • Song, Ho-Jun; Lee, Eun-Byeol; Jo, Heung-Joon; Park, Se-Young; Kim, So-Young; Kim, Hyeon-Jeong; Hong, Joo-Wan
    • Journal of the Korean Society of Radiology, v.14 no.1, pp.39-44, 2020
  • The purpose of this study was to classify chest X-ray images and evaluate the classification accuracy through deep learning with a convolution neural network. A total of 1,583 normal and 4,289 pneumonia chest X-ray images were used. The data were split into training (88.8%), validation (0.2%), and test (11%) sets. The network was constructed with convolution layers, 2×2 max-pooling layers, a flatten layer, and an image data generator. The number of filters, filter size, dropout, epochs, batch size, and loss function were set for networks with three and four convolution layers, respectively. Verification on the test data showed a prediction accuracy of 94.67% for the four-convolution-layer network with 64-128-128-128 filters, filter size 3×3, dropout 0.25, 5 epochs, batch size 15, and RMSprop. In this study, normal and pneumonia chest X-rays could be classified with high accuracy, which is believed to be of great help not only for chest X-ray images but also for other medical images.
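
A hedged Keras sketch of the best configuration quoted above; the input size, the placement of pooling and dropout, and the output head are assumptions not stated in the abstract, while the 64-128-128-128 filters, 3×3 kernels, dropout 0.25, and RMSprop come from it.

```python
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Conv2D(64, (3, 3), activation="relu", input_shape=(150, 150, 1)),  # assumed input size
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),        # Normal vs. Pneumonia
])
model.compile(optimizer=optimizers.RMSprop(),      # RMSprop, as named in the abstract
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```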

Overlapped Image Learning Neural Network for Autonomous Driving in the Indoor Environment

  • Jo, Jeong-won; Lee, Chang-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.349-350, 2019
  • In previous experiments on autonomous drone flight in an indoor corridor environment, steering commands were computed by a neural network running on a laptop because of the limited computing performance of the drone. In this paper, to overcome this limitation, we study autonomous driving in an indoor corridor environment using an NVIDIA Jetson TX2 board.
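
A schematic sketch (the camera, drone, and model objects are hypothetical placeholders, not APIs from the paper) of the onboard control loop implied above: instead of sending frames to a laptop, the Jetson TX2 runs the steering network itself.

```python
import torch

def autonomous_corridor_loop(model, camera, drone, device="cuda"):
    """Capture a frame, infer a steering command on-device, and send it to the drone."""
    model = model.eval().to(device)
    with torch.no_grad():
        while drone.is_flying():                         # hypothetical drone API
            frame = camera.read()                        # hypothetical camera API, HxWx3 uint8 array
            x = torch.from_numpy(frame).permute(2, 0, 1).float().div(255)
            steering = model(x.unsqueeze(0).to(device))  # e.g. left / forward / right logits
            drone.steer(steering.argmax(dim=1).item())   # hypothetical steering command
```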
