• Title/Summary/Keyword: Deep learning CNN

Pedestrian Classification using CNN's Deep Features and Transfer Learning (CNN의 깊은 특징과 전이학습을 사용한 보행자 분류)

  • Chung, Soyoung;Chung, Min Gyo
    • Journal of Internet Computing and Services / v.20 no.4 / pp.91-102 / 2019
  • In autonomous driving systems, the ability to classify pedestrians in images captured by cameras is very important for pedestrian safety. In the past, pedestrian features were extracted with HOG (Histogram of Oriented Gradients) or SIFT (Scale-Invariant Feature Transform) and then classified with an SVM (Support Vector Machine). However, extracting pedestrian characteristics in such a handcrafted manner has many limitations. Therefore, this paper proposes a method to classify pedestrians reliably and effectively using CNN's (Convolutional Neural Network) deep features and transfer learning. We experimented with both the fixed feature extractor and the fine-tuning methods, which are two representative transfer learning techniques. In particular, for the fine-tuning method we added a new scheme, called M-Fine (Modified Fine-tuning), which divides layers into transferred and non-transferred parts in three different sizes and adjusts weights only for layers belonging to the non-transferred parts. Experiments on the INRIA Person data set with five CNN models (VGGNet, DenseNet, Inception V3, Xception, and MobileNet) showed that CNN's deep features perform better than handcrafted features such as HOG and SIFT, and that the accuracy of Xception (threshold = 0.5) is the highest at 99.61%. MobileNet, which achieved similar performance to Xception while learning 80% fewer parameters, was the best in terms of efficiency. Among the three transfer learning schemes tested above, the fine-tuning method performed best. The performance of the M-Fine method was comparable to or slightly lower than that of the fine-tuning method, but higher than that of the fixed feature extractor method.
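
The transfer-learning schemes described in this abstract can be illustrated with a short Keras sketch (an illustrative assumption on our part, not the authors' code): a pretrained backbone is either fully frozen (fixed feature extractor), fully trainable (fine-tuning), or split so that only the later layers are updated, in the spirit of the M-Fine idea. The binary pedestrian head and the layer split point are assumptions.

```python
# Sketch (not the authors' code): transfer-learning schemes with a pretrained CNN in Keras.
# The pedestrian/non-pedestrian head and the split point for the M-Fine-like scheme are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                        input_shape=(224, 224, 3), pooling="avg")
x = layers.Dropout(0.5)(base.output)
out = layers.Dense(1, activation="sigmoid")(x)   # pedestrian vs. non-pedestrian
model = models.Model(base.input, out)

SCHEME = "m_fine"   # "fixed", "fine_tune", or "m_fine"

if SCHEME == "fixed":
    base.trainable = False                 # (1) fixed feature extractor: freeze all transferred layers
elif SCHEME == "fine_tune":
    base.trainable = True                  # (2) fine-tuning: update every layer
else:
    # (3) M-Fine-like idea: keep the first k transferred layers frozen and
    #     adjust weights only for the remaining (non-transferred) part.
    k = len(base.layers) // 2              # illustrative split point
    for layer in base.layers[:k]:
        layer.trainable = False
    for layer in base.layers[k:]:
        layer.trainable = True

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```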

Damage detection in structures using modal curvatures gapped smoothing method and deep learning

  • Nguyen, Duong Huong;Bui-Tien, T.;Roeck, Guido De;Wahab, Magd Abdel
    • Structural Engineering and Mechanics / v.77 no.1 / pp.47-56 / 2021
  • This paper deals with damage detection using a Gapped Smoothing Method (GSM) combined with deep learning. The Convolutional Neural Network (CNN) is a deep learning model with an input layer, an output layer, and a number of hidden layers consisting of convolutional layers. The input layer is a tensor with shape (number of images) × (image width) × (image height) × (image depth). An activation function is applied to this tensor each time it passes through a hidden layer, the last hidden layer is fully connected, and the CNN's prediction is produced at the final output layer. In this paper, a complete machine learning system is introduced. The training data were taken from a Finite Element (FE) model, and the input images are contour plots of the curvature gapped smoothing damage index. A free-free beam is used as a case study. In the first step, the FE model of the beam was used to generate data; the collected data were then divided into two parts, 70% for training and 30% for validation. In the second step, the proposed CNN was trained on the training data and validated on the remaining data. Furthermore, a vibration experiment on a damaged steel beam in a free-free support condition was carried out in the laboratory to test the method. A total of 15 accelerometers were set up to measure the mode shapes and calculate the gapped-smoothing curvature of the damaged beam. Two scenarios with different damage severities were introduced. The results showed that the trained CNN successfully detected both the location and the severity of the damage in the experimental beam.
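
A minimal Keras sketch of the kind of image-classification CNN and 70/30 training/validation split described above; the image size, number of damage classes, placeholder data, and layer sizes are assumptions, not the paper's actual network.

```python
# Illustrative sketch: small CNN over damage-index contour-plot images with a 70/30 split.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 5                                          # assumed damage locations/severities
x = np.random.rand(1000, 64, 64, 3).astype("float32")    # placeholder for FE-generated contour images
y = np.random.randint(0, num_classes, size=1000)

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # (image width) x (image height) x (image depth)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # fully connected layer
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# 70% of the generated data for training, 30% for validation, as in the paper.
model.fit(x, y, epochs=10, validation_split=0.3)
```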

Automated ground penetrating radar B-scan detection enhanced by data augmentation techniques

  • Donghwi Kim;Jihoon Kim;Heejung Youn
    • Geomechanics and Engineering / v.38 no.1 / pp.29-44 / 2024
  • This research investigates the effectiveness of data augmentation techniques in the automated analysis of B-scan images from ground-penetrating radar (GPR) using deep learning. Despite growing interest in automating GPR data analysis and advancements in deep learning for image classification and object detection, many deep learning-based GPR studies have been limited by the availability of large, diverse GPR datasets. Data augmentation techniques are widely used in deep learning to improve model performance. In this study, we applied four data augmentation techniques (geometric transformation, color-space transformation, noise injection, and kernel filtering) to GPR datasets obtained from a testbed. Deep learning models for GPR data analysis were developed using three architectures (Faster R-CNN ResNet, SSD ResNet, and EfficientDet) based on transfer learning. It was found that data augmentation significantly enhances model performance across all cases, with the mAP and AR for the Faster R-CNN ResNet model increasing by approximately 4%, achieving a maximum mAP (Intersection over Union = 0.5:1.0) of 87.5% and a maximum AR of 90.5%. These results highlight the importance of data augmentation in improving the robustness and accuracy of deep learning models for GPR B-scan analysis. The enhanced detection capabilities achieved through these techniques contribute to more reliable subsurface investigations in geotechnical engineering.
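
A small NumPy/SciPy sketch of the four augmentation families named in this abstract, applied to a placeholder B-scan array; the parameter values are illustrative, and the bounding-box bookkeeping required for the detection labels is omitted.

```python
# Illustrative augmentation sketch: geometric, intensity (color-space), noise, and kernel-filter transforms.
import numpy as np
from scipy.ndimage import rotate, uniform_filter

def augment(img, rng=np.random.default_rng(0)):
    out = []
    # 1) Geometric transformation: horizontal flip and a small rotation.
    out.append(np.fliplr(img))
    out.append(rotate(img, angle=5, reshape=False, mode="nearest"))
    # 2) Color-space (intensity) transformation: brightness/contrast shift.
    out.append(np.clip(1.2 * img + 10, 0, 255))
    # 3) Noise injection: additive Gaussian noise.
    out.append(np.clip(img + rng.normal(0, 8, img.shape), 0, 255))
    # 4) Kernel filter: simple smoothing (blur) filter.
    out.append(uniform_filter(img, size=3))
    return out

bscan = np.random.randint(0, 256, (256, 512)).astype("float32")  # placeholder B-scan image
augmented = augment(bscan)
```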

An Experimental Comparison of CNN-based Deep Learning Algorithms for Recognition of Beauty-related Skin Disease

  • Bae, Chang-Hui;Cho, Won-Young;Kim, Hyeong-Jun;Ha, Ok-Kyoon
    • Journal of the Korea Society of Computer and Information / v.25 no.12 / pp.25-34 / 2020
  • In this paper, we empirically compare the effectiveness of training models to recognize beauty-related skin disease using supervised deep learning algorithms. Recently, deep learning algorithms have been actively applied in various fields such as industry, education, and medicine. For instance, in the medical field, the ability to diagnose cutaneous cancer using deep learning-based artificial intelligence has improved to the level of experts. However, applications to beauty-related skin diseases are still insufficient. This study experimentally compares the effectiveness of identifying beauty-related skin disease by applying deep learning algorithms, considering CNN, ResNet, and SE-ResNet. The experimental results using these training models show that the average accuracy of CNN is 71.5%, ResNet 90.6%, and SE-ResNet 95.3%. In particular, the SE-ResNet-50 model, an SE-ResNet with a 50-layer structure, showed the most effective result for identifying beauty-related skin diseases with an average accuracy of 96.2%. The purpose of this paper is to study effective training methods of deep learning algorithms for identifying beauty-related skin disease, and thus to contribute to the development of services used to treat and ease such diseases.
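
As an illustration of what distinguishes SE-ResNet from a plain ResNet, the following is a minimal Keras sketch of a squeeze-and-excitation block; the reduction ratio of 16 is the commonly used default, not a value reported in the paper.

```python
# Sketch of a Squeeze-and-Excitation (SE) block, the component that SE-ResNet adds to ResNet.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(feature_map, ratio=16):
    channels = feature_map.shape[-1]
    # Squeeze: global spatial average per channel.
    s = layers.GlobalAveragePooling2D()(feature_map)
    # Excitation: two fully connected layers producing per-channel weights in (0, 1).
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    # Recalibration: rescale each channel of the original feature map.
    return layers.Multiply()([feature_map, s])

inputs = layers.Input(shape=(56, 56, 64))
outputs = se_block(inputs)
model = tf.keras.Model(inputs, outputs)
```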

Prediction Model of Software Fault using Deep Learning Methods (딥러닝 기법을 사용하는 소프트웨어 결함 예측 모델)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.111-117 / 2022
  • Many studies have been conducted on software fault prediction models over the past decades, and models using machine learning techniques have shown the best performance. Deep learning techniques have become the most popular in the field of machine learning, but few studies have used them as classifiers for fault prediction models. Some studies have used deep learning to obtain semantic information from the model's input source code or syntactic data. In this paper, we built several models by varying the structure and hyperparameters of an MLP with three or more hidden layers. In the model evaluation experiment, the MLP-based deep learning models showed performance similar to existing models in terms of Accuracy, but significantly better performance in terms of AUC. They also outperformed another deep learning model, the CNN.
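
A minimal scikit-learn sketch of an MLP fault-prediction classifier with three hidden layers, evaluated by Accuracy and AUC as in the abstract; the feature count, layer widths, and placeholder data are assumptions.

```python
# Illustrative MLP fault-prediction sketch evaluated with Accuracy and AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X = np.random.rand(500, 20)                 # placeholder software metrics per module
y = np.random.randint(0, 2, 500)            # 1 = faulty module, 0 = clean

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
clf = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500, random_state=42)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]
print("Accuracy:", accuracy_score(y_te, pred))
print("AUC:", roc_auc_score(y_te, prob))
```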

Multi-view learning review: understanding methods and their application (멀티 뷰 기법 리뷰: 이해와 응용)

  • Bae, Kang Il;Lee, Yung Seop;Lim, Changwon
    • The Korean Journal of Applied Statistics / v.32 no.1 / pp.41-68 / 2019
  • Multi-view learning considers data from multiple viewpoints and attempts to integrate the diverse information contained in them. Multi-view learning has been studied actively in recent years and has shown superior performance to models learned from only a single view. With the introduction of deep learning techniques into multi-view learning, it has shown good results in various fields such as image, text, voice, and video. In this study, we introduce how multi-view learning methods solve various problems in human behavior recognition, medical areas, information retrieval, and facial expression recognition. In addition, we review the data integration principles of multi-view learning methods by classifying traditional multi-view learning methods into data integration, classifier integration, and representation integration. Finally, we examine how CNN, RNN, RBM, Autoencoder, and GAN, which are commonly used among deep learning methods, are applied to multi-view learning algorithms. We categorize CNN- and RNN-based learning methods as supervised learning, and RBM-, Autoencoder-, and GAN-based learning methods as unsupervised learning.
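
As a small illustration of the representation-integration principle mentioned in this review, the following hypothetical Keras sketch learns a separate representation per view and concatenates them before the classifier; the view dimensions, layer widths, and class count are assumptions.

```python
# Illustrative representation-integration sketch: per-view sub-networks fused by concatenation.
import tensorflow as tf
from tensorflow.keras import layers, models

view_a = layers.Input(shape=(128,))           # e.g., image-derived features
view_b = layers.Input(shape=(64,))            # e.g., text-derived features

rep_a = layers.Dense(32, activation="relu")(view_a)
rep_b = layers.Dense(32, activation="relu")(view_b)

joint = layers.Concatenate()([rep_a, rep_b])  # integrated multi-view representation
out = layers.Dense(10, activation="softmax")(joint)

model = models.Model([view_a, view_b], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```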

Atypical Character Recognition Based on Mask R-CNN for Hangul Signboard

  • Lim, Sooyeon
    • International journal of advanced smart convergence / v.8 no.3 / pp.131-137 / 2019
  • This study proposes a method of learning and recognizing the characteristics that serve as classification criteria for Hangul using Mask R-CNN, one of the deep learning techniques, to recognize and classify atypical Hangul characters. Atypical characters on Hangul signboards have many deformed and colorful shapes beyond those of standard characters. Therefore, in order to recognize Hangul signboard characters, it is necessary to learn atypical Hangul characters separately rather than relying only on the existing standardized forms. We selected the Hangul character '닭' as sample data, constructed a data set of 5,383 Hangul images, and used it to train and verify the deep learning model. On a test set constructed to verify the reliability of the learning model, the accuracy (area detection rate) was about 92.65%. Therefore, we confirmed that the proposed method is very useful for Hangul signboard character recognition, and we plan to extend it to various Hangul data.
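
A hypothetical torchvision sketch of setting up a Mask R-CNN for a custom character class, in the spirit of the approach described above; the class count and the use of torchvision (rather than the author's actual framework) are assumptions.

```python
# Illustrative Mask R-CNN setup: replace the box and mask heads for a custom Hangul character class.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2   # background + the target character class (e.g., '닭')

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head with one sized for our classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask-prediction head likewise.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```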

Gesture-Based Emotion Recognition by 3D-CNN and LSTM with Keyframes Selection

  • Ly, Son Thai;Lee, Guee-Sang;Kim, Soo-Hyung;Yang, Hyung-Jeong
    • International Journal of Contents / v.15 no.4 / pp.59-64 / 2019
  • In recent years, emotion recognition has been an interesting and challenging topic. Compared to facial expressions and speech, the gesture modality has not received much attention for emotion recognition, with only a few efforts using traditional hand-crafted methods. These approaches incur high computational costs and offer limited room for improvement, as most of the research community now works with deep learning techniques. In this paper, we propose an end-to-end deep learning approach for classifying emotions based on bodily gestures. In particular, informative keyframes are first extracted from raw videos as input for a 3D-CNN deep network. The 3D-CNN exploits the short-term spatiotemporal information of gesture features from the selected keyframes, and convolutional LSTM networks learn long-term features from the 3D-CNN outputs. The experimental results on the FABO dataset exceed those of most traditional methods and achieve state-of-the-art results among deep learning-based techniques for gesture-based emotion recognition.
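
A minimal Keras sketch of the 3D-CNN followed by convolutional-LSTM idea described above; the keyframe count, frame size, layer widths, and number of emotion classes are assumptions, not the paper's exact network.

```python
# Illustrative sketch: Conv3D layers capture short-term spatiotemporal patterns over
# selected keyframes, then a ConvLSTM2D layer aggregates longer-term dynamics.
import tensorflow as tf
from tensorflow.keras import layers, models

num_keyframes, height, width = 16, 112, 112
num_emotions = 10

model = models.Sequential([
    layers.Input(shape=(num_keyframes, height, width, 3)),
    layers.Conv3D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),        # pool only spatially, keep the time axis
    layers.Conv3D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    layers.ConvLSTM2D(64, kernel_size=3, padding="same"),  # long-term feature over keyframes
    layers.GlobalAveragePooling2D(),
    layers.Dense(num_emotions, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```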

Malware Classification using Dynamic Analysis with Deep Learning

  • Asad Amin;Muhammad Nauman Durrani;Nadeem Kafi;Fahad Samad;Abdul Aziz
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.49-62 / 2023
  • There has been a rapid increase in the creation and alteration of new malware samples, which poses a huge financial risk for many organizations. There is a strong demand for improving today's classification and detection mechanisms, as older strategies such as classification using machine learning algorithms proved useful but do not perform well in a scalable, automatic feature extraction scenario. To overcome this, a mechanism is needed to analyze malware automatically based on an automatic feature extraction process. For this purpose, dynamic analysis of real malware executable files was performed to extract useful features such as API call sequences and opcode sequences. Different hashing techniques were analyzed to generate images from these features and convert them into an image-representable form, which allows more advanced classification approaches to handle large amounts of data using deep learning. Deep learning algorithms such as convolutional neural networks enable the classification of malware by converting it into images. These images, converted into grayscale and fed into the CNN, perform comparatively well under dynamic changes in malware code, since such changes alter only a few pixels of the grayscale image. In this work, we used the VGG-16 CNN architecture for experimentation.
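
A hypothetical sketch of the sequence-to-image idea described above: a hashed opcode/API-call sequence is reshaped into a grayscale image and classified with VGG-16. The hashing scheme, image size, placeholder sequence, and malware-family count are assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch: hash an opcode/API-call sequence into a grayscale image, classify with VGG-16.
import hashlib
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def sequence_to_image(opcodes, size=64):
    # Hash each opcode/API call to one byte, then pad or trim to a square grayscale image.
    vals = [hashlib.md5(op.encode()).digest()[0] for op in opcodes]
    vals = (vals + [0] * (size * size))[: size * size]
    return np.array(vals, dtype="float32").reshape(size, size)

img = sequence_to_image(["mov", "push", "call", "CreateFileA", "ret"] * 100)
img_rgb = np.repeat(img[..., None], 3, axis=-1)     # VGG-16 expects 3 channels

num_families = 8
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                  input_shape=(64, 64, 3), pooling="avg")
out = layers.Dense(num_families, activation="softmax")(vgg.output)
model = models.Model(vgg.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```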

Implementation of Instruction-Level Disassembler Based on Power Consumption Traces Using CNN (CNN을 이용한 소비 전력 파형 기반 명령어 수준 역어셈블러 구현)

  • Bae, Daehyeon;Ha, Jaecheol
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.4 / pp.527-536 / 2020
  • It has been found that an attacker can extract the secret key embedded in a security device and recover the executed instructions using power consumption traces, which are a kind of side-channel information. Many profiling-based side-channel attacks based on deep learning models such as the MLP (Multi-Layer Perceptron) have recently been researched. In this paper, we implemented a disassembler for the operation instruction set used in the AVR XMEGA128-D4 micro-controller. After measuring template traces for each instruction, we automated the pre-processing step and classified the operation instruction set using a CNN deep learning model. Experimental results showed that all instructions were classified with 87.5% accuracy, and some core instructions used frequently in device operation were classified with 99.6% accuracy.
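
A minimal Keras sketch of a 1D-CNN instruction classifier over power-consumption traces, in the spirit of the approach above; the trace length, filter sizes, and number of instruction classes are assumptions, not values from the paper.

```python
# Illustrative sketch: 1D-CNN that maps an instruction-aligned power trace to an instruction class.
import tensorflow as tf
from tensorflow.keras import layers, models

trace_length = 500        # samples per instruction-aligned power trace (assumed)
num_instructions = 64     # number of instruction classes to distinguish (assumed)

model = models.Sequential([
    layers.Input(shape=(trace_length, 1)),
    layers.Conv1D(32, kernel_size=11, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=11, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_instructions, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```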