• Title/Summary/Keyword: Dropout Layers


A Deep Neural Network Model Based on a Mutation Operator (돌연변이 연산 기반 효율적 심층 신경망 모델)

  • Jeon, Seung Ho;Moon, Jong Sub
    • KIPS Transactions on Software and Data Engineering, v.6 no.12, pp.573-580, 2017
  • A Deep Neural Network (DNN) is a large layered neural network composed of many layers of non-linear units. Deep learning, as embodied by DNNs, has been applied very successfully in various applications. However, past research has identified many issues in DNNs, of which generalization is the best known. A recent technique, Dropout, successfully addresses this problem. Dropout also acts as noise, so it helps the network learn robust features during training, much like a Denoising AutoEncoder. However, because Dropout requires a large amount of computation, training takes a long time. Since Dropout keeps changing the inter-layer representation during training, the learning rate must be kept small, which lengthens training further. In this paper, using a mutation operation, we reduce computation and improve generalization performance compared with Dropout. We also experimentally compared the proposed method with Dropout and showed that our method is superior.
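
A minimal sketch of the standard (inverted) Dropout mechanism this abstract discusses; it is illustrative only (NumPy, keep probability chosen arbitrarily) and is not the paper's mutation operator:

```python
import numpy as np

def dropout_forward(x, p_keep=0.5, training=True):
    """Inverted dropout: randomly zero units at train time and rescale
    the survivors so the expected activation stays constant."""
    if not training:
        return x  # identity at test time
    mask = (np.random.rand(*x.shape) < p_keep) / p_keep
    return x * mask

# Each forward pass samples a different mask, so the inter-layer
# representation keeps changing during training; per the abstract,
# this is what forces small learning rates.
h = np.random.randn(4, 8)
print(dropout_forward(h, p_keep=0.5))
```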

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.95-108, 2017
  • Recently, AlphaGo, Google DeepMind's artificial intelligence program for Baduk (Go), won a decisive victory against Lee Sedol. Many people believed that machines could not beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what was predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning drew attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether existing deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. Because not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The deep learning techniques were applied as follows. The CNN algorithm recognizes features by reading values adjacent to a specific value, but for business data the distance between fields is largely irrelevant because the fields are usually independent. In this experiment, we therefore set the CNN filter size to the number of fields, so the network learns the characteristics of the whole record at once, and added a hidden layer for decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first in order to reduce the influence of field position.
In the case of the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5, as in the sketch below. The experimental results show that the best-performing model by F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models, which is interesting because CNN has rarely been applied to binary classification problems, beyond the fields where its effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for binary classification because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
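
As a hedged illustration of the dropout configuration described above (each hidden-layer neuron dropped with probability 0.5 in an MLP with two hidden layers), here is a minimal Keras sketch; the layer widths, input dimension, and optimizer are assumptions, not the paper's settings:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 20  # assumption: number of input fields in the bank dataset

model = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),  # each hidden neuron dropped with p = 0.5
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary target: opens an account or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```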

Research on a handwritten character recognition algorithm based on an extended nonlinear kernel residual network

  • Rao, Zheheng;Zeng, Chunyan;Wu, Minghu;Wang, Zhifeng;Zhao, Nan;Liu, Min;Wan, Xiangkui
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.1, pp.413-435, 2018
  • Although handwritten character recognition based on deep networks has been shown to be more accurate than traditional methods, an overly deep network significantly increases the time consumed by parameter training. For this reason, this paper takes both training time and recognition accuracy into consideration and proposes a novel handwritten character recognition algorithm with a newly designed network structure based on an extended nonlinear kernel residual network. This network is not an extremely deep network, and its main design is as follows: (1) an unsupervised apriori algorithm for intra-class clustering, making subsequent network training more targeted; (2) an intermediate convolution model with a pre-processed width level of 2; (3) a composite residual structure with multi-level quick links; and (4) a Dropout layer added after parameter optimization. The algorithm shows superior results on MNIST and SVHN, two benchmark character recognition datasets, and achieves better recognition accuracy and higher recognition efficiency than other deep structures with the same number of layers.
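
The composite residual structure and multi-level quick links are specific to the paper; the sketch below shows only the generic pattern named in item (4), a plain residual block followed by a Dropout layer, with shapes and filter counts as assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Plain two-convolution residual block (not the paper's composite design)."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])  # identity quick link
    return layers.Activation("relu")(y)

inputs = layers.Input(shape=(28, 28, 32))  # illustrative feature-map shape
x = residual_block(inputs, 32)
x = layers.Dropout(0.5)(x)  # Dropout layer added after the residual stage
model = tf.keras.Model(inputs, x)
model.summary()
```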

Prediction of Jamming Techniques by Using LSTM (LSTM을 이용한 재밍 기법 예측)

  • Lee, Gyeong-Hoon;Jo, Jeil;Park, Cheong Hee
    • Journal of the Korea Institute of Military Science and Technology, v.22 no.2, pp.278-286, 2019
  • Conventional methods for selecting jamming techniques in electronic warfare are based on libraries in which a list of jamming techniques for radar signals is recorded. However, the library-based choice of jamming techniques is limited when modified signals are received. In this paper, we propose a method for predicting the jamming technique for radar signals by using deep learning. Long short-term memory (LSTM) is a deep learning method that is effective at learning time-dependent relationships in sequential data. To determine the optimal LSTM model structure for jamming technique prediction, we test learning parameters such as the number of LSTM layers, the number of fully-connected layers, the optimization method, the mini-batch size, and the dropout ratio. Experimental results demonstrate the competent performance of the LSTM model in predicting the jamming technique for radar signals.
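
A minimal sketch of the kind of hyperparameter grid listed above; the signal dimensions, class count, and layer width are assumptions rather than values from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lstm(n_lstm_layers=2, n_dense_layers=1, dropout_ratio=0.3,
               timesteps=32, n_features=8, n_classes=10):
    """Assemble one candidate model from the searched hyperparameters."""
    model = models.Sequential([layers.Input(shape=(timesteps, n_features))])
    for i in range(n_lstm_layers):
        # every LSTM layer except the last must return the full sequence
        model.add(layers.LSTM(64, return_sequences=(i < n_lstm_layers - 1)))
        model.add(layers.Dropout(dropout_ratio))
    for _ in range(n_dense_layers):
        model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    return model

model = build_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```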

A Smart Closet Using Deep Learning and Image Recognition for the Blind (시각장애인을 위한 딥러닝과 이미지인식을 이용한 스마트 옷장)

  • Choi, So-Hee;Kim, Ju-Ha;Oh, Jae-Dong;Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.6, pp.51-58, 2020
  • Blind people have difficulty managing their clothing independently. With the recent growth of the smart appliance market, furniture and home appliances are increasingly incorporating AI and IoT. To support an independent clothing life for the blind, this paper proposes a smart wardrobe with a closet control function, a voice recognition function, and clothing recognition using a CNN algorithm. To increase accuracy in recognizing clothes, the number of layers in the model was varied and the MaxPooling configuration was adjusted. An EarlyStopping callback is applied to preserve learning accuracy when creating the model, and Dropout was added to prevent overfitting. The final model produced by this process achieves 80 percent accuracy in clothing recognition.
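
A hedged Keras sketch tying together the three elements the abstract mentions (MaxPooling adjustment, an EarlyStopping callback, and Dropout); the image size, filter counts, and class count are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import callbacks, layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),       # assumed clothing-image size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),                  # pooling size is one of the tuned knobs
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dropout(0.5),                     # added to prevent overfitting
    layers.Dense(10, activation="softmax"),  # assumed number of clothing classes
])
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.2, callbacks=[early_stop])
```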

Prediction of Asphalt Pavement Service Life using Deep Learning (딥러닝을 활용한 일반국도 아스팔트포장의 공용수명 예측)

  • Choi, Seunghyun;Do, Myungsik
    • International Journal of Highway Engineering, v.20 no.2, pp.57-65, 2018
  • PURPOSES: The study aims to predict the service life of national highway asphalt pavements through deep learning methods, using maintenance history data from the National Highway Pavement Management System. METHODS: To configure the deep learning network, this study used TensorFlow 1.5, an open-source framework with excellent usability. Nine variables were selected as input data: cumulative annual average daily traffic, cumulative equivalent single axle loads, maintenance layer, surface, base, subbase, anti-frost layer, structural number of pavement, and region; service life was chosen as the output. Additionally, for scenario analysis, models were formed with 1, 2, 4, and 8 hidden layers, and a simulation analysis was performed with and without an overfitting-resolution algorithm. RESULTS: The analysis shows that regardless of the number of hidden layers, applying an overfitting-resolution algorithm such as dropout improves prediction capability, as the coefficient of determination ($R^2$) of the test data increases. Furthermore, the sensitivity analysis of the region variable demonstrates that estimating service life requires sufficient consideration of regional characteristics, as $R^2$ reached between 0.73 and 0.84 when regional variables were taken into consideration. CONCLUSIONS: This study shows that it is possible to precisely predict the service life of national highway pavement sections when traffic, pavement thickness, and regional factors are considered, and concludes that the predicted service life provides fundamental data for decision making within pavement management systems.
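
A sketch of the scenario grid described in METHODS: nine input variables, one regression output, hidden-layer counts of 1, 2, 4, and 8, and optional dropout. It uses the modern Keras API rather than TensorFlow 1.5, and the layer width and dropout rate are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dfn(n_hidden, use_dropout=True, width=32):
    """One scenario: n_hidden hidden layers, with or without dropout."""
    model = models.Sequential([layers.Input(shape=(9,))])  # the nine variables above
    for _ in range(n_hidden):
        model.add(layers.Dense(width, activation="relu"))
        if use_dropout:
            model.add(layers.Dropout(0.5))  # assumed rate for the overfitting-resolution case
    model.add(layers.Dense(1))  # predicted service life (regression)
    model.compile(optimizer="adam", loss="mse")
    return model

scenarios = [build_dfn(n, use_dropout=d) for n in (1, 2, 4, 8) for d in (False, True)]
```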

Deep learning-based AI constitutive modeling for sandstone and mudstone under cyclic loading conditions

  • Luyuan Wu;Meng Li;Jianwei Zhang;Zifa Wang;Xiaohui Yang;Hanliang Bian
    • Geomechanics and Engineering, v.37 no.1, pp.49-64, 2024
  • Rocks undergoing repeated loading and unloading over an extended period, for example due to earthquakes, human excavation, and blasting, may gradually accumulate stress and deformation within the rock mass, eventually reaching an unstable state. In this study, a CNN-CCM is proposed to model this mechanical behavior. The structure and hyperparameters of the CNN-CCM are: Conv2D layers × 5; MaxPooling2D layers × 4; Dense layers × 4; learning rate = 0.001; epochs = 50; batch size = 64; dropout = 0.5. The training and validation data for deep learning comprise 71 rock samples and 122,152 data points. The AI rock constitutive model learned by the CNN-CCM predicts strain values (ε1) from mass (M), axial stress (σ1), density (ρ), cycle number (N), confining pressure (σ3), and Young's modulus (E). Five evaluation indicators, R2, MAPE, RMSE, MSE, and MAE, yield respective values of 0.929, 16.44%, 0.954, 0.913, and 0.542, illustrating the good predictive performance and generalization ability of the model. Finally, interpreting the model with the SHAP explanation method reveals that feature importance follows the order N > M > σ1 > E > ρ > σ3. Positive SHAP values indicate positive effects on the predicted strain ε1 for N, M, σ1, and σ3, while negative SHAP values have negative effects; for E, a positive value has a negative effect on the predicted strain ε1, consistent with the influence patterns of conventional physical rock constitutive equations. The study offers a novel approach to investigating the mechanical constitutive model of rocks under cyclic loading and unloading conditions.
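
The layer counts and hyperparameters quoted above can be wired into a hedged Keras sketch. How the six physical inputs are arranged into a 2D map for the Conv2D layers is paper-specific, so the (64, 64, 1) input shape and the filter/width choices below are assumptions made only so the sketch runs:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(64, 64, 1))])  # assumed input map
for i, filters in enumerate((16, 32, 64, 128, 128)):  # Conv2D layers x 5
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    if i < 4:                                          # MaxPooling2D layers x 4
        model.add(layers.MaxPooling2D(2))
model.add(layers.Flatten())
for width in (256, 128, 64):                           # first three of the Dense layers x 4
    model.add(layers.Dense(width, activation="relu"))
    model.add(layers.Dropout(0.5))                     # dropout = 0.5
model.add(layers.Dense(1))                             # fourth Dense layer: strain ε1
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
# model.fit(x_train, y_train, epochs=50, batch_size=64)
```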

Prediction of ship power based on variation in deep feed-forward neural network

  • Lee, June-Beom;Roh, Myung-Il;Kim, Ki-Su
    • International Journal of Naval Architecture and Ocean Engineering, v.13 no.1, pp.641-649, 2021
  • Fuel oil consumption (FOC) must be minimized to determine the economic route of a ship; hence, the ship's power must be predicted prior to route planning. For this purpose, numerical methods based on model test results have been widely used. However, predicting ship power this way is challenging owing to the uncertainty of the model test. An onboard test could solve this problem, but it requires considerable resources and time. Therefore, in this study, a deep feed-forward neural network (DFN) is used to predict ship power via data pattern recognition. To use data in the DFN, the input data and a label (the prediction output) must be configured. Here, the input data are ocean environmental data (wave height, wave period, wave direction, wind speed, wind direction, and sea surface temperature) and the ship's operational data (draft, speed, and heading); ship power is the label. In addition, several treatments are used to improve prediction accuracy. First, the wind- and wave-related environmental data are preprocessed into values relative to the ship's velocity, as sketched below. Second, the structure of the DFN is varied based on the characteristics of the input data. Third, prediction accuracy is analyzed over combinations of five hyperparameters (number of hidden layers, number of hidden nodes, learning rate, dropout, and gradient optimizer). Finally, k-means clustering is performed to analyze the effect of sea state and ship operational status by categorizing the data into several models. The performance of the various prediction models is compared and analyzed using the DFN.
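
The first treatment, re-expressing wind relative to the ship's velocity, is standard vector subtraction in the ship's frame; the sketch below illustrates that idea under the usual meteorological from-direction convention, and is not the paper's exact preprocessing:

```python
import numpy as np

def relative_wind(wind_speed, wind_dir_deg, ship_speed, ship_heading_deg):
    """Apparent wind seen from a moving ship (directions are 'coming from')."""
    # true wind velocity vector (x east, y north)
    wx = -wind_speed * np.sin(np.radians(wind_dir_deg))
    wy = -wind_speed * np.cos(np.radians(wind_dir_deg))
    # ship velocity vector
    sx = ship_speed * np.sin(np.radians(ship_heading_deg))
    sy = ship_speed * np.cos(np.radians(ship_heading_deg))
    # apparent wind = true wind minus ship velocity
    rx, ry = wx - sx, wy - sy
    speed = np.hypot(rx, ry)
    direction = np.degrees(np.arctan2(-rx, -ry)) % 360.0
    return speed, direction

print(relative_wind(10.0, 270.0, 5.0, 0.0))  # beam wind on a northbound ship
```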

A ResNet based multiscale feature extraction for classifying multi-variate medical time series

  • Zhu, Junke;Sun, Le;Wang, Yilin;Subramani, Sudha;Peng, Dandan;Nicolas, Shangwe Charmant
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.5, pp.1431-1445, 2022
  • We construct a deep neural network model named ECGResNet. This model can diagnose eight common cardiovascular diseases from 12-lead ECG data with high accuracy. We chose the 16 blocks of ResNet50 as the main body of the model and added a Squeeze-and-Excitation (SE) module to adaptively learn the relationships between channels. As our feature extraction method, we modified the first convolutional layer of ResNet50, replacing its kernel of size 7 with a superposition of kernels of sizes 8 and 16. This allows the model to focus on the overall trend of the ECG signal while also noticing subtle changes. The model further improves the accuracy of cardiovascular and cerebrovascular disease classification with a fully connected layer that integrates factors such as gender and age. ECGResNet adds Dropout layers to both the residual blocks and the SE module of ResNet50, further avoiding overfitting. The model was trained using five-fold cross-validation and the Flooding training method, reaching an accuracy of 95% and an F1-score of 0.841 on the test set. In summary, we design a new deep neural network, introduce a multi-scale feature extraction method, and apply the SE module to extract features from ECG data.
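
A hedged sketch of a Squeeze-and-Excitation block with a Dropout layer inserted, matching the abstract's description at a generic level; the reduction ratio, dropout rate, and ECG segment length are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=16, drop_rate=0.3):
    """Squeeze-and-Excitation with an extra Dropout, per the abstract."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling1D()(x)               # squeeze over time
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dropout(drop_rate)(s)                     # Dropout inside the SE module
    s = layers.Dense(channels, activation="sigmoid")(s)  # per-channel gates
    s = layers.Reshape((1, channels))(s)
    return layers.Multiply()([x, s])                     # excite: rescale channels

inputs = layers.Input(shape=(5000, 12))  # illustrative 12-lead ECG segment
x = layers.Conv1D(64, 16, padding="same", activation="relu")(inputs)
x = se_block(x)
model = tf.keras.Model(inputs, x)
```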

Enhancement of concrete crack detection using U-Net

  • Molaka Maruthi;Lee, Dong Eun;Kim Bubryur
    • International conference on construction engineering and project management, 2024.07a, pp.152-159, 2024
  • Cracks in structural materials present a critical challenge to infrastructure safety and long-term durability. Timely and precise crack detection is essential for proactive maintenance and the prevention of catastrophic structural failures. This study introduces an approach to this problem based on the U-Net deep learning architecture. The primary objective of the research is to explore the potential of U-Net to enhance the precision and efficiency of concrete crack detection under various environmental conditions. The work begins with the assembly of a comprehensive dataset of diverse concrete crack images, to which advanced image processing techniques were applied to optimize crack visibility and facilitate feature extraction. The U-Net model, well recognized for its proficiency in image segmentation tasks, is then used to achieve precise segmentation and localization of concrete cracks. In terms of accuracy, the research demonstrates a substantial advance in automated detection, with 95% accuracy across all tested concrete materials, surpassing traditional manual inspection methods. The accuracy extends to cracks of varying sizes and orientations and to challenging lighting conditions, underlining the system's robustness and reliability. Reliability was measured using performance metrics such as precision (93%), recall (96%), and F1-score (94%). For validation, the model was tested on a separate dataset and confirmed an accuracy of 94%. The results show that the system performs consistently well across different concrete types and lighting conditions. With real-time monitoring capabilities, the system ensures prompt detection of cracks as they emerge, holding significant potential for reducing the risks associated with structural damage and achieving substantial cost savings.
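
A compact, illustrative U-Net for binary crack segmentation is sketched below; the depth, filter counts, and input size are assumptions, not the paper's configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(256, 256, 3))
# encoder: downsample while growing the channel count
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D(2)(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D(2)(c2)
c3 = conv_block(p2, 128)  # bottleneck
# decoder: upsample and concatenate the encoder skip connections
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.Concatenate()([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
c5 = conv_block(layers.Concatenate()([u1, c1]), 32)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c5)  # per-pixel crack mask
model = models.Model(inputs, outputs)
```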