• Title/Summary/Keyword: Adam optimizer

Novel Optimizer AdamW+ implementation in LSTM Model for DGA Detection

  • Awais Javed;Adnan Rashdi;Imran Rashid;Faisal Amir
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.133-141 / 2023
  • This work takes a deeper look at the Adaptive Moment Estimation (Adam) and Adam with Weight Decay (AdamW) optimizers in a real-world text classification problem, DGA malware detection. AdamW improves on Adam by decoupling weight decay from L2 regularization. This work introduces AdamW+, a novel AdamW variant that further simplifies the weight decay implementation. LSTM models for DGA malware detection trained with Adam, AdamW, and AdamW+ are evaluated on various DGA families/groups as a multiclass text classification task. The proposed AdamW+ optimizer improves on Adam and AdamW in all standard performance metrics, showing that the novel optimizer outperforms both on text classification problems.
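
For context, the key difference AdamW introduces is where weight decay enters the update: plain Adam folds an L2 penalty into the gradient, while AdamW applies decay to the weights directly. The NumPy sketch below contrasts the two; the further AdamW+ simplification is not spelled out in the abstract, so it is not reproduced here.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
              wd=0.01, decoupled=False):
    """One step of plain Adam with L2 regularization (decoupled=False)
    or AdamW-style decoupled weight decay (decoupled=True)."""
    if not decoupled:
        g = g + wd * w                  # L2: decay folded into the gradient
    m = b1 * m + (1 - b1) * g           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g**2        # second-moment (variance) estimate
    m_hat = m / (1 - b1**t)             # bias corrections
    v_hat = v / (1 - b2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    if decoupled:
        w = w - lr * wd * w             # AdamW: decay applied to the weights
    return w, m, v
```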

Acoustic Full-waveform Inversion using Adam Optimizer

  • Kim, Sooyoon;Chung, Wookeen;Shin, Sungryul
    • Geophysics and Geophysical Exploration / v.22 no.4 / pp.202-209 / 2019
  • In this study, an acoustic full-waveform inversion using the Adam optimizer is proposed. The steepest descent method, commonly used to optimize seismic waveform inversion, is fast and easy to apply, but the inverse problem does not always converge correctly. The various optimization methods suggested as alternatives are much more accurate than steepest descent but require long computation times. The Adam optimizer is widely used in deep learning to optimize learning models and is considered one of the most effective optimization methods for diverse models. We therefore propose a seismic full-waveform inversion algorithm that uses the Adam optimizer for fast and accurate convergence. To demonstrate the performance of the proposed algorithm, we compared the updated P-wave velocity model obtained with the Adam optimizer against the inversion results of the steepest descent method, confirming that the proposed algorithm provides fast error convergence and precise inversion results.
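
For reference, the standard Adam update (Kingma & Ba, 2015) that such an inversion applies to the model parameters $\theta$ (here, the P-wave velocities), given the gradient $g_t$ of the misfit, is

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,$$

$$\hat m_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat v_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_{t+1} = \theta_t - \alpha\,\frac{\hat m_t}{\sqrt{\hat v_t}+\epsilon}.$$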

Improvement of multi layer perceptron performance using combination of adaptive moments and improved harmony search for prediction of Daecheong Dam inflow

  • Lee, Won Jin;Lee, Eui Hoon
    • Journal of Korea Water Resources Association / v.56 no.1 / pp.63-74 / 2023
  • High-reliability prediction of dam inflow is necessary for efficient dam operation. Recent studies have predicted dam inflow using the Multi Layer Perceptron (MLP). Existing studies used Gradient Descent (GD)-based optimizers as the MLP operator that searches for the optimal correlation between data. However, GD-based optimizers can converge to local optima and lack a structure for storing and comparing candidate solutions, which degrades prediction performance. This study addresses these shortcomings by developing Adaptive moments combined with Improved Harmony Search (AdamIHS), which couples Adaptive moments (Adam), a GD-based optimizer, with Improved Harmony Search (IHS). To evaluate the learning and prediction performance of MLPs using AdamIHS, Daecheong Dam inflow was learned and predicted and compared against MLPs using GD-based optimizers. Comparing the learning results, the Mean Squared Error (MSE) of the MLP with five hidden layers using AdamIHS was the lowest at 11,577. Comparing the prediction results, the average MSE of the MLP with one hidden layer using AdamIHS was the lowest at 413,262. AdamIHS, developed in this study, is expected to deliver improved prediction performance in various fields.
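
The abstract does not specify how Adam and IHS are coupled in AdamIHS. Purely as an illustration of the harmony-search half, here is a minimal sketch of one classic Harmony Search iteration over a memory of candidate weight vectors; the function names, parameters, and the idea of applying it to network weights are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmony_search_step(memory, losses, loss_fn,
                        hmcr=0.9, par=0.3, bandwidth=0.01):
    """One Harmony Search iteration: compose a new candidate from the
    harmony memory, pitch-adjust it, and replace the worst stored
    candidate if the new one is better. (Hypothetical sketch, not AdamIHS.)"""
    dim = memory.shape[1]
    new = np.empty(dim)
    for i in range(dim):
        if rng.random() < hmcr:                     # memory consideration
            new[i] = memory[rng.integers(len(memory)), i]
            if rng.random() < par:                  # pitch adjustment
                new[i] += bandwidth * rng.uniform(-1.0, 1.0)
        else:                                       # random exploration
            new[i] = rng.uniform(-1.0, 1.0)
    worst = int(np.argmax(losses))
    new_loss = loss_fn(new)
    if new_loss < losses[worst]:                    # keep the better harmony
        memory[worst], losses[worst] = new, new_loss
    return memory, losses
```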

FAST-ADAM in Semi-Supervised Generative Adversarial Networks

  • Kun, Li;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication / v.11 no.4 / pp.31-36 / 2019
  • Unsupervised neural networks did not attract much attention until the Generative Adversarial Network (GAN) was proposed. Using a generator and a discriminator network, a GAN can extract the main characteristics of the original dataset and produce new data with similar latent statistics. However, researchers understand that training a GAN is not easy because of its unstable dynamics: the discriminator usually performs too well when helping the generator learn the statistics of the training dataset, so the generated data is not compelling. Much research has focused on improving the stability and classification accuracy of GANs, but few studies examine how to improve training efficiency and save training time. In this paper, we propose a novel optimizer, named FAST-ADAM, which integrates Lookahead with the ADAM optimizer to train the generator of a semi-supervised generative adversarial network (SSGAN). We assess the feasibility and performance of our optimizer on the Canadian Institute For Advanced Research 10 (CIFAR-10) benchmark dataset. The experimental results show that FAST-ADAM helps the generator converge faster than the original ADAM while maintaining comparable training accuracy.
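
Lookahead (Zhang et al., 2019) maintains slow weights that are periodically pulled toward the fast weights produced by an inner optimizer such as Adam. The sketch below shows only the generic scheme; how FAST-ADAM departs from stock Lookahead is not detailed in the abstract, and `inner_step`/`grad_fn` are assumed interfaces.

```python
import numpy as np

def lookahead(w0, grad_fn, inner_step, k=5, alpha=0.5, outer_steps=100):
    """Generic Lookahead: run k fast inner-optimizer steps, then move
    the slow weights a fraction alpha toward the fast weights."""
    slow = np.asarray(w0, dtype=float).copy()
    state = {}                          # inner-optimizer state (moments, step count, ...)
    for _ in range(outer_steps):
        fast = slow.copy()
        for _ in range(k):              # k fast updates, e.g. Adam steps
            fast = inner_step(fast, grad_fn(fast), state)
        slow += alpha * (fast - slow)   # slow-weight interpolation
    return slow
```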

Comparison of Different Deep Learning Optimizers for Modeling Photovoltaic Power

  • Poudel, Prasis;Bae, Sang Hyun;Jang, Bongseog
    • Journal of Integrative Natural Science / v.11 no.4 / pp.204-208 / 2018
  • This paper compares the performance of different optimizers for photovoltaic power modeling with deep learning. Six deep learning optimizers are tested on Long Short-Term Memory (LSTM) networks: Adam, Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), Adaptive Gradient (Adagrad), and the variants Adamax and Nadam. To compare the optimization techniques, both highly and weakly fluctuating photovoltaic power outputs are examined; the power output is real data obtained from a site at Mokpo University. The prediction program for evaluating the optimizers was developed in Python with Keras. The prediction errors of each optimizer in both the high- and low-power cases show that Adam performs better than the other optimizers.
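
A minimal Keras sketch of this kind of comparison, looping one small LSTM forecaster over the six optimizers, could look as follows; the synthetic series, window length, and layer sizes are placeholders rather than the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Stand-in for the PV power series; replace with the real measurements.
series = np.random.rand(1000).astype("float32")
X = np.stack([series[i:i + 24] for i in range(len(series) - 24)])[..., None]
y = series[24:]

results = {}
for name in ["adam", "sgd", "rmsprop", "adagrad", "adamax", "nadam"]:
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(24, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=name, loss="mse")
    hist = model.fit(X, y, epochs=10, batch_size=32, verbose=0)
    results[name] = hist.history["loss"][-1]   # final training MSE

print(sorted(results.items(), key=lambda kv: kv[1]))
```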

Semantic Segmentation of the Submerged Marine Debris in Undersea Images Using HRNet Model

  • Kim, Daesun;Kim, Jinsoo;Jang, Seonwoong;Bak, Suho;Gong, Shinwoo;Kwak, Jiwoo;Bae, Jaegu
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1329-1341 / 2022
  • Marine debris, which destroys the marine environment and ecosystem and causes marine accidents, is generated every year, and submerged marine debris in particular is difficult to identify and collect because it lies on the seabed. Therefore, deep-learning-based semantic segmentation of waste fishing nets and waste ropes in underwater images was tested to support efficient collection and distribution estimation. For segmentation, a high-resolution network (HRNet), a state-of-the-art deep learning technique, was used, and the performance of each optimizer was compared. For fishing nets, in the order adaptive moment estimation (Adam), Momentum, and stochastic gradient descent (SGD), the results were F1 score = 86.46%, 86.20%, 85.29% and IoU = 76.15%, 75.74%, 74.36%; for ropes, F1 score = 80.49%, 80.48%, 77.86% and IoU = 67.35%, 67.33%, 63.75%. Adam's results were the highest for both fishing nets and ropes. These results confirm the segmentation performance of each optimizer and the feasibility of segmenting marine debris with the latest deep learning techniques. Accordingly, applying such techniques to the identification of submerged marine debris in underwater images is judged to enable more accurate and efficient identification than the naked eye, helping to estimate the distribution of deposited marine debris.

Performance Evaluation of U-net Deep Learning Model for Noise Reduction according to Various Hyper Parameters in Lung CT Images

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology / v.17 no.5 / pp.709-715 / 2023
  • In this study, image quality after noise reduction was evaluated for the U-net deep learning architecture on computed tomography (CT) images. To generate input data, Gaussian noise was applied to ground truth (GT) data, and the 1,300 CT images were split into training, validation, and test sets at an 8:1:1 ratio. Adagrad, Adam, and AdamW were used as optimizer functions, with 10, 50, and 100 epochs, and learning rates of 0.01, 0.001, and 0.0001 were applied to the U-net model to compare output image quality. For quantitative analysis, the peak signal-to-noise ratio (PSNR) and coefficient of variation (COV) were calculated. Based on the results, the deep learning model was useful for noise reduction, and we suggest the optimized hyperparameters for noise reduction in CT images are the AdamW optimizer, 100 epochs, and a learning rate of 0.0001.
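
The sweep described is a small 3x3x3 grid over optimizer, epoch count, and learning rate. A hedged Keras-style sketch of iterating such a grid is below; the tiny stand-in U-net and random data are placeholders, and `tf.keras.optimizers.AdamW` assumes TensorFlow 2.11 or later.

```python
import itertools
import tensorflow as tf

def build_unet():
    """Tiny stand-in for the paper's U-net; any Keras model fits here."""
    inp = tf.keras.Input(shape=(64, 64, 1))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    out = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
    return tf.keras.Model(inp, out)

# Random stand-ins for ground-truth CT slices and their noisy versions.
clean = tf.random.uniform((16, 64, 64, 1))
noisy = clean + tf.random.normal((16, 64, 64, 1), stddev=0.1)

optimizers = {"Adagrad": tf.keras.optimizers.Adagrad,
              "Adam": tf.keras.optimizers.Adam,
              "AdamW": tf.keras.optimizers.AdamW}   # TF >= 2.11

for (name, opt_cls), epochs, lr in itertools.product(
        optimizers.items(), [10, 50, 100], [0.01, 0.001, 0.0001]):
    model = build_unet()
    model.compile(optimizer=opt_cls(learning_rate=lr), loss="mse")
    model.fit(noisy, clean, epochs=epochs, batch_size=8, verbose=0)
    psnr = float(tf.reduce_mean(tf.image.psnr(
        model.predict(noisy, verbose=0), clean, max_val=1.0)))
    print(f"{name} epochs={epochs} lr={lr} PSNR={psnr:.2f}")
```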

Improved Deep Learning Algorithm

  • Kim, Byung Joo
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.119-127 / 2018
  • Training a very large deep neural network can be painfully slow and prone to overfitting, and much research has been done to overcome these problems. In this paper, a deep neural network combining early stopping with the ADAM optimizer is presented. This form of deep network is useful for handling big data because it automatically stops training before overfitting occurs, and its generalization ability is better than that of a plain deep neural network.
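
Early stopping is typically wired in as a validation-loss callback around an Adam-trained network. A minimal Keras sketch of the described combination follows; the toy data, architecture, and patience value are illustrative, not the paper's.

```python
import numpy as np
import tensorflow as tf

# Toy binary-classification data; replace with a real (big) dataset.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stop as soon as validation loss stops improving, i.e. before overfitting.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=200,
          callbacks=[early_stop], verbose=0)
```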

Developing Sentimental Analysis System Based on Various Optimizer

  • Eom, Seong Hoon
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.100-106 / 2021
  • For the past few decades, natural language processing research made little progress, but the widespread use of deep learning has drawn attention to applying neural networks to natural language processing. Sentiment analysis is one of its challenges: since sentiment covers what a person thinks and feels, a sentiment analysis system should be able to infer a person's attitude, opinions, and inclinations from text. The first priority is the simple classification of two sentiments, positive and negative. In this paper we propose a deep-learning-based sentiment analysis system evaluated with various optimizers, namely SGD, ADAM, and RMSProp. Experimental results show that the RMSProp optimizer performs best on the IMDB dataset. Future work is to find better hyperparameters for the sentiment analysis system.
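
A minimal Keras sketch of this optimizer comparison on IMDB is below; the pooled-embedding model and epoch count are placeholders for the paper's unspecified architecture.

```python
import tensorflow as tf

# IMDB reviews as padded integer sequences (top 10k words, length 200).
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.imdb.load_data(num_words=10000)
x_tr = tf.keras.preprocessing.sequence.pad_sequences(x_tr, maxlen=200)
x_te = tf.keras.preprocessing.sequence.pad_sequences(x_te, maxlen=200)

for name in ["sgd", "adam", "rmsprop"]:
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 32),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=name, loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_te, y_te, verbose=0)
    print(name, round(float(acc), 4))
```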

Development of new artificial neural network optimizer to improve water quality index prediction performance

  • Ryu, Yong Min;Kim, Young Nam;Lee, Dae Won;Lee, Eui Hoon
    • Journal of Korea Water Resources Association / v.57 no.2 / pp.73-85 / 2024
  • Predicting the water quality of rivers and reservoirs is necessary for the management of water resources, and Artificial Neural Networks (ANNs) have been used in many studies to predict water quality with high accuracy. Previous studies used Gradient Descent (GD)-based optimizers as the ANN operator that searches for parameters. However, GD-based optimizers can converge to local optima and lack a structure for storing and comparing solutions. This study developed improved optimizers to overcome these disadvantages: optimizers that combine adaptive moments (Adam) and Nesterov-accelerated adaptive moments (Nadam), which have low learning errors among GD-based optimizers, with Harmony Search (HS) or Novel Self-adaptive Harmony Search (NSHS). To evaluate the performance of a Long Short-Term Memory (LSTM) network using the improved optimizers, water quality data from the Dasan water quality monitoring station were used for training and prediction. Comparing the learning results, the Mean Squared Error (MSE) of the LSTM using Nadam combined with NSHS (NadamNSHS) was the lowest at 0.002921. In addition, the prediction rankings according to MSE and R2 for the four water quality indices were compared for each optimizer; by average ranking, the LSTM using NadamNSHS was the highest at 2.25.
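
For reference, Nadam (Dozat, 2016) modifies Adam's step by replacing the bias-corrected first moment with a Nesterov-style blend of the current moment estimate and the raw gradient:

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat v_t} + \epsilon}\left(\beta_1 \hat m_t + \frac{(1-\beta_1)\,g_t}{1-\beta_1^t}\right)$$

How NSHS then wraps around Nadam in NadamNSHS is not specified in the abstract.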