• Title/Summary/Keyword: Gradient Descent Learning


Nonlinear Prediction of Time Series Using Multilayer Neural Networks of Hybrid Learning Algorithm (하이브리드 학습알고리즘의 다층신경망을 이용한 시급수의 비선형예측)

  • 조용현;김지영
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1281-1284
    • /
    • 1998
  • This paper proposes an efficient time series prediction method for nonlinear dynamical discrete-time systems using multilayer neural networks with a hybrid learning algorithm. The proposed learning algorithm is a hybrid backpropagation algorithm that combines steepest descent for high-speed optimization with dynamic tunneling for global optimization. The proposed algorithm has been applied to the y00 samples of 700 sequences to predict the next 100 samples. The simulation results show that the proposed algorithm achieves better convergence and prediction performance than a backpropagation algorithm based on plain gradient descent for multilayer neural networks.

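The two-phase idea in the abstract above, steepest descent until a (possibly local) minimum and then a tunneling move to escape it, can be sketched in a few lines. The objective, step size, and random tunneling rule below are illustrative assumptions, not the paper's algorithm:

```python
import random

def f(x):                      # multimodal test objective (assumed for illustration)
    return x**4 - 3*x**2 + x

def df(x):                     # analytic gradient of f
    return 4*x**3 - 6*x + 1

def descend(x, lr=0.01, steps=500):
    """Plain steepest descent: follows the gradient to the nearest minimum."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def hybrid_minimize(x0, rounds=5, seed=0):
    """Alternate descent with a random 'tunneling' jump, accepting a jump
    only if descending from it finds a strictly better minimum."""
    rng = random.Random(seed)
    best = descend(x0)
    for _ in range(rounds):
        xt = best + rng.uniform(-2.0, 2.0)   # tunneling perturbation
        cand = descend(xt)
        if f(cand) < f(best):
            best = cand
    return best
```

Starting from x0 = 1.5, plain descent stalls in the local minimum near x ≈ 1.13, while the hybrid search can tunnel into the global basin near x ≈ -1.30.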

Time Series Prediction Using a Multi-layer Neural Network with Low Pass Filter Characteristics (저주파 필터 특성을 갖는 다층 구조 신경망을 이용한 시계열 데이터 예측)

  • Min-Ho Lee
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.21 no.1
    • /
    • pp.66-70
    • /
    • 1997
  • In this paper, a new learning algorithm for curvature smoothing and improved generalization in multi-layer neural networks is proposed. To enhance generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to the network. When the total cost, consisting of the output error and the hidden error, is minimized by gradient-descent methods, the additional descent term yields not only Hebbian learning but also synaptic weight decay. The algorithm therefore incorporates error back-propagation, Hebbian learning, and weight decay, while its additional computational requirement over standard error back-propagation is negligible. Computer simulations of time series prediction with the Santa Fe competition data show that the proposed learning algorithm gives much better generalization performance.

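The extra descent term described above can be made concrete on a one-hidden-layer network: minimizing a total cost of output error plus a hidden-activation penalty adds a term proportional to the product of pre- and post-synaptic activity (a Hebbian-type term that acts as decay) to the usual backprop gradient. The network shape, tanh units, and penalty weight below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def forward(x, W1, w2):
    h = np.tanh(W1 @ x)        # hidden activations
    y = w2 @ h                 # linear output
    return h, y

def grads(x, t, W1, w2, lam=0.1):
    """Gradient of E = (y - t)^2 + lam * sum(h^2) w.r.t. the hidden weights W1."""
    h, y = forward(x, W1, w2)
    # usual backprop term from the output error
    dh = 2 * (y - t) * w2
    # extra term from the hidden-activation penalty:
    # proportional to h itself, i.e. an anti-Hebbian/decay contribution
    dh += 2 * lam * h
    dW1 = np.outer(dh * (1 - h**2), x)   # tanh' = 1 - h^2
    return dW1
```

The combined gradient can be checked against a finite-difference approximation of the total cost, confirming that the penalty simply adds to, rather than replaces, the standard backprop update.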

Comparison of Different Deep Learning Optimizers for Modeling Photovoltaic Power

  • Poudel, Prasis;Bae, Sang Hyun;Jang, Bongseog
    • Journal of Integrative Natural Science
    • /
    • v.11 no.4
    • /
    • pp.204-208
    • /
    • 2018
  • A comparison of the performance of different optimizers for photovoltaic power modeling using deep artificial neural network techniques is described in this paper. Six deep learning optimizers are tested on Long Short-Term Memory networks in this study: Adam, Stochastic Gradient Descent, Root Mean Square Propagation, Adaptive Gradient, and the variants Adamax and Nadam. To compare the optimization techniques, both highly and weakly fluctuating photovoltaic power outputs are examined; the power output is real data obtained from a site at Mokpo University. Using Keras in Python, we developed a prediction program for evaluating the optimizers. The prediction error results of each optimizer in both the high- and low-power cases show that Adam performs better than the other optimizers.
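The update rules being compared are easy to state outside of Keras; below is a minimal numpy sketch of the SGD and Adam steps applied to an assumed toy quadratic (the study itself applies them to LSTMs on photovoltaic data, and the learning rates here are arbitrary choices):

```python
import numpy as np

def sgd_step(w, g, lr=0.1):
    """Plain stochastic gradient descent update."""
    return w - lr * g

def adam_step(w, g, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected first/second moment estimates."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * g          # first-moment estimate
    v = b2 * v + (1 - b2) * g**2       # second-moment estimate
    mhat = m / (1 - b1**t)             # bias correction
    vhat = v / (1 - b2**t)
    return w - lr * mhat / (np.sqrt(vhat) + eps), (m, v, t)

# toy objective: f(w) = 0.5 * w^T A w with an ill-conditioned A
A = np.diag([1.0, 100.0])
grad = lambda w: A @ w

w_sgd = np.array([1.0, 1.0])
w_adam = np.array([1.0, 1.0])
state = (np.zeros(2), np.zeros(2), 0)
for _ in range(100):
    w_sgd = sgd_step(w_sgd, grad(w_sgd), lr=0.005)
    w_adam, state = adam_step(w_adam, grad(w_adam), state, lr=0.1)
```

Adam's per-coordinate scaling by the second moment is what lets it use one learning rate across badly scaled directions, which is the usual explanation for results like the one reported above.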

Identification of Dynamic Systems Using a Self Recurrent Wavelet Neural Network: Convergence Analysis Via Adaptive Learning Rates (자기 회귀 웨이블릿 신경 회로망을 이용한 다이나믹 시스템의 동정: 적응 학습률 기반 수렴성 분석)

  • Yoo, Sung-Jin;Choi, Yoon-Ho;Park, Jin-Bae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.9
    • /
    • pp.781-788
    • /
    • 2005
  • This paper proposes an identification method for dynamic systems using a self-recurrent wavelet neural network (SRWNN). The architecture of the proposed SRWNN is a modified model of the wavelet neural network (WNN). Unlike the WNN, however, the mother wavelet layer of the SRWNN is composed of self-feedback neurons, so the SRWNN can store past information of the wavelet. Thus, in the proposed identification architecture, the SRWNN is used to identify nonlinear dynamic systems. The gradient descent method with adaptive learning rates (ALRs) is applied to learn the parameters of the SRWNN identifier (SRWNNI). The ALRs are derived from the discrete Lyapunov stability theorem and are used to guarantee the convergence of the SRWNNI. Finally, through computer simulations, we demonstrate the effectiveness of the proposed SRWNNI.
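The flavor of a Lyapunov-derived adaptive learning rate can be shown on the simplest possible case, a linear-in-parameters model: normalizing the step by the squared regressor norm makes the squared error decrease monotonically for 0 < μ < 2. This scalar-model sketch only illustrates the idea of a stability-bounded, per-step rate; it is not the paper's SRWNN analysis:

```python
import numpy as np

def identify(xs, ys, mu=0.5, eps=1e-8):
    """Online identification of a linear-in-parameters model y = w @ x
    with a per-step learning rate eta_t = mu / (eps + ||x_t||^2).
    For 0 < mu < 2 this normalized step makes the squared prediction
    error a decreasing Lyapunov function in the linear case."""
    w = np.zeros(xs.shape[1])
    for x, y in zip(xs, ys):
        e = y - w @ x                      # prediction error
        eta = mu / (eps + x @ x)           # adaptive learning rate
        w += eta * e * x                   # gradient step on e^2 / 2
    return w

# assumed synthetic identification data
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])
xs = rng.normal(size=(200, 3))
ys = xs @ w_true
w_hat = identify(xs, ys)
```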

Performance Comparison of Logistic Regression Algorithms on RHadoop

  • Jung, Byung Ho;Lim, Dong Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.4
    • /
    • pp.9-16
    • /
    • 2017
  • Machine learning has found widespread implementation and application in many different domains of our lives. Logistic regression is a type of classification in machine learning and is used widely in many fields, including medicine, economics, marketing, and the social sciences. In this paper, we present MapReduce implementations of three existing algorithms for logistic regression, that is, the Gradient Descent, Cost Minimization, and Newton-Raphson algorithms, on RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. We compare the performance of these algorithms for estimating logistic regression coefficients on real and simulated data sets. We also compare the performance of our RHadoop and RHIPE platforms. The performance experiments showed that the Newton-Raphson algorithm outperformed the Gradient Descent and Cost Minimization algorithms on all data tested, and that RHadoop was better than RHIPE on real data, while the opposite held for simulated data.
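The difference between the first and third algorithms above is just the update step: a first-order gradient step versus a Newton-Raphson step that also uses the Hessian. A single-machine numpy sketch of both (the paper's MapReduce/RHadoop distribution of these updates is omitted here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_gd(X, y, lr=0.1, iters=2000):
    """Batch gradient descent on the (averaged) negative log-likelihood."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b -= lr * X.T @ (sigmoid(X @ b) - y) / len(y)
    return b

def fit_newton(X, y, iters=20):
    """Newton-Raphson: solves against the Hessian X^T W X, so it needs
    far fewer iterations than first-order gradient descent."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ b)
        W = p * (1 - p)                      # IRLS weights
        H = X.T @ (X * W[:, None])           # Hessian
        b -= np.linalg.solve(H, X.T @ (p - y))
    return b
```

Both converge to the same maximum-likelihood coefficients; Newton-Raphson typically does so in a handful of iterations, which matches the paper's finding that it performed best on all data tested.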

Human Face Recognition used Improved Back-Propagation (BP) Neural Network

  • Zhang, Ru-Yang;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.4
    • /
    • pp.471-477
    • /
    • 2018
  • As an important key technology used in electronic devices, face recognition has recently become one of the hottest technologies. The traditional BP neural network has strong self-learning, adaptive, and nonlinear mapping abilities, but it also has disadvantages such as slow convergence and a tendency to fall into local minima during training. We therefore propose an algorithm based on a BP neural network combined with the PCA algorithm and other methods, such as the elastic gradient descent method, to improve the original network and its overall recognition efficiency while retaining the advantages of both the PCA algorithm and the BP neural network.
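The "elastic gradient descent method" mentioned above is commonly read as resilient backpropagation (Rprop), which adapts a per-weight step size from gradient sign agreement and ignores gradient magnitudes, addressing exactly the slow-convergence complaint. Assuming that reading, a minimal sketch of the update rule:

```python
import numpy as np

def rprop(grad_fn, w, steps=60, d0=0.1, dmin=1e-6, dmax=1.0,
          up=1.2, down=0.5):
    """Rprop-style 'resilient'/'elastic' gradient descent: each weight
    keeps its own step size, grown when the gradient sign repeats and
    shrunk when it flips; only the gradient's sign drives the step."""
    d = np.full_like(w, d0)                      # per-weight step sizes
    g_prev = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        same = g * g_prev > 0
        flip = g * g_prev < 0
        d[same] = np.minimum(d[same] * up, dmax)    # accelerate on agreement
        d[flip] = np.maximum(d[flip] * down, dmin)  # back off on a sign flip
        w = w - np.sign(g) * d
        g_prev = g
    return w
```

On a simple quadratic this reaches the minimum quickly regardless of how badly the gradient is scaled, which is the property that motivates its use for speeding up BP training.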

Nonlinear Adaptive PID Controller Design based on an Immune Feedback Mechanism and a Gradient Descent Learning (면역 피드백 메카니즘과 경사감소학습에 기초한 비선형 적응 PID 제어기 설계)

  • 박진현;최영규
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2002.12a
    • /
    • pp.113-117
    • /
    • 2002
  • PID controllers, which have been widely used in industry, have a simple structure and robustness to modeling error, but it is difficult for them to achieve uniformly good control performance under system parameter variations or different velocity commands. In this paper, we propose a nonlinear adaptive PID controller based on an immune feedback mechanism and gradient descent learning. This algorithm has a simple structure and robustness to system parameter variations. To verify the performance of the proposed nonlinear adaptive PID controller, speed control of a nonlinear DC motor is performed. The simulation results show that the proposed control systems are effective in tracking a command velocity under system parameter variations.
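Gradient-descent learning of PID gains can be sketched by descending an episode cost numerically. The first-order plant, the cost, and the box projection that keeps every visited gain set stable are all assumptions of this sketch, not the paper's immune-feedback design:

```python
import numpy as np

def run_episode(gains, ref=1.0, steps=200, dt=0.01, a=2.0, b=5.0):
    """Simulate a first-order plant  dy/dt = -a*y + b*u  under PID control
    and return the mean squared tracking error over the episode."""
    kp, ki, kd = gains
    y = 0.0; integ = 0.0; e_prev = ref
    sse = 0.0
    for _ in range(steps):
        e = ref - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-a * y + b * u)      # Euler step of the plant
        sse += e * e
    return sse / steps

def tune(gains, lr=0.05, eps=1e-4, iters=30,
         lo=(0.0, 0.0, 0.0), hi=(3.0, 3.0, 0.15)):
    """Projected numerical gradient descent on the episode cost; the box
    bounds (an assumption of this sketch) keep the closed loop stable
    for every gain set visited during tuning."""
    g = np.array(gains, dtype=float)
    for _ in range(iters):
        grad = np.zeros(3)
        for i in range(3):
            gp = g.copy(); gp[i] += eps
            gm = g.copy(); gm[i] -= eps
            grad[i] = (run_episode(gp) - run_episode(gm)) / (2 * eps)
        g = np.clip(g - lr * grad, lo, hi)
    return g
```

Starting from weak gains, the descent steadily lowers the tracking cost, which is the mechanism the abstract relies on for adapting the controller.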

L1-penalized AUC-optimization with a surrogate loss

  • Hyungwoo Kim;Seung Jun Shin
    • Communications for Statistical Applications and Methods
    • /
    • v.31 no.2
    • /
    • pp.203-212
    • /
    • 2024
  • The area under the ROC curve (AUC) is one of the most common criteria used to measure the overall performance of binary classifiers for a wide range of machine learning problems. In this article, we propose an L1-penalized AUC-optimization classifier that directly maximizes the AUC for high-dimensional data. Toward this, we employ an AUC-consistent surrogate loss function and combine it with the L1-norm penalty, which enables us to estimate coefficients and select informative variables simultaneously. In addition, we develop an efficient optimization algorithm by adopting k-means clustering and proximal gradient descent, which enjoys computational advantages in obtaining solutions for the proposed method. Numerical simulation studies demonstrate that the proposed method shows promising performance in terms of prediction accuracy, variable selectivity, and computational cost.
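A proximal gradient step for an L1 penalty is simply a gradient step on the smooth part of the objective followed by soft-thresholding. The sketch below applies it (as ISTA) to an assumed lasso-type squared-error objective; the paper pairs the same proximal step with an AUC-consistent surrogate loss instead:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1: shrinks toward zero and clips."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam=0.1, iters=500):
    """Proximal gradient descent (ISTA) for
    min_b  0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n = len(y)
    L = np.linalg.norm(X, 2)**2 / n      # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ b - y) / n        # gradient of the smooth part
        b = soft_threshold(b - g / L, lam / L)
    return b
```

The soft-thresholding step is what sets uninformative coefficients exactly to zero, giving the simultaneous estimation and variable selection described in the abstract.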

Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.647-654
    • /
    • 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy is the basis for making artificial-intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive user data, such as photographs and voices, own that data indefinitely: users cannot delete their data and cannot limit the purpose of its use. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we have designed a privacy-preserving deep learning technique that allows multiple workers to jointly use a neural network model, without sharing input datasets, in a multi-party system. We propose a method that can selectively share small subsets using an optimization algorithm based on modified stochastic gradient descent, confirming that it can facilitate training with increased learning accuracy while protecting private information.
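Selective sharing of small gradient subsets can be sketched as a top-k magnitude filter applied before anything leaves a party; the fraction, the single-server update, and all names below are illustrative assumptions, not the paper's protocol:

```python
import numpy as np

def select_top_k(grad, frac=0.1):
    """Keep only the largest-magnitude fraction of gradient entries;
    everything else stays local and is never shared."""
    k = max(1, int(frac * grad.size))
    idx = np.argsort(np.abs(grad))[-k:]
    shared = np.zeros_like(grad)
    shared[idx] = grad[idx]
    return shared

def federated_round(w, party_grads, lr=0.1, frac=0.1):
    """One round of selective sharing: the server applies only the
    top-k entries each party chose to upload."""
    for g in party_grads:
        w = w - lr * select_top_k(g, frac)
    return w
```

Each party thus reveals only a small, self-chosen slice of its gradient per round; training still progresses because the largest entries carry most of the descent direction.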

Implementation of Speed Sensorless Induction Motor drives by Fast Learning Neural Network using RLS Approach

  • Kim, Yoon-Ho;Kook, Yoon-Sang
    • Proceedings of the KIPE Conference
    • /
    • 1998.10a
    • /
    • pp.293-297
    • /
    • 1998
  • This paper presents a newly developed speed-sensorless drive using an RLS-based neural network training algorithm. The proposed algorithm has a time-varying learning rate, while the well-known back-propagation algorithm based on gradient descent has a constant learning rate. The number of iterations required by the new algorithm to converge is less than that of the back-propagation algorithm. Theoretical analysis and experimental results verifying the effectiveness of the proposed control strategy are described.

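The contrast drawn above, a time-varying rate versus back-propagation's constant one, is visible directly in the recursive least squares (RLS) recursion, where the gain vector K plays the role of a per-step learning rate. A minimal sketch for an assumed linear-in-parameters model (not the paper's motor-drive network):

```python
import numpy as np

def rls_fit(xs, ys, lam=0.99, delta=100.0):
    """Recursive least squares: the gain K_t = P x / (lam + x' P x)
    acts as an adaptive, time-varying learning rate, in contrast to a
    fixed-rate gradient-descent (back-propagation) update."""
    n = xs.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                # inverse-covariance estimate
    for x, y in zip(xs, ys):
        e = y - w @ x                    # a-priori prediction error
        K = P @ x / (lam + x @ P @ x)    # adaptive gain vector
        w = w + K * e
        P = (P - np.outer(K, x @ P)) / lam
    return w

# assumed synthetic data for a linear-in-parameters model
rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5, 3.0])
xs = rng.normal(size=(100, 4))
ys = xs @ w_true
w_hat = rls_fit(xs, ys)
```

Because the gain shrinks automatically as the covariance estimate P contracts, RLS typically converges in far fewer samples than a fixed-rate gradient method, which matches the iteration-count claim in the abstract.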