• Title/Summary/Keyword: Delta learning

Search Result 89

Learning Performance Improvement of Fuzzy RBF Network (퍼지 RBF 네트워크의 학습 성능 개선)

  • Kim Kwang-Baek
    • Journal of Korea Multimedia Society / v.9 no.3 / pp.369-376 / 2006
  • In this paper, we propose an improved fuzzy RBF network which dynamically adjusts the learning rate by applying the Delta-bar-Delta algorithm, in order to improve the learning performance of fuzzy RBF networks. The proposed learning algorithm, which combines the fuzzy C-Means algorithm with the generalized delta learning method, improves learning performance by dynamically adjusting the learning rate. The adjustment is achieved by self-generating middle-layer nodes and by applying the Delta-bar-Delta algorithm to the generalized delta learning method for the training of the middle and output layers. To evaluate the learning performance of the proposed RBF network, we used 40 identifiers extracted from a container image as the training data. Our experimental results show that the proposed method consumes less training time and improves the convergence of learning, compared to the conventional ART2-based RBF network and fuzzy RBF network.

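Several of the results in this listing build on Jacobs' Delta-bar-Delta rule, which keeps a separate learning rate per weight and adapts it from the sign agreement between the current gradient and an exponential average of past gradients. A minimal sketch of that rule follows; the parameter values for kappa, phi, and theta are illustrative, not taken from any of the listed papers:

```python
import numpy as np

def delta_bar_delta_step(w, grad, lr, delta_bar,
                         kappa=0.01, phi=0.1, theta=0.7):
    """One Delta-bar-Delta update; all arrays share w's shape."""
    # Increase lr additively where the gradient agrees in sign with the
    # running average delta_bar; decrease it multiplicatively otherwise.
    agree = delta_bar * grad > 0
    disagree = delta_bar * grad < 0
    lr = lr + kappa * agree
    lr = lr * np.where(disagree, 1.0 - phi, 1.0)
    # Exponential average of past gradients (the "delta bar").
    delta_bar = (1.0 - theta) * grad + theta * delta_bar
    # Gradient-descent step with the per-weight rates.
    w = w - lr * grad
    return w, lr, delta_bar
```

Because the rate grows only while gradient signs stay consistent and shrinks on every sign flip, the per-weight rates self-limit instead of diverging.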

Auto-Tuning Method of Learning Rate for Performance Improvement of Backpropagation Algorithm (역전파 알고리즘의 성능개선을 위한 학습율 자동 조정 방식)

  • Kim, Joo-Woong;Jung, Kyung-Kwon;Eom, Ki-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.39 no.4 / pp.19-27 / 2002
  • We propose an auto-tuning method of the learning rate for performance improvement of the backpropagation algorithm. The proposed method uses a fuzzy logic system for automatic tuning of the learning rate: instead of choosing a fixed learning rate, the fuzzy logic system dynamically adjusts it. The inputs of the fuzzy logic system are ${\Delta}$ and $\bar{{\Delta}}$, and the output is the learning rate. In order to verify the effectiveness of the proposed method, we performed simulations on an N-parity problem, function approximation, and Arabic numeral classification. The results show that the proposed method considerably improves performance compared to plain backpropagation, backpropagation with momentum, and Jacobs' delta-bar-delta.
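The abstract names ${\Delta}$ and $\bar{{\Delta}}$ as the fuzzy system's inputs but does not give its membership functions or rule base. The sketch below is therefore a hypothetical zero-order Sugeno controller built on the sign-agreement product of the two inputs; the membership breakpoints and the three output rate levels are assumptions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_learning_rate(delta, delta_bar, lr_levels=(0.01, 0.1, 0.5)):
    """Map gradient-sign agreement (delta * delta_bar) to a learning rate
    via three fuzzy rules: disagree -> small, flat -> medium, agree -> large."""
    s = np.tanh(delta * delta_bar)          # squash agreement into (-1, 1)
    mu_neg  = tri(s, -2.0, -1.0, 0.0)       # signs disagree -> small rate
    mu_zero = tri(s, -1.0,  0.0, 1.0)       # near flat      -> medium rate
    mu_pos  = tri(s,  0.0,  1.0, 2.0)       # signs agree    -> large rate
    mu = np.array([mu_neg, mu_zero, mu_pos])
    # Weighted-average defuzzification over the singleton output levels.
    return float(mu @ np.array(lr_levels) / mu.sum())
```

Agreeing inputs yield a rate near the large level; disagreeing inputs yield one near the small level, mirroring the additive-increase/multiplicative-decrease intuition of delta-bar-delta in fuzzy form.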

Performance Improvement of Backpropagation Algorithm by Automatic Tuning of Learning Rate using Fuzzy Logic System

  • Jung, Kyung-Kwon;Lim, Joong-Kyu;Chung, Sung-Boo;Eom, Ki-Hwan
    • Journal of information and communication convergence engineering / v.1 no.3 / pp.157-162 / 2003
  • We propose a learning method for improving the performance of the backpropagation algorithm. The proposed method uses a fuzzy logic system for automatic tuning of the learning rate of each weight. Instead of choosing a fixed learning rate, the fuzzy logic system is used to dynamically adjust the learning rate. The inputs of the fuzzy logic system are delta and delta bar, and its output is the learning rate. In order to verify the effectiveness of the proposed method, we performed simulations on the XOR problem, character classification, and function approximation. The results show that the proposed method considerably improves performance compared to general backpropagation, backpropagation with momentum, and Jacobs' delta-bar-delta algorithm.

Optimal Heating Load Identification using a DRNN (DRNN을 이용한 최적 난방부하 식별)

  • Chung, Kee-Chull;Yang, Hai-Won
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.10 / pp.1231-1238 / 1999
  • This paper presents an approach to optimal heating load identification using Diagonal Recurrent Neural Networks (DRNN). The DRNN captures the dynamic nature of a system and, since it is not fully connected, trains much faster than a fully connected recurrent neural network. The architecture of the DRNN is a modified form of the fully connected recurrent neural network with one hidden layer. The hidden layer is comprised of self-recurrent neurons, each feeding its output only into itself. In this study, dynamic backpropagation (DBP) with the delta-bar-delta learning method is used to train an optimal heating load identifier. The delta-bar-delta learning method is an empirical method that gradually adapts the learning rate during training in order to improve accuracy in a short time. The simulation results based on experimental data show that the proposed model is superior to the other methods in most cases, in regard to not only learning speed but also identification accuracy.

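The diagonal recurrent structure described in this entry, in which each hidden neuron feeds its output only into itself, can be sketched as a forward pass where the recurrent weights form a vector rather than a full matrix; the tanh activation and all dimensions are illustrative:

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_rec, W_out):
    """Forward pass of a diagonal recurrent network. Because each hidden
    unit recurs only into itself, w_rec is a vector and the recurrence is
    an elementwise product rather than a matrix multiplication."""
    h = np.zeros(w_rec.shape)
    outputs = []
    for x in x_seq:                       # iterate over time steps
        h = np.tanh(W_in @ x + w_rec * h)  # diagonal self-recurrence
        outputs.append(W_out @ h)
    return np.array(outputs)
```

The vector recurrence is what makes training faster than in a fully connected recurrent network: the number of recurrent parameters grows linearly, not quadratically, in the hidden-layer size.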

Enhanced RBF Network by Using Auto-Tuning Method of Learning Rate, Momentum and ART2

  • Kim, Kwang-baek;Moon, Jung-wook
    • Proceedings of the KAIS Fall Conference / 2003.11a / pp.84-87 / 2003
  • This paper proposes an enhanced RBF network which arbitrates the learning rate and momentum dynamically by using a fuzzy system, in order to effectively adapt the connection weights between the middle layer and the output layer of the RBF network. ART2 is applied as the learning structure between the input layer and the middle layer, and the proposed auto-tuning method arbitrates the learning rate for the connection weights between the middle layer and the output layer. The improvement of the proposed method in terms of learning speed and convergence is verified by comparing it with the conventional delta-bar-delta algorithm and the ART2-based RBF network.


A Study on Enhanced Self-Generation Supervised Learning Algorithm for Image Recognition (영상 인식을 위한 개선된 자가 생성 지도 학습 알고리듬에 관한 연구)

  • Kim, Tae-Kyung;Kim, Kwang-Baek;Paik, Joon-Ki
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.2C / pp.31-40 / 2005
  • We propose an enhanced self-generation supervised learning algorithm that combines an ART algorithm with the delta-bar-delta method. From the input layer to the hidden layer, ART-1 and ART-2 are used to generate nodes, respectively. A winner-take-all method is adopted for the connection-weight adaptation, so that only the stored pattern corresponding to a given input pattern is updated. We test the recognition of student identification cards, certificates of residence, and identifiers from containers, tasks which require hidden-layer nodes in the neural network. In the simulation results, the proposed self-generation supervised learning algorithm reduces the possibility of local minima and network paralysis, and improves learning speed, compared to conventional neural networks.

Enhanced Backpropagation Algorithm by Auto-Tuning Method of Learning Rate using Fuzzy Control System (퍼지 제어 시스템을 이용한 학습률 자동 조정 방법에 의한 개선된 역전파 알고리즘)

  • 김광백;박충식
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.464-470 / 2004
  • We propose an enhanced backpropagation algorithm with auto-tuning of the learning rate using a fuzzy control system, for performance improvement of the backpropagation algorithm. We propose two methods that mitigate the local-minima and learning-time problems. First, if the absolute value of the difference between the target and the actual output value is smaller than or equal to $\varepsilon$, we count the output as correct; if it is bigger than $\varepsilon$, we count it as incorrect. Second, instead of choosing a fixed learning rate, the proposed method dynamically adjusts the learning rate using the fuzzy control system. The inputs of the fuzzy control system are the numbers of correct and incorrect outputs, and its output is the learning rate. For the performance evaluation of the proposed method, we applied it to the XOR problem and numeral pattern classification. The experimental results showed that the proposed method improves performance compared to conventional backpropagation, backpropagation with momentum, and Jacobs' delta-bar-delta method.
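The $\varepsilon$-band correctness test that feeds this entry's fuzzy controller can be sketched as follows; the default value of eps is illustrative, as the abstract does not fix it:

```python
def count_correctness(targets, outputs, eps=0.1):
    """Count outputs within an epsilon band of their targets as correct.
    The (correct, incorrect) counts are the inputs that would drive the
    fuzzy learning-rate controller described in the abstract."""
    correct = sum(1 for t, o in zip(targets, outputs) if abs(t - o) <= eps)
    return correct, len(targets) - correct
```

Counting band membership rather than summing raw errors gives the controller a bounded, scale-free pair of inputs.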

Heart Attack Prediction using Neural Network and Different Online Learning Methods

  • Antar, Rayana Khaled;ALotaibi, Shouq Talal;AlGhamdi, Manal
    • International Journal of Computer Science & Network Security / v.21 no.6 / pp.77-88 / 2021
  • Heart failure represents a critical pathological condition that is challenging to predict and discover at an early stage, with notable morbidity and mortality. Machine learning and neural network techniques play a crucial role in predicting heart attacks, diseases, and more. These techniques give valuable perspectives to clinicians, who may then adjust their diagnosis for each individual patient. This paper evaluated neural network models for heart attack prediction. Several online learning methods were investigated to automatically and accurately predict heart attacks. The UCI dataset was used in this work to train and evaluate first-order and second-order online learning methods, namely the backpropagation, Delta-bar-Delta, Levenberg-Marquardt, and QuickProp learning methods. An optimizer technique was also used to minimize the random noise in the database. A regularization concept was employed to further improve the generalization of the model. Results show that a three-layer NN model with the backpropagation algorithm and the Nadam optimizer achieved promising accuracy on the heart attack prediction task.

Coordinate Calibration of the ODVS using Delta-bar-Delta Neural Network (Delta-bar-Delta 알고리즘을 이용한 ODVS의 좌표 교정)

  • Kim Do-Hyeon;Park Young-Min;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.3 / pp.669-675 / 2005
  • This paper proposes a coordinate transformation and calibration algorithm using a 3D parabolic coordinate transformation and the delta-bar-delta neural algorithm for the omni-directional image captured by a catadioptric camera. Experimental results show that the proposed algorithm is accurate and reliable in the coordinate transformation, which is sensitive to environmental variables.

Variation of activation functions for accelerating the learning speed of the multilayer neural network (다층 구조 신경회로망의 학습 속도 향상을 위한 활성화 함수의 변화)

  • Lee, Byung-Do;Lee, Min-Ho
    • Journal of Sensor Science and Technology / v.8 no.1 / pp.45-52 / 1999
  • In this paper, an enhanced learning method is proposed for improving the learning speed of the error backpropagation learning algorithm. In order to cope with the premature saturation phenomenon at the initial learning stage, a variation scheme for activation functions is introduced by using higher-order functions, which does not require much additional computational load. It naturally raises the learning rate of the inter-connection weights when the derivative of the sigmoid function abnormally decreases to a small value during a learning epoch. We also suggest a hybrid learning method that incorporates the proposed scheme with the momentum training algorithm. Computer simulation results show that the proposed learning algorithm outperforms conventional methods such as the momentum and delta-bar-delta algorithms.

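This last abstract does not spell out which higher-order functions replace the sigmoid derivative. As a hedged illustration of the same idea (keeping weight updates alive when the sigmoid derivative collapses), the sketch below uses the classic derivative-offset remedy as a stand-in; the offset value is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def output_delta(err, net, offset=0.1):
    """Backpropagation delta with a floored sigmoid derivative. The
    constant offset (a stand-in here for the paper's higher-order
    variation) prevents premature saturation from freezing updates:
    even when s*(1-s) is nearly zero, the delta stays usable."""
    s = sigmoid(net)
    return err * (s * (1.0 - s) + offset)
```

At a strongly saturated pre-activation such as net = 10, the plain derivative is on the order of 1e-5, so the unmodified delta would stall; the floored version keeps the update proportional to the error.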