• Title/Summary/Keyword: error back-propagation learning algorithm

Search results: 150

Time Series Prediction Using a Multi-layer Neural Network with Low Pass Filter Characteristics (저주파 필터 특성을 갖는 다층 구조 신경망을 이용한 시계열 데이터 예측)

  • Min-Ho Lee
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.21 no.1
    • /
    • pp.66-70
    • /
    • 1997
  • In this paper, a new learning algorithm for curvature smoothing and improved generalization in multi-layer neural networks is proposed. To enhance generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to multi-layer neural networks. When the total cost, consisting of the output error and the hidden error, is minimized by gradient-descent methods, the additional descent term yields not only Hebbian learning but also synaptic weight decay. The algorithm therefore incorporates error back-propagation, Hebbian learning, and weight decay, while its additional computational requirements over standard error back-propagation are negligible. Computer simulations of time series prediction with the Santa Fe competition data show that the proposed learning algorithm gives much better generalization performance.

  • PDF
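The augmented cost described in the abstract can be sketched as follows. This is a minimal illustration under assumed notation (a single tanh hidden neuron, penalty weight `lam`), not the paper's exact formulation:

```python
import math

def hidden_act(w, x):
    # single hidden neuron with tanh activation
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def augmented_grad(w, x, out_grad, lam):
    """Gradient of E = E_out + (lam/2) * h^2 with respect to the hidden
    weights w; out_grad is dE_out/dh back-propagated from the output layer.
    The extra lam * h factor is the additional descent term that produces
    the Hebbian-like and weight-decay behaviour described in the abstract."""
    h = hidden_act(w, x)
    dh_dw = [(1.0 - h * h) * xi for xi in x]  # tanh'(net) * input
    return [(out_grad + lam * h) * g for g in dh_dw]
```

Setting `lam = 0` recovers the standard back-propagated gradient, which is why the extra cost of the method is negligible.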

Self-Relaxation for Multilayer Perceptron

  • Liou, Cheng-Yuan;Chen, Hwann-Txong
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.113-117
    • /
    • 1998
  • We propose a way to show the inherent learning complexity of the multilayer perceptron. We display the solution space and the error surfaces on the input space of a single neuron with two inputs. The evolution of its weights will follow one of the two error surfaces. We observe that when we use the back-propagation (BP) learning algorithm (1), the weights cannot jump to the lower error surface because of the implicit continuity constraint on weight changes. The self-relaxation approach explicitly finds the best combination of all neurons' two error surfaces. The time complexity of training a multilayer perceptron by self-relaxation is exponential in the number of neurons.

  • PDF

Adaptive Learning Rate and Limited Error Signal to Reduce the Sensitivity of Error Back-Propagation Algorithm on the n-th Order Cross-Entropy Error (오류 역전파 알고리즘의 n차 크로스-엔트로피 오차신호에 대한 민감성 제거를 위한 가변 학습률 및 제한된 오차신호)

  • 오상훈;이수영
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.6
    • /
    • pp.67-75
    • /
    • 1998
  • Although the nCE (n-th order cross-entropy) error function resolves the incorrect saturation problem of the conventional EBP (error back-propagation) algorithm, the performance of MLPs (multilayer perceptrons) trained with the nCE function depends heavily on its order. In this paper, we propose an adaptive learning rate that makes the performance of MLPs insensitive to the order of the nCE error. Additionally, we propose a limited error signal at the output nodes to prevent unstable learning caused by the adaptive learning rate. The effectiveness of the proposed method is demonstrated in simulations of handwritten digit recognition and thyroid diagnosis tasks.

  • PDF
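The two ideas above can be sketched as below. The exact nCE formulas are in the paper, so the scaling rule used here is only an assumed illustration:

```python
def limited_error_signal(delta, limit):
    # clip the output-node error signal so that large nCE error signals
    # cannot destabilise learning under an enlarged learning rate
    return max(-limit, min(limit, delta))

def adaptive_lr(base_lr, n, delta, eps=1e-8):
    # hypothetical adaptation: scale the rate so the effective step size
    # base_lr * |delta|**n stays comparable across different orders n
    return base_lr / (abs(delta) ** (n - 1) + eps)
```

For `n = 1` the rule reduces to the base learning rate, matching the intuition that the sensitivity problem only appears for higher orders.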

Active Control of Sound in a Duct System by Back Propagation Algorithm (역전파 알고리즘에 의한 덕트내 소음의 능동제어)

  • Shin, Joon;Kim, Heung-Seob;Oh, Jae-Eung
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.18 no.9
    • /
    • pp.2265-2271
    • /
    • 1994
  • With the improvement of living standards, the demand for comfortable and quiet environments has increased, and many studies on active noise reduction have been conducted to overcome the limits of passive control methods. In this study, active noise control is performed in a duct system using an intelligent control technique that requires neither the determination of high-order filter coefficients nor a mathematical model of the system. The back-propagation algorithm is applied as the intelligent control technique, and the control system is organized to exclude the error microphone and the high-speed processor that are indispensable in conventional active noise control techniques. Furthermore, learning is performed with an acoustic feedback model, and the effectiveness of the proposed control technique is verified through computer simulation and experiments on active noise control in a duct system.

On the Configuration of Initial Weight Values for the Adaptive Back-Propagation Neural Network (적응 역전파 신경회로망의 초기 연결강도 설정에 관한 연구)

  • 홍봉화
    • The Journal of Information Technology
    • /
    • v.4 no.1
    • /
    • pp.71-79
    • /
    • 2001
  • This paper presents an adaptive back-propagation algorithm that adaptively updates the learning parameters according to the generated error, and configures the range of the initial connection weights according to the difference between the maximum and minimum target values. This algorithm is expected to escape local minima and to create favorable conditions for convergence. The algorithm was tested on three learning patterns: the 3-parity problem, a $7{\times}5$ dot alphabetic font, and handwritten primitive strokes. In all three examples, the probability of becoming trapped in a local minimum was reduced. Furthermore, in the alphabetic font and handwritten primitive stroke tasks, the network improved learning efficiency by about 27%~57.2% over standard back-propagation (SBP).

  • PDF
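The weight-range rule described above could take a form like the following. The normalisation by fan-in and the `scale` factor are illustrative assumptions, since the abstract does not give the exact formula:

```python
import random

def init_weight_range(t_max, t_min, fan_in, scale=1.0):
    # hypothetical rule: widen the initial weight range with the spread
    # between the maximum and minimum target values, normalised by fan-in
    r = scale * (t_max - t_min) / max(fan_in, 1)
    return -r, r

def init_weights(n_weights, t_max, t_min, fan_in):
    # draw each initial connection weight uniformly from the derived range
    lo, hi = init_weight_range(t_max, t_min, fan_in)
    return [random.uniform(lo, hi) for _ in range(n_weights)]
```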

A Simple Approach of Improving Back-Propagation Algorithm

  • Zhu, H.;Eguchi, K.;Tabata, T.;Sun, N.
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.1041-1044
    • /
    • 2000
  • The enhancement to the back-propagation algorithm presented in this paper has resulted from the need to extract sparsely connected networks from networks employing product terms. The enhancement works in conjunction with the back-propagation weight update process, so that the actions of weight zeroing and weight stimulation reinforce each other. It is shown that the error measure can also be interpreted as a rate of weight change (as opposed to ${\Delta}W_{ij}$) and consequently used to determine when weights have reached a stable state. Weights judged to be stable are then compared to a zero-weight threshold; should they fall below this threshold, the weight in question is zeroed. Simulations of such a system show improved learning rates and reduced network connection requirements, with respect to the optimal network solution trained using the normal back-propagation algorithm, for Multi-Layer Perceptron (MLP), Higher Order Neural Network (HONN), and Sigma-Pi networks.

  • PDF
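The stability test and zeroing step might be sketched as below; the thresholds are assumed values, and a real implementation would track the weight-change rate over several epochs rather than one:

```python
def prune_stable_weights(weights, prev_weights, stable_eps, zero_threshold):
    """Zero out weights whose rate of change has stabilised and whose
    magnitude falls below a zero-weight threshold; keep all others."""
    pruned = []
    for w, w_prev in zip(weights, prev_weights):
        stable = abs(w - w_prev) < stable_eps  # rate of change near zero
        if stable and abs(w) < zero_threshold:
            pruned.append(0.0)                 # connection removed
        else:
            pruned.append(w)
    return pruned
```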

Improving the Error Back-Propagation Algorithm of Multi-Layer Perceptrons with a Modified Error Function (역전파 학습의 오차함수 개선에 의한 다층퍼셉트론의 학습성능 향상)

  • 오상훈;이영직
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.6
    • /
    • pp.922-931
    • /
    • 1995
  • In this paper, we propose a modified error function to improve the EBP (error back-propagation) algorithm for multi-layer perceptrons. With the modified error function, an output node of the MLP generates a strong error signal when it is far from the desired value and a weak error signal when it is close. This accelerates the learning speed of the EBP algorithm in the initial stage and prevents overspecialization to the training patterns in the final stage. The effectiveness of our modification is verified through simulations of handwritten digit recognition.

  • PDF
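A toy error-signal shaping with the qualitative property described above — this is only an illustration, not the paper's actual modified error function:

```python
def shaped_error_signal(t, y):
    # illustrative shaping only: relative to the plain error e = t - y,
    # the signal is amplified when the output is far from the target and
    # attenuated when it is close (|e| <= 1 for sigmoid outputs and
    # 0/1 targets)
    e = t - y
    return e * abs(e)
```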

Speeding-up for error back-propagation algorithm using micro-genetic algorithms (미소-유전 알고리듬을 이용한 오류 역전파 알고리듬의 학습 속도 개선 방법)

  • 강경운;최영길;심귀보;전홍태
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.853-858
    • /
    • 1993
  • The error back-propagation (BP) algorithm is widely used for finding the optimum weights of multi-layer neural networks. However, its critical drawback is slow error convergence. The major reason for this is premature saturation, a phenomenon in which the error of a neural network stays almost constant for some period during learning. Inappropriate selection of initial weights causes neurons to be trapped in the premature saturation state, which slows the convergence of the multi-layer neural network. In this paper, to overcome this problem, micro-genetic algorithms (μ-GAs), which can find near-optimal values, are used to select proper initial weights and slopes of the neurons' activation functions. The effectiveness of the proposed algorithm is demonstrated by computer simulations of a two-d.o.f. planar robot manipulator.

  • PDF
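A micro-GA works with a very small population and restarts it whenever it converges, rather than relying on mutation. The sketch below is a generic, assumed variant (the candidate vector would hold the initial weights and activation slopes), not the authors' implementation:

```python
import random

def micro_ga(fitness, dim, pop_size=5, generations=50, lo=-1.0, hi=1.0):
    """Minimal micro-GA sketch: a tiny population keeps its elite each
    generation and re-seeds the rest whenever the population converges."""
    rand_ind = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[0]
        if fitness(elite) < fitness(best):
            best = elite
        # single-point crossover of the elite with every other individual
        children = [elite]
        for mate in pop[1:]:
            cut = random.randrange(dim)
            children.append(elite[:cut] + mate[cut:])
        # restart step: if everyone looks like the elite, re-seed the rest
        spread = max(abs(g - elite[i])
                     for ch in children for i, g in enumerate(ch))
        if spread < 1e-3:
            children = [elite] + [rand_ind() for _ in range(pop_size - 1)]
        pop = children
    return best
```

Here `fitness` would be the network error after a few BP epochs from the candidate initial weights; a cheap surrogate keeps the μ-GA affordable.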

A Study on Face Recognition using a Hybrid GA-BP Algorithm (혼합된 GA-BP 알고리즘을 이용한 얼굴 인식 연구)

  • Jeon, Ho-Sang;Namgung, Jae-Chan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.552-557
    • /
    • 2000
  • In this paper, we propose a face recognition method using GA-BP (Genetic Algorithm-Back-Propagation Network), which optimizes initial parameters such as bias values and weights. Each pixel in the image is used as an input to the neural network. The initial weights of the neural network consist of fixed-point real values and are converted to bit strings to serve as the individuals represented in the genetic algorithm. The fitness is defined as the value yielding the lowest network error, evaluated with a newly defined adaptive re-learning operator, to build an optimized neural network, which we then applied to face recognition experiments. In terms of learning convergence speed, the proposed algorithm converges faster than back-propagation alone and improves recognition performance by about 2.9%.

  • PDF

Searching a global optimum by stochastic perturbation in error back-propagation algorithm (오류 역전파 학습에서 확률적 가중치 교란에 의한 전역적 최적해의 탐색)

  • 김삼근;민창우;김명원
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.3
    • /
    • pp.79-89
    • /
    • 1998
  • The error back-propagation (EBP) algorithm is widely applied to train multi-layer perceptrons, neural network models frequently used to solve complex problems such as pattern recognition, adaptive control, and global optimization. However, EBP is basically a gradient descent method, which may get stuck in a local minimum and fail to find the globally optimal solution. Moreover, a multi-layer perceptron lacks a systematic way to determine the network structure appropriate for a given problem; the number of hidden nodes is usually determined by trial and error. In this paper, we propose a new algorithm to train a multi-layer perceptron efficiently. Our algorithm uses stochastic perturbation in the weight space to escape from local minima: it probabilistically re-initializes the weights associated with hidden nodes whenever EBP learning gets stuck. The addition of new hidden nodes can also be viewed as a special case of stochastic perturbation. Using stochastic perturbation, we can solve the local minima problem and the network structure design problem in a unified way. Experiments with several benchmark problems, including the parity problem, the two-spirals problem, and the credit-screening data, show that our algorithm is very efficient.

  • PDF
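The stagnation test and probabilistic re-initialisation could be sketched as below; the window, tolerance, and probability values are illustrative assumptions, not the paper's settings:

```python
import random

def perturb_if_stuck(hidden_weights, error_history, window=10,
                     tol=1e-4, p_reinit=0.3, lo=-0.5, hi=0.5):
    """If the training error has stagnated over the last `window` epochs,
    re-initialise each hidden node's weight vector with probability
    p_reinit. Returns (weights, perturbed_flag)."""
    if len(error_history) < window:
        return hidden_weights, False
    recent = error_history[-window:]
    if max(recent) - min(recent) >= tol:   # still making progress
        return hidden_weights, False
    perturbed = []
    for node in hidden_weights:
        if random.random() < p_reinit:
            perturbed.append([random.uniform(lo, hi) for _ in node])
        else:
            perturbed.append(node)
    return perturbed, True
```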