• Title/Summary/Keyword: EBP (Error Back Propagation)


A Design Method for Error Backpropagation Neural Networks Using Voronoi Diagram

  • 김홍기
    • Journal of the Korean Institute of Intelligent Systems, v.9 no.5, pp.490-495, 1999
  • In this paper, a learning method called VoD-EBP is proposed for neural networks that learn patterns by error back propagation. Based on a Voronoi diagram, the method initializes the weights of the neural network systematically, which results in faster learning and alleviates the local-optimum problem. The method also makes the network design more reliable, because a proper number of hidden nodes is determined from the analysis of the Voronoi diagram. To test its performance, the paper reports results on the XOR problem and the parity problem: VoD-EBP learned faster than the ordinary error back propagation algorithm, and no local-optimum problems were observed.
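
For orientation, here is a minimal sketch of the ordinary EBP baseline the paper benchmarks against: a 2-2-1 sigmoid network trained on XOR with randomly initialized weights (plain NumPy; the proposed Voronoi-based initialization is not reproduced here):

```python
# Plain EBP on XOR with a 2-2-1 sigmoid MLP (NumPy only); random
# initialization, i.e. the baseline, not the paper's VoD-EBP variant.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)   # input -> hidden
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(20000):
    H = sigmoid(X @ W1 + b1)          # forward pass, hidden layer
    Y = sigmoid(H @ W2 + b2)          # forward pass, output layer
    d2 = (Y - T) * Y * (1 - Y)        # output error signal (MSE + sigmoid)
    d1 = (d2 @ W2.T) * H * (1 - H)    # error signal propagated back
    W2 -= lr * H.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

print(Y.round(3).ravel())             # close to [0, 1, 1, 0] on success
```

With an unlucky seed this baseline can stall in a local minimum, which is exactly the failure mode the Voronoi-based initialization is meant to avoid.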


Improving the Error Back-Propagation Algorithm of Multi-Layer Perceptrons with a Modified Error Function

  • 오상훈;이영직
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.6, pp.922-931, 1995
  • In this paper, we propose a modified error function to improve the EBP (Error Back-Propagation) algorithm for Multi-Layer Perceptrons. With the modified error function, an output node of the MLP generates a strong error signal when it is far from the desired value and a weak error signal when it is close. This accelerates the learning speed of the EBP algorithm in the initial stage and prevents over-specialization to the training patterns in the final stage. The effectiveness of our modification is verified through simulations of handwritten digit recognition.
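
The effect described above can be illustrated numerically; the cubic signal below is only a stand-in, since the exact modified error function is defined in the paper:

```python
# Error-signal strength for a sigmoid output y with desired value t = 0.
# The cubic form is an illustrative stand-in for a "strong when far,
# weak when near" signal, not the paper's actual error function.
import numpy as np

y = np.array([0.99, 0.60, 0.10])   # three outputs, all with target t = 0
mse_signal = y * y * (1 - y)       # standard EBP signal (y - t) y (1 - y)
cubic_signal = y ** 3              # stand-in reshaped signal

print(mse_signal.round(4))         # [0.0098 0.144  0.009 ] -> nearly dead at y = 0.99
print(cubic_signal.round(4))       # [0.9703 0.216  0.001 ] -> strong far, weak near
```

Note how the standard signal almost vanishes at y = 0.99 even though the output is badly wrong, while the reshaped one stays strong there and fades only near the target.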


A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal, v.17 no.1, pp.11-22, 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning. The modified function also suppresses the over-specialization to training patterns that occurs with an algorithm based on the cross-entropy cost function, which markedly reduces learning time. Like the cross-entropy function, our new function accelerates the learning speed of the EBP algorithm by letting an output node of the MLP generate a strong error signal when it is far from the desired value. Moreover, it prevents over-specialization to the training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits from the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. Our method also achieves a mean-squared error of 0.627 on the test patterns, better than the 0.667 of the cross-entropy method. These results demonstrate that the new method excels in learning speed as well as in generalization.
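
For reference, the output-node error signals implied by the two standard cost functions this paper positions itself between, for a sigmoid output y with target t (textbook results, not the paper's own derivation):

```latex
% Output error signals for a sigmoid output y with target t:
\delta_{\mathrm{MSE}} = (y - t)\, y (1 - y)  % vanishes if y saturates at the wrong extreme
\delta_{\mathrm{CE}}  = y - t                % stays large whenever y is far from t
```

The proposed error function is designed to act like the cross-entropy signal far from the target while decaying faster than it near the target.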


Comparative Analysis on Error Back Propagation Learning and Layer By Layer Learning in Multi Layer Perceptrons

  • 곽영태
    • Journal of the Korea Institute of Information and Communication Engineering, v.7 no.5, pp.1044-1051, 2003
  • This paper surveys EBP (Error Back Propagation) learning, the cross-entropy function, and LBL (Layer By Layer) learning, which are used for training MLPs (Multi-Layer Perceptrons), and compares the merits and demerits of each method on handwritten digit recognition. Although EBP learning is slower than the other methods in the initial stage, its generalization capability is the best. The cross-entropy function, which makes up for the weak points of EBP learning, is faster than EBP, but its generalization capability is worse because the error signal of the output layer trains toward the target vector linearly. LBL learning is the fastest in the initial stage, but it cannot improve further after a certain point and has the lowest generalization capability. Based on this comparison, the paper proposes criteria for selecting a learning method when applying an MLP.
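
As a rough sketch, one reading of the LBL control flow is to update a single layer at a time while freezing the other, in contrast with EBP's joint update of all layers; the exact LBL variant analyzed in the paper may differ:

```python
# One-layer-at-a-time training sketch (illustrative reading of LBL).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))
T = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy target

W1 = rng.normal(0, 0.5, (4, 8))
W2 = rng.normal(0, 0.5, (8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.3
for sweep in range(300):
    for phase in ("output", "hidden"):       # one layer per phase
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        d2 = (Y - T) * Y * (1 - Y)
        if phase == "output":
            W2 -= lr * H.T @ d2              # hidden weights frozen
        else:
            d1 = (d2 @ W2.T) * H * (1 - H)
            W1 -= lr * X.T @ d1              # output weights frozen

print(float(((Y > 0.5) == T.astype(bool)).mean()))   # training accuracy
```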

A study on the improvement of the EBP learning speed using an acceleration algorithm

  • Choi, Hee-Chang;Kwon, Hee-Yong;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference, 1989.07a, pp.457-460, 1989
  • In this paper, an improvement of the EBP (error back propagation) learning speed using an acceleration algorithm is described. Using the Partan method, an acceleration technique from gradient search, learning is 25% faster than the original EBP algorithm in the simulation results.
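
The Partan (parallel tangents) idea can be sketched on a toy quadratic; the fixed step size and acceleration factor below are illustrative choices, not the paper's schedule:

```python
# Plain gradient descent vs a simplified Partan-style accelerated descent
# on an ill-conditioned quadratic f(x) = 0.5 * x' A x.
import numpy as np

A = np.array([[10.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x

def plain_gd(x, lr=0.09, steps=60):
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

def partan(x, lr=0.09, beta=0.5, steps=30):
    prev = x.copy()
    x = x - lr * grad(x)                  # ordinary first step
    for _ in range(steps):
        y = x - lr * grad(x)              # gradient step
        y = y + beta * (y - prev)         # accelerate along the parallel-tangent direction
        prev, x = x, y
    return x

x0 = np.array([1.0, 1.0])
print(np.linalg.norm(plain_gd(x0)))       # distance to the optimum (at 0) after 60 steps
print(np.linalg.norm(partan(x0)))         # much closer, with half as many gradient steps
```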


Searching a global optimum by stochastic perturbation in error back-propagation algorithm

  • 김삼근;민창우;김명원
    • Journal of the Korean Institute of Telematics and Electronics C, v.35C no.3, pp.79-89, 1998
  • The Error Back-Propagation (EBP) algorithm is widely applied to train multi-layer perceptrons, a neural network model frequently used to solve complex problems such as pattern recognition, adaptive control, and global optimization. However, EBP is basically a gradient descent method, so it may get stuck in a local minimum and fail to find the globally optimal solution. Moreover, a multi-layer perceptron lacks a systematic way to determine a network structure appropriate for a given problem; the number of hidden nodes is usually determined by trial and error. In this paper, we propose a new algorithm to train a multi-layer perceptron efficiently. Our algorithm uses stochastic perturbation in the weight space to escape from local minima: if EBP learning gets stuck in a local minimum, the weights associated with hidden nodes are probabilistically re-initialized. The addition of new hidden nodes can also be viewed as a special case of stochastic perturbation. Using stochastic perturbation, we can thus address the local-minima problem and the design of the network structure in a unified way. Experiments on several benchmark problems, including the parity problem, the two-spirals problem, and the credit-screening data, show that our algorithm is very efficient.
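
A minimal sketch of the escape mechanism described above; the plateau test and the re-initialization probability are illustrative assumptions, not the paper's exact criteria:

```python
# Probabilistic re-initialization of hidden-node weights when EBP stalls.
import numpy as np

rng = np.random.default_rng(2)

def perturb_hidden(W1, p=0.3, scale=1.0):
    """Re-draw each hidden node's fan-in weights with probability p."""
    W1 = W1.copy()
    for j in range(W1.shape[1]):          # one column per hidden node
        if rng.random() < p:
            W1[:, j] = rng.normal(0, scale, W1.shape[0])
    return W1

# Illustrative use inside an EBP training loop:
#   if len(losses) > 50 and losses[-1] > 0.999 * losses[-50]:  # loss plateaued
#       W1 = perturb_hidden(W1)           # stochastic perturbation step
```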


Hydrological Modelling of Water Level near "Hahoe Village" Based on Multi-Layer Perceptron

  • Oh, Sang-Hoon;Wakuya, Hiroshi
    • International Journal of Contents, v.12 no.1, pp.49-53, 2016
  • "Hahoe Village" in Andong region is an UNESCO World Heritage Site. It should be protected against various disasters such as fire, flooding, earthquake, etc. Among these disasters, flooding has drastic impact on the lives and properties in a wide area. Since "Hahoe Village" is adjacent to Nakdong River, it is important to monitor the water level near the village. In this paper, we developed a hydrological modelling using multi-layer perceptron (MLP) to predict the water level of Nakdong River near "Hahoe Village". To develop the prediction model, error back-propagation (EBP) algorithm was used to train the MLP with water level data near the village and rainfall data at the upper reaches of the village. After training with data in 2012 and 2013, we verified the prediction performance of MLP with untrained data in 2014.

Efficient Learning Algorithm using Structural Hybrid of Multilayer Neural Networks and Gaussian Potential Function Networks

  • 박상봉;박래정;박철훈
    • The Journal of Korean Institute of Communications and Information Sciences, v.19 no.12, pp.2418-2425, 1994
  • Although the error backpropagation (EBP) algorithm based on the gradient descent method is a widely used learning algorithm for neural networks, learning sometimes takes a long time to reach the desired accuracy. This paper develops a novel learning method that alleviates the problems of the EBP algorithm, such as local minima, slow speed, and structure size, by adopting Gaussian Potential Function Networks (GPFN) in parallel with multilayer neural networks. Empirical simulations on function approximation show the efficacy of the proposed algorithm, which trains networks faster and with better generalization capability.
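
A structural sketch of such a parallel hybrid, with the two branches summed at the output; the unit counts, widths, and initialization are illustrative, and the training rule is omitted:

```python
# Parallel hybrid of an MLP branch and a Gaussian potential function branch.
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(0, 0.5, (1, 8)); W2 = rng.normal(0, 0.5, (8, 1))  # MLP branch
centers = np.linspace(-2.0, 2.0, 5)                               # GPFN centers
amps = rng.normal(0, 0.5, (5, 1))                                 # GPFN amplitudes
width = 0.5                                                       # shared Gaussian width

def forward(x):                       # x: column of inputs, shape (n, 1)
    mlp = np.tanh(x @ W1) @ W2        # multilayer branch
    gpfn = np.exp(-((x - centers) ** 2) / width**2) @ amps        # Gaussian units
    return mlp + gpfn                 # outputs of both branches summed

print(forward(np.linspace(-2, 2, 5).reshape(-1, 1)).ravel())
```

In the paper both branches are trained together on a shared output error; here only the forward structure is shown.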


Adaptive Learning Rate and Limited Error Signal to Reduce the Sensitivity of Error Back-Propagation Algorithm on the n-th Order Cross-Entropy Error

  • 오상훈;이수영
    • Journal of the Korean Institute of Telematics and Electronics C, v.35C no.6, pp.67-75, 1998
  • Although the nCE (n-th order cross-entropy) error function resolves the incorrect-saturation problem of the conventional EBP (error back-propagation) algorithm, the performance of MLPs (multilayer perceptrons) trained with the nCE function depends heavily on the order of the nCE function. In this paper, we propose an adaptive learning rate that makes the performance of MLPs insensitive to the order of the nCE error. Additionally, we propose limiting the error signal of the output node to prevent unstable learning caused by the adaptive learning rate. The effectiveness of the proposed method is demonstrated in simulations of handwritten digit recognition and thyroid diagnosis tasks.
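
The two devices named above can be sketched as follows; both formulas are placeholders, since the paper derives its own specific forms:

```python
# Illustrative placeholders: a hard limit on the output error signal and a
# learning rate that shrinks with the nCE order n (assumed schedule).
import numpy as np

def limited_signal(delta, limit=1.0):
    """Clip the output-node error signal so high-order nCE stays stable."""
    return np.clip(delta, -limit, limit)

def adaptive_lr(base_lr, n):
    """Shrink the step as the nCE order n grows."""
    return base_lr / float(n)

print(limited_signal(np.array([-3.0, 0.2, 5.0])))   # [-1.   0.2  1. ]
print(adaptive_lr(0.5, n=4))                        # 0.125
```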


Multilayer Neural Network Using Delta Rule: Recognitron III

  • 김춘석;박충규;이기한;황희영
    • The Transactions of the Korean Institute of Electrical Engineers, v.40 no.2, pp.224-233, 1991
  • The multilayer expansion of single-layer NNs (Neural Networks) was needed to solve the linear separability problem, as shown by the classic XOR example. The EBP (Error Back Propagation) learning rule is often used in multilayer neural networks, but it is not without its faults: 1) D. Rumelhart expanded the delta rule, but there is a problem in obtaining the actual output Ca from the linear combination of the weight matrix N between the hidden layer and the output layer and H, which is itself the result of a linear combination of the input pattern and the weight matrix M between the input layer and the hidden layer. 2) Even if using the difference between Ca and the desired output Da to adjust the weight matrix N between the hidden layer and the output layer is valid, using the same value to adjust the weight matrix M between the input layer and the hidden layer is wrong. Recognitron III was proposed to resolve these faults. According to simulation results, since Recognitron III does not train the three-layer NN as a whole but divides it into several single-layer NNs and trains these on the learning patterns, its learning is 32.5 to 72.2 times faster than that of an EBP NN. The number of patterns an EBP NN with n input and output cells and n+1 hidden cells can learn is 2^n, versus n for a Recognitron III of the same size [5]. In pattern generalization, however, the EBP NN falls short of Recognitron III.
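
For reference, a minimal delta-rule trainer for a single-layer sigmoid network, the building block the abstract says Recognitron III trains separately instead of back-propagating through all layers at once:

```python
# Delta-rule training of one single-layer sigmoid network.
import numpy as np

def delta_rule_train(X, T, lr=0.5, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.1, (X.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = 1.0 / (1.0 + np.exp(-(X @ W)))          # sigmoid outputs
        W -= lr * X.T @ ((Y - T) * Y * (1 - Y))     # delta-rule update
    return W

# Example: a single layer can learn OR (linearly separable, unlike XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Xb = np.hstack([X, np.ones((len(X), 1))])           # append a bias input
T = np.array([[0], [1], [1], [1]], dtype=float)
W = delta_rule_train(Xb, T)
print((1.0 / (1.0 + np.exp(-(Xb @ W)))).round(2).ravel())  # approaches [0, 1, 1, 1]
```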
