• Title/Summary/Keyword: Back-propagation learning


A Modified Error Back Propagation Algorithm Adding Neurons to Hidden Layer (은닉층 뉴우런 추가에 의한 역전파 학습 알고리즘)

  • 백준호;김유신;손경식
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • Vol. 29B, No. 4
    • /
    • pp.58-65
    • /
    • 1992
  • In this paper, a new back-propagation algorithm that adds neurons to the hidden layer is proposed. The proposed algorithm, coupled with back-propagation that omits redundant learning, is applied to the recognition of handwritten numerals. The learning speed and recognition rate of the proposed algorithm are compared with those of the conventional back-propagation algorithm and of back-propagation with redundant learning omitted. The proposed algorithm learns 4 times as fast as conventional back-propagation and 2 times as fast as back-propagation with redundant learning omitted. The recognition rate is 96.2% for the conventional back-propagation algorithm, 96.5% for back-propagation with redundant learning omitted, and 97.4% for the proposed algorithm.

  • PDF
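The abstract does not give the paper's exact growth criterion, so the following NumPy sketch makes assumptions: it trains a one-hidden-layer network with plain back-propagation and adds a randomly initialised hidden neuron whenever the squared error stops improving (the `patience` and `tol` thresholds are illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingMLP:
    """One-hidden-layer MLP that can grow its hidden layer during training."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))

    def forward(self, x):
        h = sigmoid(self.W1 @ x)
        return h, sigmoid(self.W2 @ h)

    def add_neuron(self):
        # Grow the hidden layer by one randomly initialised neuron.
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.5, (1, self.W1.shape[1]))])
        self.W2 = np.hstack([self.W2, rng.normal(0, 0.5, (self.W2.shape[0], 1))])

def train(net, X, Y, lr=0.5, epochs=2000, patience=200, tol=1e-4):
    prev, stall = np.inf, 0
    for _ in range(epochs):
        err = 0.0
        for x, y in zip(X, Y):
            h, o = net.forward(x)
            delta_o = (o - y) * o * (1 - o)               # output-layer delta
            delta_h = (net.W2.T @ delta_o) * h * (1 - h)  # hidden-layer delta
            net.W2 -= lr * np.outer(delta_o, h)
            net.W1 -= lr * np.outer(delta_h, x)
            err += float(np.sum((o - y) ** 2))
        stall = stall + 1 if prev - err < tol else 0
        prev = err
        if stall >= patience:      # error has plateaued: grow the network
            net.add_neuron()
            stall = 0
    return prev
```

The growth step preserves everything already learned: existing weights are untouched, and only the new neuron starts from random values.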

Estimating Regression Function with $\varepsilon$-Insensitive Supervised Learning Algorithm

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 15, No. 2
    • /
    • pp.477-483
    • /
    • 2004
  • One of the major paradigms for supervised learning in the neural network community is back-propagation learning. The standard implementations of back-propagation learning are optimal under the assumption of identical and independent Gaussian noise. In this paper, for regression function estimation, we introduce the $\varepsilon$-insensitive back-propagation learning algorithm, which corresponds to minimizing the absolute error. We compare this algorithm with the support vector machine (SVM), another $\varepsilon$-insensitive supervised learning algorithm that has been very successful in pattern recognition and function estimation problems. For the comparison, we consider a more realistic model in which the noise variance itself depends on the input variables.

  • PDF
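The $\varepsilon$-insensitive loss $L(r)=\max(|r|-\varepsilon,0)$ and its subgradient are easy to state; the sketch below (function names are mine, not the paper's) shows the quantities a back-propagation implementation would substitute for the usual squared-error terms.

```python
import numpy as np

def eps_insensitive_loss(y_pred, y_true, eps=0.1):
    """L(r) = max(|r| - eps, 0): residuals inside the eps-tube cost nothing."""
    return np.maximum(np.abs(y_pred - y_true) - eps, 0.0)

def eps_insensitive_grad(y_pred, y_true, eps=0.1):
    """Subgradient w.r.t. y_pred: 0 inside the tube, sign(residual) outside.
    This replaces the (y_pred - y_true) factor of squared-error back-propagation."""
    r = y_pred - y_true
    return np.where(np.abs(r) <= eps, 0.0, np.sign(r))
```

Unlike the squared-error gradient, this gradient is bounded, which is what makes the resulting estimator robust to heavy-tailed, non-Gaussian noise.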

Edge detection method using unbalanced mutation operator in noise image (잡음 영상에서 불균등 돌연변이 연산자를 이용한 효율적 에지 검출)

  • Kim, Su-Jung;Lim, Hee-Kyoung;Seo, Yo-Han;Jung, Chai-Yeoung
    • The KIPS Transactions:PartB
    • /
    • Vol. 9B, No. 5
    • /
    • pp.673-680
    • /
    • 2002
  • This paper proposes an edge-detection method using evolutionary programming and a momentum back-propagation algorithm. The evolutionary programming omits the crossover operation, in view of its computational cost and limited contribution to the algorithm's capability, and uses only selection and mutation operators. The momentum back-propagation algorithm adds a momentum term to each weight change during learning. Because the learning rate must be set small in the standard back-propagation algorithm, the weight change at each learning step is correspondingly small and learning is slow; the momentum term removes this problem. In consequence, the EP-MBP method is better than the GA-BP method in both learning time and detection rate, showing reduced learning time and effective edge detection.
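The momentum update described above can be sketched in a few lines (parameter values are illustrative): the previous weight change, scaled by a momentum coefficient alpha, is carried forward into the current step, so progress is not throttled by a small learning rate.

```python
import numpy as np

def momentum_step(w, grad, velocity, lr=0.1, alpha=0.9):
    """One momentum back-propagation update:
    delta_w(t) = alpha * delta_w(t-1) - lr * grad."""
    velocity = alpha * velocity - lr * grad
    return w + velocity, velocity
```

On a simple quadratic f(w) = w^2 (gradient 2w), the accumulated velocity carries the iterate toward the minimum much faster than the bare gradient step would.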

Improvement of Learning Capabilities in Multilayer Perceptron by Progressively Enlarging the Learning Domain (점진적 학습영역 확장에 의한 다층인식자의 학습능력 향상)

  • 최종호;신성식;최진영
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • Vol. 29B, No. 1
    • /
    • pp.94-101
    • /
    • 1992
  • The multilayer perceptron, trained by the error back-propagation learning rule, is known as a mapping network that can represent arbitrary functions. However, depending on the complexity of the function and the initial weights of the multilayer perceptron, error back-propagation learning may fall into a local minimum or a flat area, which can require a long learning time or lead to unsuccessful learning. To overcome these difficulties in training the multilayer perceptron by the standard error back-propagation learning rule, this paper proposes a learning method that progressively enlarges the learning domain from a small area to the entire region. The proposed method is derived from an investigation of the roles of hidden nodes and connection weights in a multilayer perceptron that approximates a function of one variable. The validity of the proposed method is illustrated through simulations for a function of one variable and a function of two variables with many extremal points.

  • PDF
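The abstract does not specify how the learning domain is enlarged; one plausible reading (the schedule below is my assumption, not the paper's) is a sequence of training sets drawn from an interval that grows from the centre of the target domain outward to the full region.

```python
import numpy as np

def progressive_domains(full_lo, full_hi, stages, n_points=50):
    """Yield training inputs drawn from a domain that grows from a small
    central interval out to [full_lo, full_hi] over `stages` steps."""
    center = 0.5 * (full_lo + full_hi)
    half = 0.5 * (full_hi - full_lo)
    for s in range(1, stages + 1):
        frac = s / stages
        yield np.linspace(center - frac * half, center + frac * half, n_points)
```

Training on each stage in turn (reusing the weights from the previous stage) lets the network settle the easy central region before the harder outer regions are introduced.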

On the enhancement of the learning efficiency of the adaptive back propagation neural network using the generating and adding the hidden layer node (은닉층 노드의 생성추가를 이용한 적응 역전파 신경회로망의 학습능률 향상에 관한 연구)

  • Kim, Eun-Won;Hong, Bong-Wha
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • Vol. 39, No. 2
    • /
    • pp.66-75
    • /
    • 2002
  • This paper presents an adaptive back-propagation algorithm that improves learning efficiency by updating the learning parameter and varying the number of hidden-layer nodes adaptively, according to the generated error. The algorithm is expected to escape from local minima and to provide the best environment for convergence of the back-propagation neural network. The algorithm was tested in simulation on three learning patterns: exclusive-OR learning, the 3-parity problem, and $7{\times}5$ dot alphabetic font learning. The results show that the probability of becoming trapped in a local minimum is reduced. Furthermore, the neural network's learning efficiency improved by 17.6%~64.7% over the existing back-propagation.

A study on the realization of color printed material check using Error Back-Propagation rule (오류 역전파법으로구현한 컬러 인쇄물 검사에 관한 연구)

  • 한희석;이규영
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • Proceedings of the 1998 Fall Conference of the Korean Institute of Intelligent Systems
    • /
    • pp.560-567
    • /
    • 1998
  • This paper is concerned with reducing noise and distortion in color printed-material images captured by a camera, by median-filtering the input images so that all inputs are under identical conditions. It also proposes a way of comparing the color tone of a normal print with that of an abnormal one: the R, G, B values at five positions extracted from identical (3${\times}$3) blocks of the color print are used to train a block classifier by error back-propagation learning. As a representative algorithm for the multilayer perceptron, error back-propagation is used to solve complex problems. However, error back-propagation is based on gradient descent, so training may converge to a local minimum rather than reach the global minimum, and the network structure must be appropriate for the given problem. In this paper, good results are obtained by improving the initial conditions and adjusting the number of hidden-layer nodes, addressing the problems of real-time processing, learning, and training.

  • PDF

Acceleration the Convergence and Improving the Learning Accuracy of the Back-Propagation Method (Back-Propagation방법의 수렴속도 및 학습정확도의 개선)

  • 이윤섭;우광방
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • Vol. 39, No. 8
    • /
    • pp.856-867
    • /
    • 1990
  • In this paper, the convergence and learning accuracy of the back-propagation (BP) method for neural networks are investigated by 1) analyzing the causes of slow convergence of the BP method, in particular the rapid deceleration that occurs when learning takes place on the part of the sigmoid activation function with a very small first derivative, and 2) proposing a modified logistic activation function by defining a convergence factor based on this analysis. Learning on output patterns of both binary and analog form was tested with the proposed method. For binary output patterns, the test results show that convergence is accelerated and learning accuracy is improved, and the weights and thresholds converge so that the stability of the neural network is enhanced. For analog output patterns, the results show that, after extensive initial transients, the learning error decreases according to the convergence factor, and the learning accuracy is subsequently enhanced.

  • PDF
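The paper's modified logistic function and convergence factor are not reproduced in the abstract. A sketch of the general idea (a gain parameter plus an optional floor on the derivative, the latter akin to Fahlman's well-known flat-spot fix; both parameter names are mine) shows how the vanishing first derivative in the sigmoid's tails can be counteracted.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic activation with a gain (slope) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def sigmoid_deriv(x, gain=1.0, floor=0.0):
    """Derivative of the gained sigmoid. An optional floor keeps the
    gradient from vanishing in the saturated tails, where the standard
    derivative s*(1-s) becomes tiny and learning stalls."""
    s = sigmoid(x, gain)
    return np.maximum(gain * s * (1.0 - s), floor)
```

At x = 10 the plain derivative is about 4.5e-5, so a weight sitting there barely moves; the floored version keeps the update at a usable size.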

On the set up to the Number of Hidden Node of Adaptive Back Propagation Neural Network (적응 역전파 신경회로망의 은닉 층 노드 수 설정에 관한 연구)

  • Hong, Bong-Wha
    • The Journal of Information Technology
    • /
    • Vol. 5, No. 2
    • /
    • pp.55-67
    • /
    • 2002
  • This paper presents an adaptive back-propagation algorithm that updates the learning parameter adaptively according to the generated error and varies the number of hidden-layer nodes. By changing the number of hidden-layer nodes, the algorithm is expected to escape from local minima and to provide the best environment for convergence. The algorithm was tested in simulation on two learning patterns: exclusive-OR learning and $7{\times}5$ dot alphabetic font learning. In both examples, the probability of becoming trapped in a local minimum was reduced. Furthermore, in alphabetic font learning, the neural network's learning efficiency improved by 41.56%~58.28% over conventional back-propagation and the HNAD (Hidden Node Adding and Deleting) algorithm.

  • PDF

Classification of ECG Arrhythmia Signals Using Back-Propagation Network (역전달 신경회로망을 이용한 심전도 파형의 부정맥 분류)

  • 권오철;최진영
    • Journal of Biomedical Engineering Research
    • /
    • Vol. 10, No. 3
    • /
    • pp.343-350
    • /
    • 1989
  • A new algorithm for classifying ECG arrhythmia signals using a back-propagation network is proposed. The baseline of the ECG signal is detected by a high-pass filter and a probability density function, and the input data are then normalized for learning and classification. In addition, the ECG data are scanned to classify arrhythmia signals in which the R-wave is hard to find. A two-layer perceptron with one hidden layer, trained with the error back-propagation learning rule, is used as the artificial neural network. The proposed algorithm shows outstanding performance under amplitude variation, baseline wander, and noise contamination.

  • PDF

A neural network with local weight learning and its application to inverse kinematic robot solution (부분 학습구조의 신경회로와 로보트 역 기구학 해의 응용)

  • 이인숙;오세영
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Proceedings of the 1990 Korean Automatic Control Conference (Domestic Session), Institute of Control, Robotics and Systems; KOEX, Seoul; 26-27 Oct. 1990
    • /
    • pp.36-40
    • /
    • 1990
  • Conventional back-propagation learning is generally slow and rather inaccurate, which makes it difficult to use in control applications. A new multilayer perceptron architecture and its learning algorithm are proposed, consisting of a Kohonen front layer followed by a back-propagation network. The Kohonen layer selects a subset of the hidden-layer neurons for local tuning. This architecture has been tested on the inverse kinematic solution of a robot manipulator, demonstrating its fast and accurate learning capabilities.

  • PDF
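The abstract gives the architecture only in outline; a minimal sketch of the gating step (the winner count `k`, the prototype layout, and both function names are my assumptions) selects the hidden units whose prototypes lie nearest the input, so that back-propagation then tunes only those units' weights.

```python
import numpy as np

def active_units(x, prototypes, k=2):
    """Kohonen-style front layer: return the indices of the k hidden units
    whose prototype vectors are closest to the input x."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return np.argsort(d)[:k]

def local_update(W_hidden, x, deltas, active, lr=0.1):
    """Back-propagation step restricted to the selected (local) units:
    rows of W_hidden outside `active` are left untouched."""
    W_hidden = W_hidden.copy()
    W_hidden[active] -= lr * np.outer(deltas[active], x)
    return W_hidden
```

Restricting each update to a few winning units is what makes the learning "local": different regions of the input space train largely disjoint subsets of hidden neurons, which speeds up and stabilises learning relative to updating every weight on every sample.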