• Title/Summary/Keyword: EBP(Error Back Propagation)


A Design of Reconfigurable Neural Network Processor (재구성 가능한 신경망 프로세서의 설계)

  • 장영진;이현수
    • Proceedings of the IEEK Conference / 1999.11a / pp.368-371 / 1999
  • In this paper, we propose a neural network processor architecture with on-chip learning that can be reconfigured according to the data dependencies of the applied algorithm. Depending on the neural network model applied, the proposed architecture can be configured into either SIMD or SRA(Systolic Ring Array) mode without any change to the on-chip hardware, so as to obtain high throughput; the change of system configuration is controlled by the user program. To process the activation function, which normally requires many cycles to evaluate, we design a unit based on the PWL(Piece-Wise Linear) function approximation method. This unit has a latency of only a single cycle and can process non-linear functions such as the sigmoid and Gaussian functions. We verified the processing mechanism with the EBP(Error Back-Propagation) model. (A rough PWL approximation sketch follows this entry.)

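The abstract above does not give implementation details; as a minimal sketch of the PWL(Piece-Wise Linear) idea it names, the Python snippet below approximates the sigmoid with a small table of line segments and measures the error. The breakpoint range and the 16-segment resolution are arbitrary choices for this example, not values from the paper; a hardware unit would store a slope/intercept pair per segment so each evaluation costs one multiply-add.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative table: 16 segments over [-8, 8].  np.interp performs the same
# piecewise-linear interpolation a slope/intercept hardware table would.
BREAKS = np.linspace(-8.0, 8.0, 17)
VALUES = sigmoid(BREAKS)

def pwl_sigmoid(x):
    """Piece-wise linear sigmoid; inputs outside the table saturate at the endpoint values."""
    return np.interp(x, BREAKS, VALUES)

xs = np.linspace(-10.0, 10.0, 2001)
print("max |error| of the 16-segment approximation:",
      float(np.max(np.abs(pwl_sigmoid(xs) - sigmoid(xs)))))
```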

A Design Method for a New Multi-layer Neural Networks Incorporating Prior Knowledge (사전 정보를 이용한 다층신경망의 설계)

  • 김병호;이지홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.11 / pp.56-65 / 1993
  • This paper presents design considerations for MFNNs(Multilayer Feedforward Neural Networks) based on the distribution of the given teaching patterns. By extracting feature points from the teaching patterns, the structure of the network, including its size and interconnection weights, is initialized. The network is then trained with a modified version of the EBP(Error Back Propagation) algorithm. As a result, the proposed method learns faster than conventional training of MFNNs with randomly chosen initial weights. To show the effectiveness of the suggested approach, simulation results on the approximation of a two-dimensional continuous function are presented. (A plain EBP training sketch follows this entry.)

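The modified EBP algorithm and the feature-point initialization are not specified in the abstract, so the sketch below only shows the conventional EBP baseline it is compared against: one hidden layer of sigmoid units trained by gradient descent on squared error to approximate a two-dimensional continuous function. The target function, network size, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a two-dimensional continuous target function (arbitrary choice).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = np.sin(np.pi * X[:, :1]) * np.cos(np.pi * X[:, 1:])   # shape (200, 1)

n_hidden, lr = 10, 0.1
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(2000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)          # hidden activations
    Y = H @ W2 + b2                   # linear output for function approximation
    E = Y - T
    # Backward pass: propagate the error and descend the squared-error gradient.
    dY = E / len(X)
    dH = (dY @ W2.T) * H * (1.0 - H)  # sigmoid derivative term
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print("final mean squared error:", float(np.mean(E ** 2)))
```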

Image Classification using neural network depending on pattern information quantity (패턴 정보량에 따른 신경망을 이용한 영상분류)

  • Lee, Yun-Jung;Kim, Do-Nyun;Cho, Dong-Sub
    • Proceedings of the KIEE Conference / 1995.07b / pp.959-961 / 1995
  • The objective of most image processing applications is to extract meaningful information from one or more pictures. This can be done efficiently using neural networks, which are widely used for image classification and recognition. In a neural network, however, background and meaningful information are processed with the same weight in the input layer. In this paper, we propose an image classification method using neural networks, in particular EBP(Error Back Propagation), together with a preprocessing step in which the background is compressed and the meaningful information is emphasized. For the preprocessing we use the quadtree approach, a hierarchical data structure based on a regular decomposition of space. (A quadtree decomposition sketch follows this entry.)

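The abstract does not describe its quadtree criterion, so the following is a generic sketch: a block is split into four quadrants until its pixel variance falls below a threshold, so flat background collapses into a few large leaves while detailed regions stay at fine resolution. The test image, threshold, and homogeneity measure are assumptions for illustration only.

```python
import numpy as np

def quadtree(img, x, y, size, thresh, leaves):
    """Recursively split a square block until it is homogeneous enough.

    Homogeneous background collapses into a few large leaves, while detailed
    regions are kept at fine resolution; the threshold is illustrative.
    """
    block = img[y:y + size, x:x + size]
    if size == 1 or block.std() <= thresh:
        leaves.append((x, y, size, float(block.mean())))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(img, x + dx, y + dy, half, thresh, leaves)

# Toy 64x64 image: flat background with one textured square of "meaningful" detail.
img = np.zeros((64, 64))
img[20:36, 24:40] = np.random.default_rng(0).random((16, 16))

leaves = []
quadtree(img, 0, 0, 64, thresh=0.05, leaves=leaves)
print(len(leaves), "quadtree leaves instead of", 64 * 64, "pixels")
```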

Improvement of learning method in pattern classification (패턴분류에서 학습방법 개선)

  • Kim, Myung-Chan;Choi, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.3 no.6 / pp.594-601 / 1997
  • A new algorithm is proposed for training the multilayer perceptron(MLP) in pattern classification problems to accelerate learning. It is shown that the sigmoid activation function of the output node can have a detrimental effect on the performance of learning. To overcome this effect and to use the information fully in supervised learning, an objective function for binary modes is proposed. This objective function is composed of two new output activation functions that are used selectively depending on the desired values of the training patterns. The effect of the objective function is analyzed, and a training algorithm based on it is proposed and tested on several examples. Simulation results show that the proposed method performs better than the conventional error back propagation (EBP) method. (A sketch of the output-node saturation effect follows this entry.)

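The proposed objective function and the two output activations are not given in the abstract; the sketch below only illustrates the detrimental effect it refers to, namely that in conventional EBP with a squared-error objective the output-node delta contains the factor y(1 - y), which nearly vanishes when a sigmoid output saturates at the wrong extreme.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Output-node delta for conventional EBP with squared error and sigmoid output:
#     delta = (t - y) * y * (1 - y)
# When the node saturates at the wrong extreme (y near 0 with target 1),
# y * (1 - y) is almost zero and learning nearly stalls.
target = 1.0
for net in (-8.0, -2.0, 0.0, 2.0):
    y = sigmoid(net)
    delta = (target - y) * y * (1.0 - y)
    print(f"net={net:+.1f}  y={y:.4f}  delta={delta:.6f}")
```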

A Study on Stabilization Control of Inverted Pendulum System using Evolving Neural Network Controller (진화 신경회로망 제어기를 이용한 도립진자 시스템의 안정화 제어에 관한 연구)

  • 김민성;정종원;성상규;박현철;심영진;이준탁
    • Proceedings of the Korean Society of Marine Engineers Conference / 2001.05a / pp.243-248 / 2001
  • The stabilization control of an Inverted Pendulum(IP) system is difficult because of its nonlinearity and structural instability. In this paper, an Evolving Neural Network Controller(ENNC) that does not use Error Back Propagation(EBP) is presented. An ENNC is described simply by a genetic representation using an encoding strategy for the type and slope of each activation function, the biases, the weights, and so on. Through evolutionary programming with three genetic operations, selection, crossover, and mutation, the controller is optimally evolved by simultaneously updating the connection patterns and weights of the neural network. The performance of the proposed ENNC(PENNC) is compared with that of a conventional optimal controller and a conventional evolving neural network controller(CENNC) through simulation and experimental results, showing that the finally optimized PENNC is very useful for the stabilization control of an IP system. (An evolutionary-programming sketch follows this entry.)

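As a minimal, EBP-free illustration of the evolutionary idea (selection, crossover, and mutation acting directly on network parameters), the sketch below evolves the weights of a tiny fixed-topology network against a stand-in fitness function. The real ENNC also encodes activation types, slopes, and connection patterns and is evaluated on an inverted pendulum; everything here (population size, mutation scale, target mapping) is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in fitness: how well a tiny fixed-topology net (4 hidden tanh units)
# reproduces a reference mapping on a batch of sample inputs.
X = rng.uniform(-1.0, 1.0, size=(64, 2))
T = np.tanh(2.0 * X[:, 0] - X[:, 1])

def decode(g):                       # genome (17 numbers) -> network output on X
    W1 = g[:8].reshape(2, 4); b1 = g[8:12]
    W2 = g[12:16];            b2 = g[16]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(g):
    return -np.mean((decode(g) - T) ** 2)            # higher is better

pop = rng.normal(size=(40, 17))
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]          # selection of the best half
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(17) < 0.5                  # uniform crossover
        child = np.where(mask, a, b) + rng.normal(scale=0.05, size=17)  # mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

print("best MSE after evolution:", float(-max(fitness(g) for g in pop)))
```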

A Study on the PTP Motion of Robot Manipulators by Neural Networks (신경 회로망에 의한 로보트 매니퓰레이터의 PTP 운동에 관한 연구)

  • Kyung, Kye-Hyun;Ko, Myoung-Sam;Lee, Bum-Hee
    • Proceedings of the KIEE Conference / 1989.07a / pp.679-684 / 1989
  • In this paper, we describe the PTP motion of robot manipulators using neural networks. PTP motion requires an inverse kinematic solution and a joint trajectory generation algorithm. We use multi-layered Perceptron neural networks with the Error Back Propagation(EBP) learning rule for the inverse kinematic problem, and investigate the performance of the networks while varying the number of hidden layers, the number of neurons in each hidden layer, and the number of learning sweeps. We propose a method for solving the inverse kinematic problem by adding error compensation neural networks(ECNN). We also implement the neural networks proposed by Grossberg et al. for automatic trajectory generation and discuss the problems in detail. Applying the neural networks to the trajectory generation problem, we can reduce the computation time for trajectory generation. (A minimal inverse-kinematics EBP sketch follows this entry.)

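A minimal sketch of the inverse-kinematics idea, assuming a two-link planar arm as a stand-in manipulator: forward kinematics generates (position, joint-angle) training pairs, and a multi-layer perceptron is trained by plain EBP to learn the inverse mapping. Link lengths, network size, learning rate, and the number of learning sweeps are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
L1, L2 = 1.0, 0.7                      # link lengths of a toy 2-link planar arm

# Training pairs: joint angles -> end-effector position (forward kinematics);
# the network then learns the inverse mapping position -> angles with EBP.
q = rng.uniform(0.0, np.pi / 2, size=(500, 2))
x = L1 * np.cos(q[:, :1]) + L2 * np.cos(q[:, :1] + q[:, 1:])
y = L1 * np.sin(q[:, :1]) + L2 * np.sin(q[:, :1] + q[:, 1:])
P = np.hstack([x, y])                  # inputs:  end-effector positions
Q = q / (np.pi / 2)                    # targets: angles scaled to [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lr = 20, 0.5
W1 = rng.normal(scale=0.3, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.3, size=(n_hidden, 2)); b2 = np.zeros(2)

for sweep in range(5000):              # learning sweeps over the pattern set
    H = sigmoid(P @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - Q) * Y * (1.0 - Y) / len(P)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * P.T @ dH;  b1 -= lr * dH.sum(axis=0)

print("RMS joint-angle error (rad):",
      float(np.sqrt(np.mean(((Y - Q) * np.pi / 2) ** 2))))
```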

Optimal Synthesis Method for Binary Neural Network using NETLA (NETLA를 이용한 이진 신경회로망의 최적 합성방법)

  • Sung, Sang-Kyu;Kim, Tae-Woo;Park, Doo-Hwan;Jo, Hyun-Woo;Ha, Hong-Gon;Lee, Joon-Tark
    • Proceedings of the KIEE Conference / 2001.07d / pp.2726-2728 / 2001
  • This paper describes an optimal synthesis method for binary neural networks(BNN) applied to the approximation of a circular region, using a newly proposed learning algorithm [7]. Our objective is to minimize the number of connections and hidden-layer neurons by using the Newly Expanded and Truncated Learning Algorithm(NETLA) for the multilayer BNN. The synthesis method in NETLA is based on the extension principle of Expanded and Truncated Learning(ETL) and on the Expanded Sum of Products(ESP), one of the Boolean expression techniques. It can optimize a given BNN in the binary space without the iterative training required by the conventional Error Back Propagation(EBP) algorithm [6]. If only the true and false patterns are given, the connection weights and threshold values can be determined immediately by the optimal synthesis method of NETLA without any tedious learning. Furthermore, the number of required hidden-layer neurons can be reduced and fast learning of the BNN can be realized. The superiority of NETLA over other algorithms is demonstrated on the approximation problem of a circular region. (A binary threshold-neuron sketch follows this entry.)

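NETLA itself is not reproducible from the abstract; the snippet below only illustrates the kind of network it synthesizes, a binary neural network of hard-limiter neurons with integer weights and thresholds in which each hidden neuron realizes one product term of an ESP-style Boolean expression and the output neuron ORs the terms. The Boolean function, weights, and thresholds are hand-picked for the example, not produced by NETLA.

```python
import numpy as np

def hard_limit(net):
    """Binary neuron: fire (1) when the weighted sum reaches the threshold."""
    return (net >= 0).astype(int)

# Hand-built two-layer BNN realizing f(x1, x2, x3) = x1 x2 + x3' (a sum of
# product terms): each hidden neuron implements one product term with integer
# weights and a threshold, and the output neuron ORs the terms together.
W_hidden = np.array([[1, 1, 0],        # term: x1 AND x2
                     [0, 0, -1]])      # term: NOT x3
t_hidden = np.array([2, 0])            # thresholds of the product terms
w_out, t_out = np.array([1, 1]), 1     # OR of the terms

def bnn(x):
    h = hard_limit(W_hidden @ x - t_hidden)
    return int(hard_limit(np.array([w_out @ h - t_out]))[0])

for x in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    x = np.array(x)
    assert bnn(x) == int((x[0] and x[1]) or (not x[2]))
print("hand-synthesized BNN matches the target Boolean function on all inputs")
```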