• Title/Summary/Keyword: Dynamic Backpropagation

A new training method of multilayer neural networks using a hybrid of backpropagation algorithm and dynamic tunneling system (후향전파 알고리즘과 동적터널링 시스템을 조합한 다층신경망의 새로운 학습방법)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.4
    • /
    • pp.201-208
    • /
    • 1996
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the backpropagation algorithm and a dynamic tunneling system. The backpropagation algorithm, a fast gradient-descent method, is applied for high-speed optimization. The dynamic tunneling system, a deterministic method with a tunneling phenomenon, is applied for global optimization. When the backpropagation algorithm converges to a local minimum, an approximate initial point for escaping it is estimated by the dynamic tunneling system. The method has been applied to pattern classification, and the simulation results show that the performance of the proposed method is superior to that of the backpropagation algorithm with randomized initial point settings.
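
The two-phase idea in this abstract (fast gradient descent to a local minimum, then a deterministic tunneling step that finds a lower re-entry point) can be sketched on a toy one-dimensional loss. The function, step sizes, and scan-based tunneling rule below are illustrative assumptions, not the paper's formulation:

```python
# Toy loss with a local minimum near x ~ 0.96 (f ~ 0.29) and a global
# minimum near x ~ -1.04 (f ~ -0.31); purely illustrative.
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x ** 3 - 4.0 * x + 0.3

def descend(x, lr=0.01, steps=2000):
    # Phase 1: plain gradient descent (stands in for backpropagation).
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_stuck = descend(1.5)        # converges to the local minimum near 0.96
x_final = x_stuck
# Phase 2: deterministic "tunneling" - scan outward from the stuck point
# for any candidate with strictly lower loss, then resume descent there.
for i in range(601):
    cand = x_stuck - 3.0 + 0.01 * i
    if f(cand) < f(x_stuck) - 1e-6:
        x_final = descend(cand)
        break
```

The scan is the simplest stand-in for a tunneling phase; the paper's deterministic tunneling system is a differential-equation-based mechanism, not a grid search.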

Improving the Training Performance of Neural Networks by using Hybrid Algorithm (하이브리드 알고리즘을 이용한 신경망의 학습성능 개선)

  • Kim, Weon-Ook;Cho, Yong-Hyun;Kim, Young-Il;Kang, In-Ku
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.11
    • /
    • pp.2769-2779
    • /
    • 1997
  • This paper proposes an efficient method for improving the training performance of neural networks using a hybrid of the conjugate gradient backpropagation algorithm and the dynamic tunneling backpropagation algorithm. The conjugate gradient backpropagation algorithm, a fast gradient algorithm, is applied for high-speed optimization. The dynamic tunneling backpropagation algorithm, a deterministic method with a tunneling phenomenon, is applied for global optimization. When the conjugate gradient backpropagation algorithm converges to a local minimum, a new initial point for escaping the local minimum is estimated by the dynamic tunneling backpropagation algorithm. The proposed method has been applied to the parity check and to pattern classification. The simulation results show that the performance of the proposed method is superior to that of the gradient-descent backpropagation algorithm and of a hybrid of gradient descent and dynamic tunneling backpropagation, and the new algorithm converges to the global minimum more often than the gradient-descent backpropagation algorithm.
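
This abstract replaces plain gradient descent with conjugate gradient for the fast local phase. The direction-update rule can be sketched with linear CG on a tiny quadratic (a standard textbook example, not the paper's network code):

```python
# Minimize f(x) = 0.5 x^T A x - b^T x for a 2x2 SPD matrix A; linear CG
# reaches the exact minimizer in at most two iterations in exact arithmetic.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = [0.0, 0.0]
Ax = matvec(A, x)
r = [b[0] - Ax[0], b[1] - Ax[1]]       # residual = negative gradient
p = r[:]                               # first search direction
for _ in range(2):
    Ap = matvec(A, p)
    alpha = dot(r, r) / dot(p, Ap)     # exact line search along p
    x = [x[0] + alpha * p[0], x[1] + alpha * p[1]]
    r_new = [r[0] - alpha * Ap[0], r[1] - alpha * Ap[1]]
    beta = dot(r_new, r_new) / dot(r, r)   # Fletcher-Reeves coefficient
    p = [r_new[0] + beta * p[0], r_new[1] + beta * p[1]]
    r = r_new
```

The beta term is what distinguishes conjugate gradient from steepest descent: each new direction stays conjugate to the previous ones instead of simply following the gradient.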

A study on the Adaptive Controller with Chaotic Dynamic Neural Networks

  • Kim, Sang-Hee;Ahn, Hee-Wook;Wang, Hua O.
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.4
    • /
    • pp.236-241
    • /
    • 2007
  • This paper presents an adaptive controller using chaotic dynamic neural networks (CDNN) for nonlinear dynamic systems. A new dynamic backpropagation learning method for the proposed chaotic dynamic neural networks is developed for efficient learning, and this learning method includes a convergence condition for improving the stability of the chaotic neural networks. The proposed CDNN is applied to the system identification of a chaotic system and to the adaptive controller. The simulation results show good performance in the identification of the Lorenz equations and in the adaptive control of a nonlinear system, since the CDNN has fast learning characteristics and robust adaptability to nonlinear dynamic systems.
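
The identification target named in this abstract, the Lorenz system, can be generated with a simple Euler integration (standard parameters sigma=10, rho=28, beta=8/3; the step size and horizon are arbitrary choices, and the paper's identification scheme itself is not reproduced here):

```python
# Euler integration of the Lorenz equations:
#   dx/dt = sigma*(y - x),  dy/dt = x*(rho - z) - y,  dz/dt = x*y - beta*z
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
x, y, z = 1.0, 1.0, 1.0
dt = 0.001
xmax = 0.0
for _ in range(10000):          # 10 time units
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xmax = max(xmax, abs(x))    # track excursion onto the attractor lobes
```

A trajectory like this would serve as the input/target sequence for an identification experiment; the state stays bounded on the attractor while visiting both lobes.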

A study on the Adaptive Neural Controller with Chaotic Neural Networks (카오틱 신경망을 이용한 적응제어에 관한 연구)

  • Sang Hee Kim;Won Woo Park;Hee Wook Ahn
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.3
    • /
    • pp.41-48
    • /
    • 2003
  • This paper presents an indirect adaptive neuro-controller using modified chaotic neural networks (MCNN) for nonlinear dynamic systems. A modified chaotic neural network model is presented that simplifies the traditional chaotic neural networks and strengthens their dynamic characteristics. A new dynamic backpropagation learning method is also developed. The proposed MCNN paradigm is applied to the system identification of a MIMO system and to the indirect adaptive neuro-controller. The simulation results show good performance, since the MCNN has robust adaptability to nonlinear dynamic systems.

A Study on a Recurrent Multilayer Feedforward Neural Network (자체반복구조를 갖는 다층신경망에 관한 연구)

  • Lee, Ji-Hong
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.10
    • /
    • pp.149-157
    • /
    • 1994
  • A method of applying a recurrent backpropagation network to identifying or modelling a dynamic system is proposed. After the recurrent backpropagation network, which has the characteristics of both an interpolative network and an associative network, is applied to the XOR problem, a new model of the recurrent backpropagation network is proposed and compared with the original by applying both to the XOR problem. Based on the observation that a function can be approximated by polynomials to arbitrary accuracy, the new model is developed so that it may generate higher-order terms in its internal states. Moreover, it is shown that the new network is successfully applied to recognizing noisy patterns of numbers.
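
The motivation here, that internally generated higher-order terms extend what the network can represent, is easy to see on XOR: it is not linearly separable in x1 and x2, but becomes exactly representable once the second-order product x1*x2 is available. The fixed weights below are a hand-picked illustration, not the paper's learned network:

```python
# XOR truth table; with the product term available,
# u = x1 + x2 - 2*x1*x2 reproduces it exactly with a single linear unit.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
preds = [x1 + x2 - 2 * x1 * x2 for (x1, x2), _ in data]
targets = [t for _, t in data]
```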

A Learning Algorithm for a Recurrent Neural Network Based on the Dual Extended Kalman Filter (두개의 Extended Kalman Filter를 이용한 Recurrent Neural Network 학습 알고리듬)

  • Song, Myung-Geun;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.349-351
    • /
    • 2004
  • The classical dynamic backpropagation learning algorithm suffers from slow learning speed and the difficulty of determining the learning parameters. The extended Kalman filter (EKF) is an effective state-estimation method for nonlinear dynamic systems. This paper presents a learning algorithm using a dual extended Kalman filter (DEKF) for a fully recurrent neural network (FRNN). This DEKF learning algorithm gives the minimum-variance estimates of the weights and the hidden outputs. The proposed DEKF learning algorithm is applied to the system identification of a nonlinear SISO system and compared with the dynamic backpropagation learning algorithm.
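
The EKF weight update at the heart of this approach can be sketched in its simplest form: a single scalar weight of a linear neuron, where the EKF reduces to a recursive least-squares-style update. The data, noise covariance, and initial values below are assumptions for illustration; the paper's dual filter additionally estimates the hidden states:

```python
# Scalar EKF-style update for the weight of a linear neuron y = w * x.
w = 0.0       # initial weight estimate
P = 100.0     # initial weight covariance (large = very uncertain)
R = 0.01      # assumed measurement-noise covariance
data = [(x, 2.5 * x) for x in [0.5, -1.2, 0.8, 2.0, -0.3]]  # true w = 2.5
for x, y in data:
    H = x                          # Jacobian dy/dw (exact in the linear case)
    K = P * H / (H * P * H + R)    # Kalman gain
    w = w + K * (y - w * x)        # innovation-driven weight update
    P = (1.0 - K * H) * P          # covariance shrinks as evidence accrues
```

Because the gain K scales each step by the current uncertainty P, there is no hand-tuned learning rate, which is exactly the advantage over dynamic backpropagation that the abstract points to.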

Nonlinear System Modeling Based on Multi-Backpropagation Neural Network (다중 역전파 신경회로망을 이용한 비선형 시스템의 모델링)

  • Baeg, Jae-Huyk;Lee, Jung-Moon
    • Journal of Industrial Technology
    • /
    • v.16
    • /
    • pp.197-205
    • /
    • 1996
  • In this paper, we propose a new neural architecture. We synthesize the architecture from a combination of structures known as MRCCN (Multi-resolution Radial-basis Competitive and Cooperative Network) and BPN (Backpropagation Network). The proposed neural network is able to improve the learning speed of MRCCN and the mapping capability of BPN. The ability and effectiveness of the proposed architecture in identifying a nonlinear dynamic system are demonstrated by computer simulation.

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison

  • Devi, Swagatika;Jagadev, Alok Kumar;Patnaik, Srikanta
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.2
    • /
    • pp.123-131
    • /
    • 2015
  • Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified input yields the desired output. In this paper, a new hybrid algorithm is proposed for global optimization of the connection weights in an ANN. Dynamic swarms are shown to converge rapidly during the initial stages of a global search, but around the global optimum the search process becomes very slow. In contrast, the gradient-descent method achieves faster convergence around the global optimum, and at the same time its convergence accuracy can be relatively high. Therefore, the proposed hybrid algorithm combines the dynamic particle swarm optimization (DPSO) algorithm with the backpropagation (BP) algorithm, referred to as the DPSO-BP algorithm, to train the weights of an ANN. We show the superiority (time performance and quality of solution) of the proposed hybrid algorithm over more standard algorithms for neural network training. The algorithms are compared on two different datasets, and the simulation results are presented.
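
The division of labor the abstract describes, a swarm for the global search followed by gradient steps for local refinement, can be sketched on a one-dimensional multimodal function. The test function, swarm coefficients, and greedy refinement rule are all illustrative assumptions, not the DPSO-BP formulation:

```python
import math
import random

def f(x):  # multimodal test function (assumed, not from the paper)
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def grad(x):
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

random.seed(0)
# --- global phase: a tiny PSO over [-5, 5] ---
pos = [random.uniform(-5.0, 5.0) for _ in range(10)]
vel = [0.0] * 10
pbest = pos[:]
gbest = min(pos, key=f)
f0 = f(gbest)                       # best value before the search
for _ in range(50):
    for i in range(10):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                  + 1.5 * r2 * (gbest - pos[i]))     # social pull
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
        if f(pos[i]) < f(gbest):
            gbest = pos[i]
# --- local phase: gradient steps refine the swarm's best point ---
x = gbest
for _ in range(200):
    step = x - 0.001 * grad(x)
    if f(step) < f(x):              # greedy: keep only improving steps
        x = step
```

By construction the swarm's best value never worsens and the refinement phase can only improve on it, which mirrors the coarse-then-fine argument of the abstract.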

A neural network architecture for dynamic control of robot manipulators

  • Ryu, Yeon-Sik;Oh, Se-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.1113-1119
    • /
    • 1989
  • Neural network control has many innovative potentials for intelligent adaptive control. Among them, it promises real-time adaptation, robustness, fault tolerance, and self-learning, which can be achieved with little or no system modeling. In this paper, a dynamic robot controller has been developed based on a backpropagation neural network. It gradually learns the robot's dynamic properties through repetitive movements, after being initially trained with a PD controller. Its control performance has been tested on a simulated PUMA 560, demonstrating fast learning and convergence.
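
The bootstrapping step described here, a network that starts by imitating a PD controller, can be sketched with the simplest possible student: a linear unit trained by the delta rule on sampled PD outputs. The gains, sampling grid, and learning rate are all assumed for illustration; the paper's controller is a multilayer backpropagation network:

```python
# A linear "network" u = w1*e + w2*de learns to imitate a PD teacher
# u_pd = Kp*e + Kd*de from sampled (error, error-rate) pairs.
Kp, Kd = 8.0, 0.5          # illustrative PD gains, not from the paper
samples = [(e / 10.0, de / 10.0) for e in range(-5, 6) for de in range(-5, 6)]
w1 = w2 = 0.0
lr = 0.1
for _ in range(500):
    for e, de in samples:
        u_pd = Kp * e + Kd * de    # teacher signal from the PD law
        u_nn = w1 * e + w2 * de    # network output
        err = u_pd - u_nn
        w1 += lr * err * e         # LMS / delta-rule weight update
        w2 += lr * err * de
```

Since the PD law is exactly realizable by the student, the weights converge to the teacher's gains; a nonlinear network trained the same way would then keep adapting beyond the PD baseline, as the abstract describes.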

Design of auto-tuning controller for Dynamic Systems using neural networks (신경회로망을 이용한 동적 시스템의 자기동조 제어기 설계)

  • Cho, Hyun-Seob;Oh, Myoung-Kwan
    • Proceedings of the KAIS Fall Conference
    • /
    • 2007.05a
    • /
    • pp.147-149
    • /
    • 2007
  • The Dynamic Neural Unit (DNU) is based upon the topology of a reverberating circuit in a neuronal pool of the central nervous system. In this paper, we present a genetic DNU-control scheme for unknown nonlinear systems. Our method differs from those using supervised learning algorithms, such as the backpropagation (BP) algorithm, which need training information at each step. The contributions of this paper are a new approach to constructing the neural network architecture and its training.
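
The gradient-free, genetic flavor of training that this abstract contrasts with backpropagation can be sketched as a minimal evolutionary loop on a single parameter. The fitness function, population size, and mutation scale are illustrative assumptions, not the paper's DNU scheme:

```python
import random

random.seed(1)

def loss(w):                    # assumed fitness: distance to a target value
    return (w - 1.7) ** 2

pop = [random.uniform(-5.0, 5.0) for _ in range(20)]
init_best = min(pop, key=loss)
for _ in range(40):
    pop.sort(key=loss)
    parents = pop[:10]                                  # keep the fitter half
    children = [p + random.gauss(0.0, 0.3) for p in parents]  # mutation
    pop = parents + children                            # elitism keeps the best
best = min(pop, key=loss)
```

No gradient of the loss is ever evaluated, which is the point: selection and mutation alone drive the parameter toward the optimum, so no per-step training signal is required.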
