Complexity Control Method of Chaos Dynamics in Recurrent Neural Networks

  • Sakai, Masao (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University) ;
  • Homma, Noriyasu (Dept. of Radiological Tech., College of Medical Science, Tohoku University) ;
  • Abe, Kenichi (Dept. of Electric and Communication Eng., Graduate School of Eng., Tohoku University)
  • Published : 2002.06.01

Abstract

This paper demonstrates that the largest Lyapunov exponent λ of recurrent neural networks can be controlled efficiently by a stochastic gradient method. The core of the proposed method is a novel stochastic approximate formulation of the Lyapunov exponent λ as a function of network parameters such as the connection weights and the thresholds of the neural activation functions. In a gradient method, direct minimization of the squared error (λ - λ^obj)^2, where λ^obj is the desired exponent value, requires a collection of gradients through time, obtained by a recursive calculation from past to present values. This collection is computationally expensive and, because of chaotic instability, leads to unstable control of the exponent for networks with chaotic dynamics. The stochastic formulation derived in this paper approximates the gradient collection without the recursive calculation. This approximation realizes not only a faster calculation of the gradient but also stable control of chaotic dynamics. Owing to the non-recursive calculation, which is independent of the time evolution, the running time of the approximation grows only about as N^2, compared with N^5 T for the direct calculation method. Simulation studies also show that the approximation is robust with respect to network size and that the proposed method can control the chaotic dynamics of recurrent neural networks efficiently.
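The control problem described in the abstract can be illustrated numerically. The sketch below does not reproduce the paper's stochastic approximate formulation; instead it estimates the largest Lyapunov exponent λ of a small tanh recurrent network by the standard trajectory-separation method (cf. Wolf et al., reference 10) and steers λ toward a target λ^obj by minimizing (λ - λ^obj)^2 with a simple finite-difference gradient step on a global gain parameter. The network size, gain, step size, and all other parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

N = 8                                            # number of neurons
W = np.random.default_rng(0).normal(0.0, 1.5, (N, N)) / np.sqrt(N)  # fixed random weights

def step(x, g):
    """One synchronous update of the network state with global gain g."""
    return np.tanh(g * (W @ x))

def largest_lyapunov(g, T=2000, d0=1e-8, seed=1):
    """Estimate lambda by following two nearby trajectories and
    renormalizing their separation to d0 at every step."""
    r = np.random.default_rng(seed)
    x = r.normal(0.0, 0.5, N)
    y = x + d0 * r.normal(0.0, 1.0, N)
    s = 0.0
    for _ in range(T):
        x, y = step(x, g), step(y, g)
        d = np.linalg.norm(y - x)
        s += np.log(d / d0)
        y = x + (d0 / d) * (y - x)               # renormalize separation
    return s / T

lam_obj = 0.0                                    # target exponent (edge of chaos)
g, eta, eps = 2.0, 0.5, 1e-2
for _ in range(30):
    lam = largest_lyapunov(g)
    grad = (largest_lyapunov(g + eps) - lam) / eps   # finite-difference dlambda/dg
    g -= eta * (lam - lam_obj) * grad                # gradient step on (lam - lam_obj)^2
print(f"final gain g = {g:.3f}, lambda = {largest_lyapunov(g):.3f}")
```

Using the same random seed for both trajectories in the finite-difference pair keeps the gradient estimate from being swamped by estimation noise; the paper's contribution is precisely a stochastic approximation that avoids such costly repeated simulation through time.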

Keywords

References

  1. IEEE Trans. Neural Networks v.1 no.1 Identification and control of dynamical systems using neural networks K. S. Narendra;K.Parthasarathy https://doi.org/10.1109/72.80202
  2. Proc.International Joint Conference on Neural Networks(IJCNN-89) v.1 Generic constraints on underspecified target trajectories M. Jordan
  3. Proc. of IEEE ICNN 88 Capabilities of three-layered perceptrons B. Irie;S.Miyake
  4. Trans. SICE v.35 no.1 Complexity control method of dynamics in recurrent neural networks (in Japanese) N. Homma;M. Sakai;K. Abe;H. Takeda https://doi.org/10.9746/sicetr1965.35.138
  5. International Journal of Bifurcation and Chaos v.2 no.4 Prediction of chaotic time series with neural networks and the issue of dynamic modeling J. C. Principe;A. Rathie;J. Kuo https://doi.org/10.1142/S0218127492000598
  6. Neural Information Processing Systems v.7 Dynamic modeling chaotic time series with neural networks J. C. Principe;J. Kuo;G. Tesauro(et al.)(eds.)
  7. Computational learning theory and neural learning systems, vol.4 of Making Learning Systems Practical Dynamic modeling chaotic time series G. Deco;B. Schurmann
  8. Proc. of 14th IFAC World Congress v.K Control method of the Lyapunov exponents for recurrent neural networks N. Homma;M. Sakai;K. Abe;H. Takeda
  9. Neural Networks v.13 Universal learning networks and its application to chaos control K. Hirasawa;X. Wang;J. Murata;J. Hu;C. Jin https://doi.org/10.1016/S0893-6080(99)00100-8
  10. Physica D v.16 Determining Lyapunov exponents from a time series A. Wolf;J. B. Swift;H. L. Swinney;J. A. Vastano
  11. BIT v.7 Solving linear least squares problems by Gram-Schmidt orthogonalization A. Bjorck https://doi.org/10.1007/BF01934122
  12. Neural networks as applied to measurement and control (in Japanese) Y. Nishikawa;S. Kitamura;K. Abe
  13. Probability and statistics M. Ueda;Y. Okada;Y. Yoshitani
  14. Proc. of Artificial Life and Robotics v.2 no.2 Effect of complexity on learning ability of recurrent neural networks N. Homma;K. Kitagawa;K. Abe