Improvement of Learning Capability with Combination of the Generalized Cascade Correlation and Generalized Recurrent Cascade Correlation Algorithms


  • Published: 2009.02.28


This paper presents a combination of the generalized Cascade Correlation and generalized Recurrent Cascade Correlation learning algorithms. The resulting network can grow vertically or horizontally, with or without recurrent units, for the quick solution of pattern classification problems. The learning capability of the proposed algorithm was tested with the sigmoidal and hyperbolic tangent activation functions on the contact lens and balance scale standard benchmark problems, and the results were compared with those obtained with the Cascade Correlation and Recurrent Cascade Correlation algorithms. During learning, the new network was built with a minimal number of created hidden units and showed fast learning speed. Consequently, it is able to improve learning capability.
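The Cascade Correlation family of algorithms that the paper builds on grows the network by training a pool of candidate hidden units and installing the one whose activation correlates most strongly with the network's remaining output error. The following minimal sketch (not the paper's generalized algorithm; the data values and function names are illustrative assumptions) shows that candidate score for the two activation functions the paper tests, the logistic sigmoid and the hyperbolic tangent:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation, one of the two functions compared."""
    return 1.0 / (1.0 + np.exp(-x))

def candidate_score(v, E):
    """Cascade-Correlation candidate score S (Fahlman & Lebiere, 1990):
    the summed magnitude of the covariance between a candidate unit's
    activation v (one value per training pattern) and the residual
    output errors E (patterns x outputs). The candidate with the
    largest S is installed as the next hidden unit."""
    v_centered = v - v.mean()
    E_centered = E - E.mean(axis=0)
    return np.abs((v_centered[:, None] * E_centered).sum(axis=0)).sum()

# Toy example (hypothetical numbers): 4 training patterns, a candidate
# unit's net input per pattern, and the residual errors of a 2-output net.
net_in = np.array([-2.0, -0.5, 0.5, 2.0])
E = np.array([[0.3, -0.1], [0.1, 0.0], [-0.1, 0.1], [-0.3, 0.2]])

for f in (sigmoid, np.tanh):
    print(f.__name__, candidate_score(f(net_in), E))
```

On this toy data the tanh candidate scores higher than the sigmoid one because its activation spans a wider, zero-centered range; the choice of activation function changing which candidate wins is exactly why the paper compares the two on benchmark problems.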


(Recurrent) Cascade Correlation Algorithm; Activation Function; Sigmoid Function; Hyperbolic Tangent Function


  1. F. Dandurand, V. Berthiaume, and T. R. Shultz, "A systematic comparison of flat and standard cascade-correlation using a student-teacher network approximation task," Connection Science, Vol.19, Issue3, pp.223-244, 2007.
  2. S. E. Fahlman and C. Lebiere, "The cascade-correlation learning architecture," in D. S. Touretzky, Ed., Advances in Neural Information Processing Systems 2, Morgan Kaufmann, 1990.
  3. S. E. Fahlman, "The Recurrent Cascade-Correlation Architecture," Advances in Neural Information Processing Systems 3, Morgan Kaufmann, pp.190-198, 1991.
  4. X. Z. Gao, X. Wang, and S. J. Ovaska, "A novel hybrid optimization method with application in Cascade-Correlation neural network training," Proceedings of the 8th International Conference on Hybrid Intelligent Systems, Article No.4626728, pp.793-800, 2008.
  5. B. Hammer, A. Micheli, and A. Sperduti, "Universal approximation capability of cascade correlation for structures," Neural Computation, Vol.17, Issue5, pp.1109-1159, 2005.
  6. T. D. Le, T. Komeda, and M. Takagi, "Knowledge-based recurrent neural networks in reinforcement learning," Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, pp.169-174, 2007.
  7. L. Prechelt, PROBEN1: A Set of Neural Network Benchmark Problems and Benchmarking Rules, Technical Report, University of Karlsruhe, 1999(9).
  8. Stuttgart Neural Network Simulator(SNNS), User Manual, Version 4.0, Institute for Parallel and Distributed High Performance Systems (IPVR), University of Stuttgart, 1998.
  9. 이상화, "Generalization of the Cascade Correlation Algorithm and an Experiment Using a New Activation Function," Journal of KIISE (정보과학회논문지), Vol.25, No.7(B), 1998.
  10. 이상화, 송해상, "Generalization of the Recurrent Cascade Correlation Algorithm and a Morse Signal Experiment Using a New Activation Function," Journal of the Korea Intelligent Information Systems Society (한국지능정보시스템학회논문지), Vol.10, No.2, pp.53-63, 2004(11).