A New Recurrent Neural Network Architecture for Pattern Recognition and Its Convergence Results

  • Published: 1996.03.01

Abstract

In this paper, we propose a new recurrent neural network architecture in which each output unit is connected to itself and fully connected to the other output units and all hidden units. The proposed network differs from Jordan's and Elman's recurrent networks in both function and architecture, because it was originally extended from the multilayer feedforward neural network to improve discrimination and generalization power. We also prove the convergence property of the learning algorithm for the proposed network, and we analyze its performance through recognition experiments on the totally unconstrained handwritten numeral database of Concordia University, Canada. The experimental results confirm that the proposed recurrent neural network improves discrimination and generalization power in recognizing spatial patterns.
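The connectivity described above — each output unit fed back to itself and to every other output unit, on top of the usual hidden-to-output connections of a feedforward MLP — can be sketched as a forward pass in NumPy. This is a minimal illustration under assumed choices (sigmoid activations, random weights, fixed-point iteration on a static input pattern); the dimension names and weight matrices are hypothetical, not the paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
n_in, n_hid, n_out = 4, 6, 3

# Feedforward weights, as in a standard MLP
W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))   # input  -> hidden
W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

# Recurrent weights among output units: the diagonal gives each output
# unit its self-connection, the off-diagonal entries fully connect it
# to the other output units
W_oo = rng.normal(scale=0.1, size=(n_out, n_out))

def step(x, y_prev):
    """One forward pass: hidden layer as in a feedforward network;
    the output layer receives hidden activations plus fed-back outputs."""
    h = sigmoid(W_ih @ x)
    y = sigmoid(W_ho @ h + W_oo @ y_prev)
    return y

# For a spatial (static) pattern, iterate the output feedback a few
# times so the output activations can settle
x = rng.normal(size=n_in)
y = np.zeros(n_out)
for _ in range(10):
    y = step(x, y)
```

The key difference from Jordan- or Elman-style networks is where the feedback lands: here the previous outputs feed the output layer itself rather than a separate context layer feeding the hidden units.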

Keywords