Complex Systems

Training Recurrent Neural Networks via Trajectory Modification

D. Saad
Faculty of Engineering, Tel Aviv University, 69978, Israel
Current address: Department of Physics, University of Edinburgh, J. C. Maxwell Building, Mayfield Road, Edinburgh EH9 3JZ, UK.

Abstract

Trajectory modification (TRAM) is an algorithm for training recurrent neural networks that modifies both the network representations at each time step and the common weight matrix. It generalizes the energy-minimization formalism for training feed-forward networks via modification of their internal representations. In a previous paper we showed that the same formalism leads to the back-propagation algorithm for continuous neurons and to a generalization of the CHIR training procedure for binary neurons. The TRAM algorithm adopts a similar approach for training recurrent neural networks with stable endpoints, whereby the network representations at each time step may be modified in parallel with the weight matrix. The analysis demonstrates consistency with other training algorithms in the continuous-valued case, while for the discrete case it yields the TRAM learning procedure, which represents an entirely different concept. Computer simulations carried out for the restricted cases of the parity and teacher-net problems show rapid convergence of the algorithm.
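The abstract stops short of the update equations, so the sketch below is only a rough Python/NumPy illustration of the alternating scheme it describes: nudge the per-time-step representations toward states consistent with the desired stable endpoint, then refit the single weight matrix shared across time steps. Everything here (the tanh recurrence, the backward nudging rule, the step sizes eta_x and eta_w, the helper roll_out) is an assumption made for illustration, not the TRAM equations from the paper.

# Minimal sketch (NOT the paper's exact TRAM equations): alternate between
# (a) modifying the internal representation at each time step toward values
#     that reduce the endpoint error, and
# (b) moving the single shared weight matrix toward reproducing the
#     modified trajectory. All names and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 5                               # number of neurons, trajectory length
W = rng.normal(scale=0.5, size=(N, N))    # common weight matrix
x0 = rng.normal(size=N)                   # initial state (held fixed)
target = np.sign(rng.normal(size=N))      # desired stable endpoint

def roll_out(W, x0, T):
    """Run the recurrence x_{t+1} = tanh(W x_t) and return the trajectory."""
    traj = [x0]
    for _ in range(T):
        traj.append(np.tanh(W @ traj[-1]))
    return traj

eta_x, eta_w = 0.5, 0.1
for epoch in range(200):
    traj = roll_out(W, x0, T)
    if np.allclose(np.sign(traj[-1]), target):
        break
    # (a) Trajectory modification: pull the endpoint toward the target and
    #     propagate the modification backward through the earlier time steps,
    #     leaving the initial state x0 untouched.
    desired = list(traj)
    desired[T] = traj[T] + eta_x * (target - traj[T])
    for t in range(T, 1, -1):
        pre = np.arctanh(np.clip(desired[t], -0.999, 0.999))
        desired[t - 1] = traj[t - 1] + eta_x * (W.T @ (pre - W @ traj[t - 1]))
    # (b) Weight modification: gradient step moving the common W toward
    #     mapping each modified state onto its successor.
    for t in range(T):
        pre = np.arctanh(np.clip(desired[t + 1], -0.999, 0.999))
        W += eta_w * np.outer(pre - W @ desired[t], desired[t])

print("endpoint:", np.sign(roll_out(W, x0, T)[-1]), "target:", target)

The two phases mirror the abstract's statement that the representations at each time step may be modified in parallel with the weight matrix; a faithful implementation would follow the paper's energy-minimization derivation rather than this heuristic.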