Complex Systems

Learning by CHIR without Storing Internal Representations

Dimitry Nabutovsky
Tal Grossman
Eytan Domany
Department of Electronics, Weizmann Institute of Science,
Rehovot 76100 Israel

Abstract

A new learning algorithm for feedforward networks, learning by choice of internal representations (CHIR), was recently introduced [1,2]. Whereas many algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. The algorithm applies a search procedure in the space of internal representations, together with cooperative adaptation of the weights (e.g., by perceptron learning), but it requires tentative guesses of the internal representations to be stored in memory. Here we present a new version, CHIR2, which eliminates the need to store internal representations and is, at the same time, faster than the original algorithm. We first describe a basic version of CHIR2, tailored for networks with a single output unit and one hidden layer. We tested it on three problems---contiguity, symmetry, and parity---and compared its performance with backpropagation. For all these problems our algorithm is 30--100 times faster than backpropagation, and, most significantly, its learning time increases more slowly with system size. Next, we show how to modify the algorithm for networks with several output units and more than one hidden layer. This version is tested on the combined parity + symmetry problem and on the random associations task. A third modification of the new algorithm, suitable for networks with binary weights (all weights and thresholds equal to ±1), is also described, and tests of its performance on the parity and the random teacher problems are reported.
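
As an illustration of the weight-adaptation ingredient named above, the following sketch shows how a single ±1 hidden unit could be adapted by a standard perceptron rule toward a target bit of a chosen internal representation. The function names, the learning rate, and the single-pattern loop are assumptions made for this example and are not taken from the paper.

    import numpy as np

    # Illustrative sketch only: the abstract names two ingredients, a search
    # over internal representations and perceptron-rule weight adaptation.
    # The rule below adapts one +-1 unit toward a desired output bit; all
    # names and parameters are assumptions made for this example.

    def sign(x):
        return 1 if x >= 0 else -1

    def perceptron_update(w, theta, x, target, eta=1.0):
        # One perceptron-rule step: if the unit's +-1 output on input x
        # disagrees with the desired +-1 target, move (w, theta) toward it.
        if sign(np.dot(w, x) - theta) != target:
            w = w + eta * target * x
            theta = theta - eta * target
        return w, theta

    # Example: adapt one hidden unit so that its output on a single input
    # pattern matches the bit assigned to it by an internal representation.
    rng = np.random.default_rng(0)
    x = rng.choice([-1.0, 1.0], size=6)      # +-1 input pattern
    w = rng.normal(size=6)                   # hidden-unit weights
    theta = 0.0                              # hidden-unit threshold
    desired_bit = 1                          # chosen internal-representation bit
    for _ in range(10):
        w, theta = perceptron_update(w, theta, x, desired_bit)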