Complex Systems

The Choice Problem: Neural Network Learning, Generalization, and Geometry

Alan J. Katz
Dean R. Collins
Marek Lugowski
Texas Instruments Incorporated,
Central Research Laboratories, Dallas, Texas 75265, USA

Abstract

We study the learning and generalization capabilities of layered feed-forward neural networks for mappings that take arbitrarily long bit strings into one of three ordered outputs. Many signal-processing applications reduce to this problem. We train the networks with the back-propagation learning algorithm. We describe these mappings in terms of collections of linear partitions of the input space and of the state spaces of the hidden layers of neurons. This description accounts for the properties of the mappings and suggests that learning and generalization are enhanced by training with boundary points of the input space. Several examples are included. We close with implications for layered feed-forward networks in general.
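As a concrete illustration of the setup described above, the following is a minimal sketch of a layered feed-forward network trained by back-propagation to map bit strings into one of three ordered outputs. Everything specific here is an assumption for illustration, not taken from the paper: the bit-string length is fixed at 8 (the paper considers arbitrarily long strings), the task (classifying a string by its count of ones into low/medium/high) is invented, and the architecture (one hidden layer of 16 tanh units with a softmax output) and training details are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (an assumption, not the paper's): classify an 8-bit string by
# its number of ones into three ordered classes:
#   class 0: 0-2 ones, class 1: 3-5 ones, class 2: 6-8 ones.
N_BITS = 8

def label(x):
    s = int(x.sum())
    return 0 if s <= 2 else (1 if s <= 5 else 2)

# All 256 bit strings; train on a subset and test generalization on the rest.
X = np.array([[(i >> b) & 1 for b in range(N_BITS)] for i in range(256)], float)
y = np.array([label(x) for x in X])
idx = rng.permutation(256)
train, test = idx[:192], idx[192:]

# One hidden layer (tanh), softmax output, cross-entropy loss.
H = 16
W1 = rng.normal(0.0, 0.5, (N_BITS, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 3));      b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

# Full-batch back-propagation (plain gradient descent).
lr = 0.5
for epoch in range(2000):
    h, p = forward(X[train])
    g = p.copy()
    g[np.arange(len(train)), y[train]] -= 1.0   # dL/dz for softmax + CE
    g /= len(train)
    dW2 = h.T @ g;           db2 = g.sum(0)
    dh  = g @ W2.T * (1 - h**2)                 # tanh derivative
    dW1 = X[train].T @ dh;   db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = (forward(X[test])[1].argmax(1) == y[test]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Because the class boundaries of this toy task are thresholds on the sum of the input bits, the hidden units only need to realize a few linear partitions of the input hypercube, which loosely echoes the paper's geometric description of such mappings.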