Stephen M. Omohundro
Department of Computer Science and
Center for Complex Systems Research,
University of Illinois at Urbana-Champaign,
508 South Sixth Street, Champaign, IL 61820, USA
Neural network models are currently being considered for a wide variety of important computational tasks, particularly those involving imprecise inputs. This paper suggests alternative algorithms for many of these tasks that appear to offer much better average-case performance than standard neural network models. For example, these algorithms could provide a billionfold speed increase in an implementation of an associative memory with roughly human capacity. They are based on hierarchical data structures from computational geometry and use a more direct representation of information than neural networks. As in many neural network models, the modules proposed here "learn" and can adapt themselves to different statistical distributions of inputs. They can be applied to problems involving classification, clustering, dimension reduction, and the learning of nonlinear mappings. They can be implemented efficiently on both serial and parallel computers, and can potentially be used in practical applications ranging from speech and optical character recognition to robot manipulator control, data prediction, and document retrieval.
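To make the abstract's central idea concrete: a canonical hierarchical data structure from computational geometry is the k-d tree, which supports nearest-neighbour lookup in logarithmic expected time rather than the linear scan implicit in many associative-memory models. The following sketch is illustrative only and is not taken from the paper; the specific structures the paper develops may differ.

```python
# Minimal k-d tree with branch-and-bound nearest-neighbour search.
# A sketch of the kind of hierarchical geometric structure the abstract
# alludes to, not the paper's actual construction.
import math


class Node:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right


def build(points, depth=0):
    """Recursively split the point set on alternating coordinate axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))


def nearest(node, query, best=None):
    """Descend toward the query point, then prune any subtree whose
    splitting plane lies farther away than the best match found so far."""
    if node is None:
        return best
    d = math.dist(node.point, query)
    if best is None or d < best[1]:
        best = (node.point, d)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, query, best)
    if abs(diff) < best[1]:  # hypersphere around query crosses the plane
        best = nearest(far, query, best)
    return best


tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
point, dist = nearest(tree, (9, 2))  # point is (8, 1)
```

Because each recursive call discards roughly half of the remaining points when the data are well distributed, a query touches O(log n) nodes on average; this is the kind of average-case advantage over exhaustive, network-style retrieval that the abstract claims.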