Complex Systems

Coarse-Coded Symbol Memories and Their Properties

Ronald Rosenfeld
David S. Touretzky
Computer Science Department, Carnegie Mellon University,
Pittsburgh, PA 15213, USA

Abstract

Coarse-coded symbol memories have appeared in several neural network symbol processing models. They are static memories that use overlapping codes to store multiple items simultaneously. In order to determine how these models would scale, one must first have some understanding of the mathematics of coarse-coded representations. The general structure of coarse-coded symbol memories is defined, and their strengths and weaknesses are discussed. Memory schemes can be characterized by their memory size, symbol-set size, and capacity. We derive mathematical relationships between these parameters for various memory schemes, using both analysis and numerical methods. We find a simple linear relationship between the resources allocated to the system and the capacity they yield. The predicted capacity of one of the schemes is compared with actual measurements of the coarse-coded working memory of DCPS, Touretzky and Hinton's distributed connectionist production system. Finally, we present a heuristic algorithm for generating receptive fields that is efficient and produces good results in practice.
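To make the storage scheme described above concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): each symbol is assigned an overlapping set of memory units (its receptive-field pattern), a symbol is stored by activating the union of its pattern with the current memory state, and retrieval checks how much of each symbol's pattern is active. The class name, pattern sizes, and threshold are illustrative assumptions only.

```python
import random

class CoarseCodedMemory:
    def __init__(self, num_units, pattern_size, symbols, seed=0):
        rng = random.Random(seed)
        self.num_units = num_units
        # Assign each symbol a random, overlapping set of units (its receptive field).
        self.patterns = {
            s: frozenset(rng.sample(range(num_units), pattern_size))
            for s in symbols
        }
        self.active = set()  # units currently turned on

    def store(self, symbol):
        """Store a symbol by activating every unit in its pattern."""
        self.active |= self.patterns[symbol]

    def retrieve(self, threshold=1.0):
        """Report symbols whose patterns are (at least) threshold-fraction active."""
        present = []
        for s, pat in self.patterns.items():
            overlap = len(pat & self.active) / len(pat)
            if overlap >= threshold:
                present.append(s)
        return present

# Example usage: store a few symbols, then read the memory back.
mem = CoarseCodedMemory(num_units=100, pattern_size=10,
                        symbols=[f"S{i}" for i in range(25)])
for s in ["S0", "S3", "S7"]:
    mem.store(s)
print(mem.retrieve())
```

In such a scheme, storing too many items causes overlapping patterns to fully activate symbols that were never stored; the number of items that can be held before this happens is the capacity whose dependence on memory size and symbol-set size the paper analyzes.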