## Comparison of Self-Organization and Optimization in Evolution and Neural Network Models

**Elgar E. Pichler**

**James D. Keeler**
*Permanent address: Microelectronics and Computer Technology Corporation, 3500 West Balcones Center Drive, Austin, TX 78759, USA.*

**John Ross**
*Chemistry Department, Stanford University, Stanford, CA 94305, USA*

#### Abstract

The Eigen model of macromolecular evolution is compared with the Little–Hopfield neural network model with discrete-state neurons. The similarity of these systems is shown by describing both as Ising spin models. Both systems show self-organizing behavior in certain parameter regions. Energies for the individual states can be defined in such a way that self-organization results in localization around states with small energies. Therefore, both models can be used as optimization algorithms for complex combinatorial problems, and they can be interpreted as special cases of a more general optimization algorithm. The self-organization process depends on nonnegative transition frequency matrices, which describe transitions of the systems from one state to another. The transition frequencies are functions of a Gaussian noise source, which models an internal temperature. In both cases a nonzero standard deviation $\sigma$ of the noise distribution is necessary for an exhaustive search of the state space. However, the neural network can find local minima of the global energy function at $\sigma = 0$, whereas the evolution model cannot do this, because noise has to be present for migration to new states in the evolutionary run. Maximal values of $\sigma$, $\sigma_{\max}$, above which no self-organization occurs, are given for both systems. In the evolution model, $\sigma_{\max}$ decreases sharply with the size of the system. Up to a certain system size, self-organization in the evolution model remains possible even at noise levels higher than the critical noise level of the neural network model. External requirements determine effective noise thresholds which are lower than the critical noise levels. Both systems relax to stationary states. 
Numerical simulations show that the dependence of the stationary state and the relaxation times on system parameters is different in the two models, although the same energy function is used for optimization, and that the probability of obtaining good solutions to optimization problems may be higher during the relaxation process than at the stationary state.
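To illustrate the neural-network side of the comparison, the following is a minimal sketch (not from the paper) of a Little–Hopfield network with discrete-state neurons, where asynchronous updates are perturbed by additive Gaussian noise of standard deviation $\sigma$. At $\sigma = 0$ the dynamics descend deterministically to a local minimum of the global energy function, as claimed in the abstract. The Hebbian single-pattern weights and all parameter values here are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopfield_energy(s, W):
    # Global energy E = -(1/2) s^T W s of the discrete-state network.
    return -0.5 * s @ W @ s

def relax(s, W, sigma, steps=2000):
    # Asynchronous updates with Gaussian noise of std sigma added to the
    # local field; at sigma = 0 this is deterministic energy descent.
    s = s.copy()
    n = len(s)
    for _ in range(steps):
        i = rng.integers(n)
        h = W[i] @ s + sigma * rng.standard_normal()
        s[i] = 1 if h >= 0 else -1
    return s

# Hebbian weights storing one random +/-1 pattern (hypothetical setup).
n = 50
pattern = rng.choice([-1, 1], size=n)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt 10 of the 50 units, then relax at zero noise.
noisy = pattern.copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1

recovered = relax(noisy, W, sigma=0.0)
print(hopfield_energy(recovered, W) <= hopfield_energy(noisy, W))  # True
```

With a nonzero `sigma`, the same routine performs a stochastic search of the state space; the abstract's critical noise level corresponds to the value of $\sigma$ above which localization around low-energy states is lost.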