Complex Systems

Volume 31, Number 4 (2022)


Characterization of Single Length Cycle Two-Attractor Cellular Automata Using Next-State Rule Minterm Transition Diagram
Suvadip Hazra and Mamata Dalui

Cellular automata (CAs) are simple mathematical models that are being used effectively to analyze and understand the behavior of complex systems. Researchers from a wide range of fields are interested in CAs due to their potential for representing a variety of physical, natural and real-world phenomena. Three-neighborhood one-dimensional CAs, a special class of CAs, have been utilized to develop applications in the fields of very large-scale integration (VLSI) design, error-correcting codes, test pattern generation, cryptography and others. A thorough analysis of a three-neighborhood cellular automaton (CA) with two states per cell is presented in this paper. A graph-based tool called the next-state rule minterm transition diagram (NSRTD) is presented for analyzing the state transition behavior of CAs with fixed points. A linear-time mechanism is proposed for synthesizing a special class of irreversible CAs, referred to as single length cycle two-attractor CAs (TACAs), which have exactly two fixed points.
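
As an illustrative aside (this is not the authors' NSRTD construction), the following Python sketch enumerates the state transitions of a small three-neighborhood, two-state CA under a given Wolfram rule number and lists its fixed points, i.e., its single length cycle attractors; a null boundary is assumed here.

    # Hypothetical illustration: brute-force search for fixed points (single
    # length cycle attractors) of an n-cell, 3-neighborhood, 2-state CA.
    # Null boundary and the standard Wolfram rule-number convention assumed.

    def next_state(config, rule, n):
        """Apply the rule once to a configuration encoded as an n-bit integer."""
        out = 0
        for i in range(n):
            left = (config >> (i + 1)) & 1 if i + 1 < n else 0   # null boundary
            center = (config >> i) & 1
            right = (config >> (i - 1)) & 1 if i > 0 else 0      # null boundary
            if (rule >> ((left << 2) | (center << 1) | right)) & 1:
                out |= 1 << i
        return out

    def fixed_points(rule, n):
        """Return every configuration mapped to itself (a length-1 cycle)."""
        return [c for c in range(2 ** n) if next_state(c, rule, n) == c]

    if __name__ == "__main__":
        # A rule can be a TACA on n cells only if this list has exactly two
        # entries (and every other state eventually reaches one of them).
        for rule in (90, 150, 232):
            fps = fixed_points(rule, 4)
            print(f"rule {rule}: {len(fps)} fixed point(s): {fps}")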

Keywords: cellular automata; TACA; NSRTD

Cite this publication as:
S. Hazra and M. Dalui, “Characterization of Single Length Cycle Two-Attractor Cellular Automata Using Next-State Rule Minterm Transition Diagram,” Complex Systems, 31(4), 2022, pp. 363–388.
https://doi.org/10.25088/ComplexSystems.31.4.363


Combining Algorithmic Information Dynamics Concepts and Machine Learning for Electroencephalography Analysis: What Can We Get?
Victor Iapascurta

Electroencephalography (EEG), as an example of electrophysiological monitoring, has a rather long history of successful application to the diagnosis and treatment of disease, and this success would not have been possible without effective methods of mathematical and, more recently, computer analysis. Most of these methods are based on statistics. Among the methods of EEG analysis, there is a group that uses different versions of Shannon’s entropy estimation as a “main component” and that does not differ significantly from traditional statistical approaches. Despite an external similarity, a different approach is to use the Kolmogorov–Chaitin definition of complexity and the concepts of algorithmic information dynamics. The algorithmic information dynamics toolbox includes techniques (e.g., the block decomposition method) that appear to be applicable to EEG analysis. The current paper is an attempt to use the block decomposition method along with recent machine learning approaches to the management of EEG data, with the ultimate goal of making these data more useful to researchers and medical practitioners.
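
As a toy illustration of the block decomposition idea (not the paper's analysis pipeline), the sketch below binarizes a signal, splits it into fixed-size blocks and aggregates per-block complexity estimates with the usual BDM formula, BDM(x) = Σ_b [CTM(b) + log2(count(b))]; the CTM lookup here is a hypothetical stand-in, since real values come from precomputed Coding Theorem Method tables.

    # Toy sketch of a block-decomposition-style estimate for a 1-D signal.
    # The ctm_estimate function below is a placeholder, NOT real CTM values.
    import math
    import random
    from collections import Counter

    def binarize(signal):
        """Threshold a real-valued trace at its median to get a binary string."""
        med = sorted(signal)[len(signal) // 2]
        return "".join("1" if x > med else "0" for x in signal)

    def ctm_estimate(block):
        """Hypothetical stand-in for a Coding Theorem Method table lookup."""
        return float(len(set(block)))   # illustration only

    def bdm(bits, block_size=12):
        """Sum CTM(b) + log2(multiplicity of b) over the unique blocks b."""
        blocks = [bits[i:i + block_size]
                  for i in range(0, len(bits) - block_size + 1, block_size)]
        return sum(ctm_estimate(b) + math.log2(n)
                   for b, n in Counter(blocks).items())

    if __name__ == "__main__":
        random.seed(0)
        trace = [random.gauss(0.0, 1.0) for _ in range(1200)]  # stand-in for one EEG channel
        print("BDM-style estimate:", bdm(binarize(trace)))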

Keywords: biomedical signal processing; electroencephalography; algorithmic complexity; algorithmic information dynamics; block decomposition method; machine learning; deep learning  

Cite this publication as:
V. Iapascurta, “Combining Algorithmic Information Dynamics Concepts and Machine Learning for Electroencephalography Analysis: What Can We Get?,” Complex Systems, 31(4), 2022, pp. 389–413.
https://doi.org/10.25088/ComplexSystems.31.4.389


Operator Representation and Class Transitions in Elementary Cellular Automata
Muhamet Ibrahimi, Arda Güçlü, Naide Jahangirov, Mecit Yaman, Oguz Gülseren and Seymur Jahangirov

We exploit the mirror and complementary symmetries of elementary cellular automata (ECAs) to rewrite their rules in terms of logical operators. The operator representation based on these fundamental symmetries enables us to construct a periodic table of ECAs that maps all unique rules into clusters of similar asymptotic behavior. We also expand the elementary cellular automaton (ECA) dynamics by introducing a parameter that scales the pace with which operators iterate the system. As this parameter is tuned continuously, further emergent behavior in ECAs is unveiled: several rules undergo multiple phase transitions between periodic, chaotic and complex (class 4) behavior. This extension provides an environment for studying class transitions and complex behavior in ECAs. Moreover, the emergence of class 4 structures can potentially enlarge the capacity of many ECA rules for universal computation.
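
The symmetry bookkeeping behind such a classification can be illustrated with a short Python sketch (this is not the paper's operator formalism): applying the mirror and complementary transformations to each of the 256 ECA rule numbers and grouping rules that map into one another recovers the well-known 88 equivalence classes.

    # Illustration: group the 256 ECA rules under mirror (left-right
    # reflection) and complement (0 <-> 1) symmetries.

    def rule_output(rule, l, c, r):
        """Output bit of a Wolfram rule for neighborhood (l, c, r)."""
        return (rule >> ((l << 2) | (c << 1) | r)) & 1

    def transform(rule, mirrored=False, complemented=False):
        """Return the rule number after optional mirror and/or complement."""
        out = 0
        for l in (0, 1):
            for c in (0, 1):
                for r in (0, 1):
                    a, b, d = (r, c, l) if mirrored else (l, c, r)
                    if complemented:
                        bit = 1 - rule_output(rule, 1 - a, 1 - b, 1 - d)
                    else:
                        bit = rule_output(rule, a, b, d)
                    if bit:
                        out |= 1 << ((l << 2) | (c << 1) | r)
        return out

    def equivalence_classes():
        classes = {}
        for rule in range(256):
            orbit = {transform(rule, m, k) for m in (False, True) for k in (False, True)}
            classes.setdefault(min(orbit), set()).update(orbit)
        return classes

    if __name__ == "__main__":
        print(len(equivalence_classes()), "classes among the 256 ECA rules")  # expect 88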

Keywords: elementary cellular automata; classes of cellular automata; deterministic transition; logistic map; Cantor set

Cite this publication as:
M. Ibrahimi, A. Güçlü, N. Jahangirov, M. Yaman, O. Gülseren and S. Jahangirov, “Operator Representation and Class Transitions in Elementary Cellular Automata,” Complex Systems, 31(4), 2022, pp. 415–432.
https://doi.org/10.25088/ComplexSystems.31.4.415


Formalizing the Use of the Activation Function in Neural Inference
Dalton A. R. Sakthivadivel

We investigate how the activation function can be used to describe neural firing in an abstract way and, in turn, why it works well in artificial neural networks. We discuss how a spike in a biological neuron belongs to a particular universality class of phase transitions in statistical physics. We then show that the artificial neuron is, mathematically, a mean-field model of biological neural membrane dynamics, which arises from modeling spiking as a phase transition. This allows us to treat selective neural firing in an abstract way and to formalize the role of the activation function in perceptron learning. The resulting statistical-physics model allows us to recover the expressions for some known activation functions as special cases. Along with deriving this model and specifying the analogous neural case, we analyze the phase transition to understand the physics of neural network learning. Together, these results show that there is not only a biological meaning but also a physical justification for the emergence and performance of typical activation functions; implications for neural learning and inference are also discussed.
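
The statistical-physics correspondence can be illustrated with a minimal sketch (not the paper's derivation): for a two-state (firing/silent) unit in a field h at inverse temperature β, the mean state is tanh(βh), the probability of the firing state is the logistic sigmoid 1/(1 + e^(-2βh)), and the mean-field self-consistency equation m = tanh(β(Jm + h)) develops a phase transition at βJ = 1.

    # Minimal mean-field (Ising-type) illustration of how tanh and sigmoid
    # activations arise from a two-state unit; not the paper's derivation.
    import math

    def mean_state(h, beta=1.0):
        """Mean value of a +/-1 unit in field h: the tanh activation."""
        return math.tanh(beta * h)

    def firing_probability(h, beta=1.0):
        """Probability of the +1 ("firing") state: the logistic sigmoid."""
        return 1.0 / (1.0 + math.exp(-2.0 * beta * h))

    def self_consistent_m(J, h, beta=1.0, iters=200):
        """Iterate the mean-field equation m = tanh(beta * (J * m + h))."""
        m = 0.5
        for _ in range(iters):
            m = math.tanh(beta * (J * m + h))
        return m

    if __name__ == "__main__":
        for h in (-1.0, 0.0, 1.0):
            print(f"h={h:+.1f}  tanh={mean_state(h):+.3f}  "
                  f"sigmoid={firing_probability(h):.3f}  "
                  f"m*(J=1.5)={self_consistent_m(1.5, h):+.3f}")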

Keywords: neural networks; Ising model; phase transitions; perceptrons

Cite this publication as:
D. A. R. Sakthivadivel, “Formalizing the Use of the Activation Function in Neural Inference,” Complex Systems, 31(4), 2022, pp. 433–449.
https://doi.org/10.25088/ComplexSystems.31.4.433


Complex Systems ISSN 0891-2513
© 1987–2023
Complex Systems Publications, Inc.

Published four times annually
Complex Systems Publications, Inc.
P.O. Box 6149
Champaign, IL 61826 USA

info@complex-systems.com
