Neuron Operations Compatible with the Physics of a Super-Turing Computational Model
ORAL
Abstract
A. Newell developed an early theory of cognition, fully described in his 1990 book Unified Theories of Cognition. He realized that brain modeling requires an uncountably infinite state space, which the countably infinite state space of a Universal Turing Machine (UTM) cannot supply. H. Siegelmann proved that three recurrent neural network models provide state spaces beyond that of a UTM. One of them embodies the constraints of the physical universe: it assumes rational weights and stochastic signals, and the universe correspondingly supplies quantized charges and noise sources. Neurons trigger on finite numbers of discrete charges or neurotransmitter packets, which gives them rational weights, and they operate in a noisy environment. Consistent with the BPP/log* complexity class proven by Siegelmann, biological neurons can therefore compute in an uncountably infinite state space. Recent noise-enhanced simulations of digital (rational-weight) neural networks have identified the noise magnitudes that best allow them to mimic chaos, a result consistent with super-Turing operation. Neurons appear to have found a super-Turing complexity level that makes our brains more creative and versatile than computers. Low-power microcontrollers can simulate the analog input and spiking output(s) of a neuron, and computer simulations of small networks using Hebbian learning are in progress.
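As a rough illustration only (not the authors' code), the following minimal Python sketch combines the ingredients named above: rational (quantized) weights, injected noise of a tunable magnitude, and a simple Hebbian weight update. The network size, noise level, quantization denominator, and learning rate are illustrative assumptions, not values from the cited papers.

    # Minimal sketch of a small recurrent network with rational weights,
    # injected noise, and Hebbian learning. All constants are assumptions
    # for illustration, not parameters from the cited publications.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 8              # number of model neurons (assumed)
    NOISE_MAG = 0.01   # injected-noise magnitude (assumed; the cited work
                       # searches for an optimal value, not given here)
    ETA = 0.001        # Hebbian learning rate (assumed)

    # Rational weights: quantize random reals to a fixed denominator,
    # mirroring the finitely many discrete charges / neurotransmitter
    # packets that make biological weights rational.
    DENOM = 256
    W = np.round(rng.uniform(-1, 1, (N, N)) * DENOM) / DENOM

    def step(x, W):
        """One recurrent update with additive noise and a saturating
        (piecewise-linear) activation, as in Siegelmann-style networks."""
        noise = NOISE_MAG * rng.uniform(-1, 1, N)  # stochastic signal
        return np.clip(W @ x + noise, 0.0, 1.0)

    def hebbian_update(W, x_pre, x_post):
        """Plain Hebbian rule: strengthen weights between co-active units."""
        return W + ETA * np.outer(x_post, x_pre)

    x = rng.uniform(0, 1, N)
    for _ in range(100):
        x_new = step(x, W)
        W = hebbian_update(W, x, x_new)
        x = x_new

    print("final state:", np.round(x, 3))

The uniform injected noise stands in for the stochastic signals of Siegelmann's physically realizable model; in practice its magnitude would be swept to find the value that best lets the network mimic chaotic behavior.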
Publications:
Redd, Emmett, Steven Senger, and Tayo Obafemi-Ajayi. "Noise optimizes super-Turing computation in recurrent neural networks." Physical Review Research 3, no. 1 (2021): 013120.
Redd, Emmett, and Tayo Obafemi-Ajayi. "Noise Quality and Super-Turing Computation in Recurrent Neural Networks." In International Conference on Artificial Neural Networks, pp. 469-478. Springer, Cham, 2021.
Presenters
Emmett R Redd
Missouri State University
Authors
Emmett R Redd
Missouri State University