A finite-size ergodic theory of stable chaos for quantifying information processing in balanced state networks of spiking neurons
ORAL
Abstract
The stability of a dynamics constrains its capacity to process information, a notion that the ergodic theory of chaos is intended to capture and one likely to be important for neuroscience. Asynchronous, irregular network activity can be produced by models in which excitatory and inhibitory inputs are balanced [1]. When the interactions are negative and sharply pulsed, these networks turn out to be stable. The coexistence of aperiodic activity and stability is called stable chaos [2]. This stability, however, only holds for perturbations up to some finite average strength, beyond which the dynamics is unstable [3]. This finite-size instability produces entropy not captured by conventional ergodic theory. We derive the probability of divergence as a function of perturbation strength and use it to obtain an expression for a finite-size analogue of the Kolmogorov-Sinai (KS) entropy that scales with the perturbation strength and thus deviates from the conventional KS entropy value of 0. This work provides a foundation for understanding the information-processing capacity of networks in the fast-synapse, fast action-potential-onset, and inhibition-dominated regime.\\[4pt] [1] van Vreeswijk, C. \& Sompolinsky, H., Science 274:1724-1726 (1996).\\[0pt] [2] Politi, A. et al., EPL 22, 8 (1993).\\[0pt] [3] Monteforte, M. \& Wolf, F., PRX 2, 1 (2012).
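To illustrate the idea of an entropy rate that scales with perturbation strength, the following is a minimal sketch. It assumes a hypothetical sigmoidal form for the divergence probability (parameters `eps_c`, `beta`, and the entropy rate `h_unstable` of the unstable dynamics are all illustrative choices, not the abstract's derived expressions):

```python
import math

def p_divergence(eps, eps_c=1.0, beta=4.0):
    """Hypothetical probability that a perturbation of average strength
    eps kicks the network out of the stable (flux-tube) regime.
    Sigmoidal in log(eps) around a critical strength eps_c (assumption)."""
    if eps <= 0.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(-beta * math.log(eps / eps_c)))

def finite_size_ks_entropy(eps, h_unstable=10.0):
    """Finite-size analogue of the KS entropy (illustrative form only):
    the conventional KS entropy of stable chaos is 0, but finite-strength
    perturbations contribute entropy in proportion to the probability
    of divergence."""
    return p_divergence(eps) * h_unstable
```

For vanishing perturbation strength this quantity recovers the conventional KS entropy value of 0, and it grows monotonically with `eps`, crossing `h_unstable / 2` at the critical strength `eps_c`.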
Authors
-
Maximilian Puelma Touzel
Max Planck Institute for Dynamics and Self-organization
-
Michael Monteforte
Max Planck Institute for Dynamics and Self-organization
-
Fred Wolf
Max Planck Institute for Dynamics and Self-organization