Driving the most marginally stable variables as a paradigm for learning, memory, and optimization
ORAL
Abstract
It is shown that an extremally driven dynamics, which in each update selectively moves only the most unstable variables without prescribing any specific target state, can achieve a highly correlated state with long-term memory that allows it to accomplish complex artificial-intelligence tasks, such as solving hard optimization problems, better than current machine-learning methods [1]. As an example, we discuss the approximation of spin-glass ground states, an NP-hard combinatorial problem, with the Extremal Optimization (EO) heuristic [2]. Inspired by the extremally driven dynamics in the Bak-Sneppen model of Self-Organized Criticality, the individual spin variables are ranked according to their energetic cost to the overall Hamiltonian and selectively updated with a bias towards moving poorly adapted variables. Simulations of EO on spin-glass instances show that such a ranking of variables achieves broadly distributed return times between activations of variables, thus preserving a hierarchy of memories of well-adapted variables within a configuration, as well as an efficient exploration of complex ("glassy") energy landscapes, exemplified by a stretched-exponential autocorrelation function.
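The rank-based update described above can be illustrated with a minimal sketch of the τ-EO variant: each spin's "fitness" is its local energetic contribution, spins are ranked from worst to best, and a spin of rank k is flipped unconditionally with probability proportional to k^(−τ). This is an illustrative sketch under assumed conventions (a dense coupling matrix `J`, Hamiltonian H = −½ Σ J_ij s_i s_j, and the function name `tau_eo`), not the authors' implementation.

```python
import random

def tau_eo(J, n_steps=10000, tau=1.3, seed=0):
    """Sketch of tau-EO for an Ising spin glass with couplings J (symmetric matrix).

    Hamiltonian assumed: H = -1/2 * sum_{i,j} J[i][j] * s_i * s_j.
    Returns the best configuration and energy found.
    """
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]  # random initial configuration

    def local_field(i):
        return sum(J[i][j] * s[j] for j in range(n) if j != i)

    def energy():
        return -0.5 * sum(s[i] * local_field(i) for i in range(n))

    best_s, best_e = s[:], energy()
    # Power-law weights over ranks 1..n: rank 1 (worst-adapted) is most likely.
    weights = [(k + 1) ** (-tau) for k in range(n)]
    for _ in range(n_steps):
        # Fitness of spin i: its alignment with the local field; low = poorly adapted.
        ranked = sorted(range(n), key=lambda i: s[i] * local_field(i))
        i = rng.choices(ranked, weights=weights)[0]
        s[i] = -s[i]  # flip the selected spin unconditionally (no target state)
        e = energy()
        if e < best_e:  # only bookkeeping; the dynamics itself never converges
            best_e, best_s = e, s[:]
    return best_s, best_e
```

Note the absence of any acceptance criterion or temperature schedule: the bias toward extremal (worst-ranked) variables alone drives the search, while well-adapted variables are rarely selected and so retain their state over long times.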
–
Publication:
[1] Stefan Boettcher, "Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems like Max-Cut", https://arxiv.org/abs/2210.00623
[2] Stefan Boettcher and Allon G. Percus, "Optimization with Extremal Dynamics", Phys. Rev. Lett. 86, 5211 (2001), https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.86.5211
Presenters
-
Stefan Boettcher
Emory University
Authors
-
Stefan Boettcher
Emory University