The dynamics of recurrent neural networks throughout learning at the edge of chaos.
ORAL
Abstract
Recurrent neural networks (RNNs) adaptively learn representations of the natural world by strengthening and weakening the interactions between neurons. Such adaptation depends heavily on the network architecture, whereby more excitable networks, termed networks at criticality, exhibit greater performance. However, the precise role of intrinsic excitability in successful learning remains largely unknown. Here we demonstrate that intrinsic excitability enables RNNs to form more stable and robust internal representations whose geometry converges throughout learning. Specifically, we adaptively train RNNs with different levels of intrinsic excitability to learn a chaotic attractor, and quantify the learning process at each training step. We find that the unstable Lyapunov exponents of RNNs near criticality converge more quickly and robustly to those of the true attractor. Further, we find that the geometry of the embedded manifold, as measured by the Hausdorff distance, converges to the true attractor manifold for RNNs near criticality but does not converge for RNNs far from criticality. Taken together, our results demonstrate that intrinsic excitability enables the formation of stable, convergent representations, thereby providing insight into how complex representations are learned adaptively.
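For context, the geometric comparison described above can be illustrated with a minimal sketch: a reference chaotic attractor (here the Lorenz system, a common choice, though the abstract does not name the target) is compared to a free-running trajectory as point clouds via the symmetric Hausdorff distance, using scipy.spatial.distance.directed_hausdorff. All names, the Lorenz parameters, and the placeholder "RNN trajectory" are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): measure how far a learned trajectory's
# geometry is from a reference chaotic attractor using the Hausdorff distance.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial.distance import directed_hausdorff

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz system, used here as a stand-in for the true chaotic attractor."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Reference attractor: a long trajectory of the Lorenz system.
t_eval = np.linspace(0.0, 50.0, 5000)
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], t_eval=t_eval)
true_attractor = sol.y.T  # shape (T, 3): one point per time step

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# 'rnn_trajectory' stands in for the free-running output of a trained RNN,
# embedded in the same space as the target attractor (purely illustrative:
# here it is just the reference attractor plus noise).
rnn_trajectory = true_attractor + 0.1 * np.random.randn(*true_attractor.shape)
print("Hausdorff distance to true attractor:",
      hausdorff(rnn_trajectory, true_attractor))
```

In the study, one would track this distance at each training step; convergence toward zero (up to noise) would indicate that the learned manifold's geometry approaches that of the true attractor.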
Presenters
- Tala Fakhoury, Columbia University

Authors
- Tala Fakhoury, Columbia University