How recurrent neural networks infer dynamical models from data for prediction, inference, and source separation
ORAL · Invited
Abstract
One of the remarkable properties of neural systems is their ability to flexibly understand and interact with a dynamical world without hard-coded prior models. Given the immense dimensionality and complexity of any specific neural network, is there a theoretical framework that can explain the cognitive capacity of dynamical neural networks in general? In this talk, I introduce a framework in which randomly constructed neural networks learn and manipulate dynamical models from data via generalized synchronization. By extending the concept of synchronization from frequencies and phases to attractor manifolds, this framework explains the mechanism by which recurrent neural networks acquire such models from data. We apply the framework to understand how recurrent neural networks 1) simultaneously learn multiple models, 2) infer unseen variables from partially measured dynamics, 3) separate mixed signals from multiple sources, 4) construct continuous representations from discrete examples, and 5) infer global dynamics from local examples. Together, our results provide a simple but powerful mechanism by which dynamical neural networks can build internal representations of the complex dynamical world, enabling the principled study and better design of artificial intelligence.
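The core idea of learning a dynamical model in a randomly constructed recurrent network can be sketched with a minimal echo state network: a fixed random reservoir is driven by a measured signal until its state becomes a function of the input's history (generalized synchronization), and only a linear readout is trained to predict the signal's next step. The Lorenz system used as the driving data, the network size, and all parameter values below are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative driving signal: the Lorenz system (an assumed example,
# not specified in the abstract), integrated with a simple Euler step.
def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

T = 5000
data = np.empty((T, 3))
s = np.array([1.0, 1.0, 1.0])
for t in range(T):
    s = lorenz_step(s)
    data[t] = s
data = (data - data.mean(axis=0)) / data.std(axis=0)  # center and normalize

# Randomly constructed reservoir: fixed recurrent weights W with spectral
# radius below 1 (so the driven state contracts onto a synchronized response)
# and fixed random input weights W_in.
N = 300
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(N, 3))

# Drive the reservoir with the signal; after a washout, the reservoir state
# is (approximately) a function of the input history alone.
r = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    r = np.tanh(W @ r + W_in @ data[t])
    states[t] = r

# Train only a linear readout, by ridge regression, to map the reservoir
# state at time t to the signal at time t+1.
washout = 200
X, Y = states[washout:-1], data[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

# One-step prediction error of the learned dynamical model.
pred = X @ W_out
err = float(np.sqrt(np.mean((pred - Y) ** 2)))
```

Closing the loop by feeding the readout's prediction back as the next input turns this one-step predictor into an autonomous model of the learned attractor, which is the sense in which the network carries an internal dynamical representation.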
Presenters
- Zhixin Lu, Allen Institute for Brain Science
Authors
- Zhixin Lu, Allen Institute for Brain Science