
Teaching Recurrent Neural Networks to Infer Global Temporal Structure from Local Examples

ORAL

Abstract

The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems adapt to perform analogous functions without needing to be explicitly engineered. However, precisely how neural systems learn to modify these representations remains far from understood. Here we demonstrate that a recurrent neural network (RNN) can learn to modify and infer its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we train an RNN with examples of translated, linearly transformed, or pre-bifurcated time series from a chaotic Lorenz system, and find that it learns to continuously interpolate and extrapolate the translation, transformation, and bifurcation of this representation far beyond the training data by varying a control signal. Further, we demonstrate that RNNs can infer the global bifurcation structure of normal forms and period-doubling routes to chaos, and can extrapolate non-dynamical, kinematic trajectories.
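
The abstract does not specify implementation details. The following is a minimal sketch of the kind of experiment it describes, assuming a reservoir-computing-style RNN driven by Lorenz time series plus a constant control channel, with a ridge-regression readout and closed-loop feedback for extrapolation; the reservoir size, scaling constants, parameter values, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma, beta, rho):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def simulate_lorenz(rho, dt=0.01, n_steps=5000, x0=(1.0, 1.0, 1.0)):
    t_eval = np.arange(n_steps) * dt
    sol = solve_ivp(lorenz, (0.0, t_eval[-1]), x0, t_eval=t_eval,
                    args=(10.0, 8.0 / 3.0, rho), rtol=1e-8)
    return sol.y.T  # shape (n_steps, 3)

# Random recurrent network with an extra constant "control" input channel.
rng = np.random.default_rng(0)
N = 500
A = rng.normal(size=(N, N)) / np.sqrt(N)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius 0.9
W_in = rng.uniform(-0.02, 0.02, size=(N, 3))      # small scale: Lorenz states are O(10)
W_c = rng.uniform(-0.1, 0.1, size=(N,))           # weights on the control signal

def run_reservoir(X, c):
    """Drive the network with time series X and a constant control value c."""
    R = np.zeros((len(X), N))
    r = np.zeros(N)
    for t in range(len(X)):
        r = np.tanh(A @ r + W_in @ X[t] + W_c * c)
        R[t] = r
    return R

# Train on a few local examples: Lorenz attractors at different rho values,
# each paired with a distinct value of the control signal.
train_rhos = [28.0, 35.0, 42.0]
train_cs = [-1.0, 0.0, 1.0]
states, targets = [], []
for rho, c in zip(train_rhos, train_cs):
    X = simulate_lorenz(rho)
    R = run_reservoir(X, c)
    states.append(R[:-1])    # state at time t predicts observation at t+1
    targets.append(X[1:])
R_all = np.vstack(states)
Y_all = np.vstack(targets)

# Ridge regression for the linear readout.
lam = 1e-6
W_out = np.linalg.solve(R_all.T @ R_all + lam * np.eye(N), R_all.T @ Y_all).T

def generate(c, n_steps=3000, x0=np.array([1.0, 1.0, 1.0])):
    """Closed-loop generation: feed predictions back in at control value c."""
    r = np.zeros(N)
    x = x0.copy()
    traj = np.zeros((n_steps, 3))
    for t in range(n_steps):
        r = np.tanh(A @ r + W_in @ x + W_c * c)
        x = W_out @ r
        traj[t] = x
    return traj

# Probe the learned family of dynamics beyond the trained examples
# by setting the control signal outside the training range.
extrapolated = generate(c=2.0)
```

In this kind of setup, only the readout weights are trained; interpolation and extrapolation are then probed simply by sweeping the control value in closed loop and inspecting how the generated dynamics change.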

Presenters

  • Jason Kim

    University of Pennsylvania

Authors

  • Jason Kim

    University of Pennsylvania

  • Zhixin Lu

    University of Pennsylvania

  • Erfan Nozari

    Mechanical Engineering, University of California, Riverside

  • George Pappas

    University of Pennsylvania

  • Danielle Bassett

    Department of Bioengineering and Department of Physics, University of Pennsylvania