Recurrent Neural Networks Learn Simple Computations on Complex Time Series through Examples
ORAL
Abstract
A hallmark of artificial and biological neural networks is their ability to represent and generalize complex information. Artificial neural networks generate novel images of cats after seeing many pictures of cats. Conference attendees generate creative new research directions after carefully listening to a talk. How do neural networks represent, manipulate, and extrapolate complex information given only examples? Previous methods such as FORCE learning have demonstrated the ability of a neural network to replicate complex, and even chaotic, patterns of observed outputs in response to specific patterns of driving inputs. Here, we explain how a neural network further learns the underlying computations performed on the observed outputs. Specifically, we demonstrate that a neural network trained to output slightly translated or rotated chaotic manifolds in response to small driving inputs can extrapolate, generating chaotic manifolds with much larger translations or rotations in response to large driving inputs. We conclude with an analytic understanding of how this extrapolation occurs, yielding design principles for creating neural networks that can generalize knowledge.
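The training setup described above can be illustrated with a minimal sketch. The Python example below is not the authors' code; the Lorenz-attractor target, network size, input encoding, and all hyperparameters are assumptions chosen for illustration. It trains the readout of a random reservoir network with FORCE-style recursive least squares to reproduce a chaotic trajectory translated in proportion to a small constant driving input, and then tests whether the trained network extrapolates to a driving input larger than any seen during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Chaotic target: a normalized Lorenz trajectory (assumed target for illustration) ---
def lorenz_trajectory(T, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x = np.array([1.0, 1.0, 1.0])
    traj = np.zeros((T, 3))
    for t in range(500 + T):                       # discard a 500-step transient
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        if t >= 500:
            traj[t - 500] = x
    return (traj - traj.mean(axis=0)) / traj.std(axis=0)

# --- Reservoir RNN with a FORCE-trained (recursive least squares) readout ---
N, dt, tau = 300, 0.01, 0.1
g = 1.5                                            # recurrent gain in the chaotic regime
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed random recurrent weights
w_fb = rng.uniform(-1, 1, (N, 3))                  # feedback weights from the 3-D readout
w_in = rng.uniform(-1, 1, N)                       # weights for the scalar driving input
w_out = np.zeros((3, N))                           # readout weights, trained by RLS
P = np.eye(N)                                      # RLS inverse-correlation matrix

T_train = 2000
target = lorenz_trajectory(T_train)
shift_dir = np.array([1.0, 0.0, 0.0])              # the drive translates the attractor along x

def run_step(x, r, z, c):
    """One Euler step of the rate network driven by input c and readout feedback z."""
    x = x + dt / tau * (-x + J @ r + w_fb @ z + w_in * c)
    r = np.tanh(x)
    return x, r

x = 0.5 * rng.normal(size=N)
r = np.tanh(x)
z = w_out @ r

# Training: small driving inputs paired with slightly translated copies of the manifold
for c in (-0.1, -0.05, 0.0, 0.05, 0.1):
    for t in range(T_train):
        f = target[t] + c * shift_dir              # desired (translated) output
        x, r = run_step(x, r, z, c)
        z = w_out @ r
        Pr = P @ r                                 # RLS / FORCE update of the readout
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out += np.outer(f - z, k)
        z = w_out @ r                              # feed back the post-update readout

# Test: a driving input larger than any used in training; measure the generated shift
c_test = 0.4
x = 0.5 * rng.normal(size=N)
r = np.tanh(x)
z = w_out @ r
outputs = []
for t in range(2000):
    x, r = run_step(x, r, z, c_test)
    z = w_out @ r
    outputs.append(z.copy())
outputs = np.array(outputs)

print("mean x-offset of generated manifold:", outputs[:, 0].mean())
print("offset expected from linear extrapolation:", c_test)
```

If the extrapolation described in the abstract holds, the reported offset should track the driving input roughly linearly even well outside the trained range of inputs.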
Presenters
- Jason Kim, University of Pennsylvania

Authors
- Jason Kim, University of Pennsylvania
- Zhixin Lu, University of Pennsylvania
- Danielle Bassett, University of Pennsylvania