Gated Recurrent Neural Networks 1: marginal stability and line attractors

ORAL

Abstract

Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating (i.e., multiplicative) interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we use dynamical mean-field theory (DMFT) and random matrix theory to study the consequences of gating in RNNs. Specifically, we show that gating robustly produces marginal stability and line attractors, which are important mechanisms for biologically relevant computations requiring long memory. Prior models of line attractors require fine-tuning, and their relevance has been debated. Our results suggest gating as a new paradigm for achieving line-attractor dynamics without fine-tuning. Gating may also underlie the superior ability of gated ML RNNs to learn tasks with long-time correlations, in line with empirical studies of trained RNNs.
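
To make "gating as a multiplicative interaction" concrete, the following is a minimal numerical sketch, not the authors' exact model: a continuous-time RNN whose update gate multiplies the state derivative, dh/dt = sigmoid(z) * (-h + J tanh(h)), with the gate variable z driven by its own random couplings. The network size, gain, gate placement, and all variable names below are illustrative assumptions.

import numpy as np

# Minimal sketch of a gated RNN with a multiplicative update gate.
# Parameter choices are illustrative, not taken from the talk.
rng = np.random.default_rng(0)
N = 200                                              # network size
g = 2.0                                              # coupling gain
J = g * rng.standard_normal((N, N)) / np.sqrt(N)     # recurrent couplings
Jz = rng.standard_normal((N, N)) / np.sqrt(N)        # couplings driving the gate

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(h, z, dt=0.05):
    # The update gate multiplies the entire time derivative of h,
    # acting like a per-neuron, state-dependent time constant.
    dh = sigmoid(z) * (-h + J @ np.tanh(h))
    dz = -z + Jz @ np.tanh(h)
    return h + dt * dh, z + dt * dz

h = rng.standard_normal(N)
z = rng.standard_normal(N)
for _ in range(2000):
    h, z = step(h, z)
print("final activity norm per neuron:", np.linalg.norm(h) / np.sqrt(N))

Because the gate rescales the effective time constant neuron by neuron, small gate values slow the local dynamics, which is one intuitive route to the long memory and marginal stability discussed in the abstract.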

Presenters

  • Kamesh Krishnamurthy

    Dept. of Physics and Princeton Neuroscience Institute, Princeton University

Authors

  • Kamesh Krishnamurthy

    Dept. of Physics and Princeton Neuroscience Institute, Princeton University

  • Tankut Can

    Initiative for the Theoretical Sciences, The Graduate Center, CUNY

  • David J Schwab

    Initiative for the Theoretical Sciences, The Graduate Center, CUNY