
Robust Memory Manifolds in Neural Networks

ORAL

Abstract

The ability to store continuous variables in the state of a biological system (e.g., a neural network) is critical for many behaviors. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as frozen stabilization (FS), which allows a family of neural networks to self-organize to a dynamically critical state exhibiting multiple memory manifolds without parameter fine-tuning or symmetries. We find that FS gives rise to networks with a true continuum of fixed points that can function as precise general-purpose neural integrators. The network attractor has a complex global geometry, consisting of a union of multiple uncorrelated continuous attractor "maps". Even on a single map, there is a broad range of relaxation timescales that vary along the attractor. Moreover, FS easily produces robust, low-dimensional memory manifolds in small systems with as few as two neurons. This bears directly upon recent experiments uncovering continuous attractor dynamics in small circuits, such as those in the fly brain. In summary, frozen stabilization leads to robust continuous attractors and a wide range of timescales in recurrent neural networks, without parameter fine-tuning or special symmetries, and without the need for learning. Such memory manifolds could be useful for modeling biological implementations of integrators or cognitive maps.
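
As a point of contrast for the parameter sensitivity the abstract highlights, the sketch below simulates the textbook hand-tuned two-neuron line attractor, not the frozen-stabilization model of the paper; the weights, pulse amplitude, and time constants are illustrative assumptions. Recurrent feedback is tuned so that it exactly cancels the leak along the symmetric direction, turning that direction into an integrator, and a few-percent detuning of the weights makes the stored value decay.

    # Minimal sketch (assumed parameters): a hand-tuned linear line attractor,
    # the classic fine-tuned construction that frozen stabilization avoids.
    import numpy as np

    def simulate(W, inputs, dt=1e-3, tau=1.0):
        # Euler-integrate the linear rate network  tau * dx/dt = -x + W @ x + I(t).
        x = np.zeros(W.shape[0])
        trace = []
        for I in inputs:
            x = x + (dt / tau) * (-x + W @ x + I)
            trace.append(x.copy())
        return np.array(trace)

    # Mutual excitation tuned so each row sums to exactly 1: the symmetric mode
    # (1, 1) then has eigenvalue 1, the leak is cancelled along it, and the
    # network integrates input in that direction while the antisymmetric mode decays.
    w = 0.5
    W = np.array([[w, 1.0 - w],
                  [1.0 - w, w]])

    T = 20_000                      # 20 s of simulation at dt = 1 ms
    inputs = np.zeros((T, 2))
    inputs[2000:4000] = 0.05        # a 2 s input pulse to both neurons

    trace = simulate(W, inputs)
    print("state just after the pulse:", trace[4000])
    print("state at the end (16 s on):", trace[-1])    # persists only with exact tuning

    # Detune the recurrent gain by 5%: the remembered value now decays within
    # tens of seconds, illustrating the fine-tuning problem that FS sidesteps.
    trace_detuned = simulate(0.95 * W, inputs)
    print("detuned state at the end  :", trace_detuned[-1])

With the exact weights, the state reached at the end of the pulse is held nearly indefinitely; with the detuned weights it visibly relaxes back toward zero, which is the fragility that frozen stabilization removes.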

Publication: Tankut Can and Kamesh Krishnamurthy, "Emergence of Memory Manifolds," arXiv:2109.03879 (2021).

Presenters

  • Tankut U Can

    School of Natural Sciences, Institute for Advanced Study, Princeton

Authors

  • Tankut U Can

    School of Natural Sciences, Institute for Advanced Study, Princeton

  • Kamesh Krishnamurthy

    Princeton University