
Neural subspaces and the limits of mean-field theory

ORAL

Abstract

There is a widely held intuition that the dynamics of neural networks are dominated by projections of the joint activity onto low-dimensional subspaces. We give this a statistical physics formulation, in which the interactions among neurons are mediated by these projections. The models can be seen as maximum entropy models that match the mean activity of individual neurons and the (co)variance of activity along projections, or more generally the full distribution along projections. If the number of projections is small and the number of neurons is large, models should be solvable in mean-field theory. We show that applying this framework to real data (from the hippocampus and the cortex) leads to difficulties, and naive mean-field models break down because the data drive parameters close to a first-order transition; the resulting models also fail to describe higher-order features of the data. This problem disappears if one chooses projections at random, but these models are almost noninteracting, with entropy close to that of independent neurons. Models that match the distribution of informative projections are more successful, but only if they are poised near a critical point. Networks with the same mean activities but weaker correlations are described by models further from the critical point, showing that near-criticality is a quantitative property of these systems. We explore optimal choices of the projections.
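A minimal sketch of the model class described above, assuming binary activities $\sigma_i \in \{0,1\}$ for $i = 1, \dots, N$ neurons, $K$ projection vectors $\{u^\mu\}$ with $K \ll N$, and projected activity $x_\mu = \sum_i u^\mu_i \sigma_i$; the symbols $h_i$, $J_{\mu\nu}$, and $V_\mu$ are illustrative notation, not taken from the talk. Matching the mean activity of each neuron and the (co)variances along the projections gives a maximum entropy distribution of the form
\[
P(\sigma) = \frac{1}{Z}\exp\!\Big(\sum_i h_i \sigma_i + \tfrac{1}{2}\sum_{\mu\nu} J_{\mu\nu}\, x_\mu x_\nu\Big),
\]
while matching the full distribution along each projection promotes the quadratic term to an arbitrary potential,
\[
P(\sigma) = \frac{1}{Z}\exp\!\Big(\sum_i h_i \sigma_i + \sum_\mu V_\mu(x_\mu)\Big).
\]
Because only $K \ll N$ collective variables appear in the exponent, the partition function can in principle be evaluated by a saddle-point calculation over the $x_\mu$, which is the mean-field solvability the abstract refers to.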

Presenters

  • Francesca Mignacco

    The Graduate Center, City University of New York

Authors

  • Francesca Mignacco

    The Graduate Center, City University of New York

  • Christopher W Lynn

    Yale University

  • William S Bialek

    Princeton University