
Quantifying success and failure in simple models of large neural populations

ORAL

Abstract

In statistical physics we routinely study models for collective behaviors that are simpler than the underlying microscopic mechanisms. In biological systems, one systematic implementation of this idea is the maximum entropy method, where we match some features of the data but otherwise the model has as little structure as possible. To understand whether this approach “works,” it would be attractive to have a testing ground where we could see the same model succeed or fail to describe different but related systems. Recent experiments monitor the activity of 1000+ cells in the mouse hippocampus as the animal runs through a virtual environment. The scale of these data allows us to construct models for many different subsets of neurons drawn out of the whole population. We test many predictions of these models, and find that quantitative agreement with experiment is best when the group of cells is spatially contiguous; if we draw the same number of cells at random from large regions, the agreement gets systematically worse. Strikingly, the different predictions fail in an ordered way, so we can rank the different collective behaviors of the network activity by the degree of difficulty in getting them right. This serves to make precise what it means for these models to work.
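For context, a common realization of the maximum entropy idea for neural populations (the abstract does not specify the features matched, so the particular form below is an assumption) is the pairwise model: binarize each neuron's activity as $\sigma_i \in \{0,1\}$, match the measured mean activities and pairwise correlations, and otherwise leave the distribution maximally unstructured:

% illustrative sketch; the choice of matched features (means and pairwise correlations) is an assumption
\[
P(\{\sigma_i\}) \;=\; \frac{1}{Z}\,
\exp\!\left( \sum_i h_i \sigma_i \;+\; \tfrac{1}{2}\sum_{i \neq j} J_{ij}\,\sigma_i \sigma_j \right),
\]

where the fields $h_i$ and couplings $J_{ij}$ are adjusted so that the model reproduces the observed $\langle \sigma_i \rangle$ and $\langle \sigma_i \sigma_j \rangle$, and $Z$ normalizes the distribution. Predictions for higher-order structure (e.g., the distribution of summed activity or triplet correlations) then serve as tests of whether the simplified model "works."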

Presenters

  • Leenoy Meshulam

    Massachusetts Institute of Technology (MIT)

Authors

  • Leenoy Meshulam

    Massachusetts Institute of Technology (MIT)

  • Jeffrey Gauthier

    Swarthmore College

  • Carlos Brody

    Princeton Neuroscience Institute, Princeton University

  • David Tank

    Princeton Neuroscience Institute, Princeton University

  • William S Bialek

    Department of Physics, Princeton University