
Memory capacity of large structured neural networks

POSTER

Abstract

Neural networks have a large capacity for retaining information about input signals. Previous studies have demonstrated that networks need to be primarily feedforward to maximize information recall. However, those studies considered only small networks. Here, we use mean-field methods to characterize information storage in large networks composed of a small number of blocks, with the possibility of recurrent connections within and between blocks. We find that the optimal network structure undergoes sharp phase transitions with increasing average connection strength. These transitions involve the redistribution of neurons between sub-populations, as well as of the connections within and between them. It is typically advantageous to construct a network with a small input-recipient population and a larger downstream population. Recurrent interactions within a sub-population should either be set at the level that maximizes that sub-population's performance on its own or be absent altogether. We also find that, contrary to the results for small networks, feedback connections between sub-populations can increase memory capacity. However, this is observed only in networks larger than a certain critical size, which can be as small as several hundred neurons, depending on the allowed degree of imbalance in the size of sub-populations. These results highlight previously under-appreciated functions of feedback and recurrent connections in neural circuits.
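The abstract describes a block-structured architecture: a small input-recipient population and a larger downstream population, with recurrent connections within each block and feedforward/feedback connections between them. The sketch below is a minimal NumPy illustration of that kind of architecture, not the authors' mean-field analysis; the block sizes, connection strengths, tanh rate dynamics, and linear-readout memory score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block sizes: a small input-recipient population and a larger downstream one
# (the size imbalance the abstract describes). Values here are illustrative.
n_in, n_down = 50, 150
N = n_in + n_down


def block_weights(g_rec_in, g_rec_down, g_ff, g_fb):
    """Gaussian connectivity with block-dependent connection strengths."""
    W = np.zeros((N, N))
    # recurrent connections within each block
    W[:n_in, :n_in] = g_rec_in * rng.standard_normal((n_in, n_in)) / np.sqrt(n_in)
    W[n_in:, n_in:] = g_rec_down * rng.standard_normal((n_down, n_down)) / np.sqrt(n_down)
    # feedforward connections: input-recipient block -> downstream block
    W[n_in:, :n_in] = g_ff * rng.standard_normal((n_down, n_in)) / np.sqrt(n_in)
    # feedback connections: downstream block -> input-recipient block
    W[:n_in, n_in:] = g_fb * rng.standard_normal((n_in, n_down)) / np.sqrt(n_down)
    return W


def memory_score(W, T=2000, max_lag=20):
    """Sum over lags of the R^2 of a linear readout of past inputs."""
    u = np.zeros(N)
    u[:n_in] = rng.standard_normal(n_in)      # input drives the first block only
    s = rng.standard_normal(T)                # scalar input signal
    x = np.zeros(N)
    X = np.empty((T, N))
    for t in range(T):
        x = np.tanh(W @ x + u * s[t])         # discrete-time recurrent dynamics
        X[t] = x
    total = 0.0
    for lag in range(1, max_lag + 1):
        A, y = X[lag:], s[:-lag]              # decode the input from `lag` steps ago
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        total += np.corrcoef(A @ w, y)[0, 1] ** 2
    return total


# Compare a purely feedforward arrangement with one that adds feedback
# between the two sub-populations (illustrative parameter values).
for label, g_fb in [("no feedback", 0.0), ("with feedback", 0.5)]:
    W = block_weights(g_rec_in=0.0, g_rec_down=0.9, g_ff=1.0, g_fb=g_fb)
    print(f"{label:>13}: summed R^2 over lags = {memory_score(W):.2f}")
```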

Publication: Memory capacity of large structured neural networks, by Chung-Yueh Lin, Alexander Kuczala, and Tatyana O. Sharpee, submitted to PRL

Presenters

  • Tatyana O Sharpee

    Salk Institute

Authors

  • Chung-Yueh Lin

    University of California, San Diego

  • Tatyana O Sharpee

    Salk Institute

  • Alexander P Kuczala

    Salk Institute