
Dynamically learning neural interactions that improve information

ORAL

Abstract

In the brain, information about an external signal is encoded in the response of a set of interacting neurons. One expects that the underlying neural interactions are not random, but instead reflect the statistical features of the expected stimuli and neural responses. By estimating the mutual information between the stimulus and the response of a set of interacting neurons, we show that randomly connected neurons, on average, provide no improvement in mutual information over non-interacting neurons. This is because interactions that increase and interactions that decrease the mutual information are statistically equally likely. To increase information, the network must search the space of possible interactions, and this search comes at an energetic cost. To demonstrate a possible unsupervised search mechanism, we develop a model of orientation-selective neurons in which the interactions between neurons are learned dynamically from the responses to external stimuli. Using the temporal history of the neural correlations, the network adjusts its interactions dynamically, increasing the mutual information between stimuli and neural responses.
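The central comparison in the abstract (random couplings vs. no couplings) can be illustrated with a toy calculation. The sketch below is not the authors' model: it assumes a small Ising-style population of binary neurons driven by a binary stimulus, with hypothetical parameter choices (4 neurons, stimulus drive 0.5, coupling scale 0.3), and computes the exact mutual information by enumerating all response patterns. Averaging over many random coupling matrices lets one check whether random interactions help on average.

```python
import itertools
import numpy as np

def response_probs(s, h, J, states):
    # Gibbs weights p(r|s) ∝ exp(s·h·r + ½ rᵀJr) over all 2^N patterns r
    E = states @ (s * h) + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    w = np.exp(E)
    return w / w.sum()

def mutual_information(h, J, states, stimuli=(-1.0, 1.0)):
    # Exact MI (in bits) between a uniform binary stimulus and the
    # full response pattern, via I = sum_s p(s) KL(p(r|s) || p(r))
    ps = 1.0 / len(stimuli)
    cond = np.array([response_probs(s, h, J, states) for s in stimuli])
    marg = ps * cond.sum(axis=0)
    mi = 0.0
    for p_r in cond:
        mask = p_r > 0
        mi += ps * np.sum(p_r[mask] * np.log2(p_r[mask] / marg[mask]))
    return mi

N = 4  # small so that exact enumeration of 2^N responses is cheap
states = np.array(list(itertools.product([-1.0, 1.0], repeat=N)))
h = 0.5 * np.ones(N)  # assumed stimulus drive on each neuron
rng = np.random.default_rng(0)

mi_independent = mutual_information(h, np.zeros((N, N)), states)

mis_random = []
for _ in range(200):
    # random symmetric couplings with zero diagonal
    J = 0.3 * rng.standard_normal((N, N))
    J = np.triu(J, 1)
    J = J + J.T
    mis_random.append(mutual_information(h, J, states))

print(f"MI, no interactions:     {mi_independent:.3f} bits")
print(f"MI, random interactions: mean {np.mean(mis_random):.3f} bits "
      f"(min {min(mis_random):.3f}, max {max(mis_random):.3f})")
```

Individual random coupling matrices can raise or lower the mutual information relative to the non-interacting baseline; it is the spread between the minimum and maximum, together with a mean near the baseline, that mirrors the abstract's point that favorable and unfavorable interactions are equally distributed, so a search (or learning rule) is needed to find the favorable ones.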

Presenters

  • Vijay Singh

    Physics, North Carolina A&T State University

Authors

  • Martin Tchernookov

    Physics, University of Wisconsin-Whitewater

  • Vijay Singh

    Physics, North Carolina A&T State University