On basins of attraction in attractor neural networks

ORAL

Abstract

We present an in-depth study of the basins of attraction of $ \pm 1$ patterns inscribed, following the Hebbian hypothesis [1], in a spin-glass-like neural network. The aim is to investigate whether a {\it non-zero} basin of attraction is a sufficient condition for the stability of an inscribed state, the necessary condition being that the inscribed state be retrieved without error. While this holds for the Hopfield model [1], we find that the following model is an exception: as many as {\it p=N-1} stored patterns ({\it N} being the number of neurons in a fully connected network) can be retrieved without error, yet their basins of attraction shrink steadily as {\it p} increases and vanish around {\it p=0.8N}. The model proposes that the information that comes to be recorded in the brain is first orthogonalized (as in Gram-Schmidt orthogonalization) and then inscribed in the synaptic weights. While the orthogonalized versions of the input vectors with $ \pm 1$ components are stored in the model brain, the original vectors/patterns are retrieved exactly when checked for retrieval. Simulations are presented that give insight into the energy landscape in the space spanned by the network states.\\[4pt] [1] J.J. Hopfield, {\it PNAS} {\bf 79}, 2554 (1982)
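The storage scheme described above can be illustrated with a minimal NumPy sketch. It assumes (as one plausible reading of the abstract) that the weight matrix is the Hebbian sum over the Gram-Schmidt orthonormalized patterns, which makes it the projector onto the span of the stored patterns; the sizes {\it N} and {\it p} below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 64, 32  # illustrative network size and pattern count (p < N)

# p random +/-1 patterns (rows); random sign vectors with p < N are
# almost surely linearly independent
xi = rng.choice([-1.0, 1.0], size=(p, N))

# Gram-Schmidt orthonormalization of the patterns, in storage order
v = []
for pattern in xi:
    u = pattern - sum((pattern @ w) * w for w in v)
    v.append(u / np.linalg.norm(u))
v = np.array(v)

# Hebbian-style weights built from the orthonormalized vectors:
# W = sum_mu v_mu v_mu^T, i.e. the projector onto span{xi_1, ..., xi_p}
W = v.T @ v

# One-step retrieval: each xi_mu lies in the span, so W @ xi_mu = xi_mu
# and sign(W @ xi_mu) reproduces the original pattern exactly
retrieved = np.sign(W @ xi.T).T
print(np.array_equal(retrieved, xi))  # True: all p patterns retrieved without error
```

Because W acts as a projector, every stored pattern is an exact fixed point of the sign dynamics even though only the orthogonalized versions enter the weights, consistent with the error-free retrieval claimed above; probing the size of the basins around these fixed points as {\it p} grows is the subject of the reported simulations.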

Authors

  • Suchitra Sampath

    Centre for Neural and Cognitive Sciences, University of Hyderabad, Hyderabad - 500046, India

  • Vipin Srivastava

    Centre for Neural and Cognitive Sciences, University of Hyderabad, Hyderabad - 500046, India