Inferring structure-function relationships in neural networks from geometric features of their error landscapes

ORAL

Abstract

Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The synaptic connectivity between neurons in these networks is a critical determinant of overall network function. However, which connectivity features are specifically required to generate measurable network-level computations remains largely unknown. Here we introduce a geometric framework for calculating and analyzing connectivity constraints in nonlinear recurrent neural networks of rectified linear units. We focus on network-level functions defined by steady-state responses to a finite set of input conditions. By analytically determining how well any network approximates the responses, we show that error landscapes typically have degenerate global minima with several flat and semi-flat dimensions. The latter emerge from the rectifying nonlinearities. Further, as the noise or error level increases, topological transitions occur where the space of admissible solutions changes suddenly and drastically, reminiscent of the abrupt changes associated with phase transitions in physical systems. These results allow us to develop a formalism for extracting rigorous insights into neural network structure and function from the geometries and topological transitions of error surfaces.
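To make the setup concrete, the sketch below illustrates the kind of error function the abstract describes: steady-state responses of a recurrent network of rectified linear units, compared against target responses over a finite set of input conditions. The fixed-point iteration, function names, and network sizes are illustrative assumptions, not the authors' analytical framework.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def steady_state(W, b, u, n_iter=500, tol=1e-10):
    """Fixed-point iteration for x = relu(W @ x + b + u).

    Convergence is assumed here (e.g., weak recurrent coupling);
    the abstract's treatment is analytical rather than iterative.
    """
    x = np.zeros(W.shape[0])
    for _ in range(n_iter):
        x_new = relu(W @ x + b + u)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def network_error(W, b, inputs, targets):
    """Sum of squared errors between steady-state responses and the
    target responses, accumulated over a finite set of input conditions."""
    err = 0.0
    for u, y in zip(inputs, targets):
        x = steady_state(W, b, u)
        err += np.sum((x - y) ** 2)
    return err

# Example: error of a small random network on two input conditions.
rng = np.random.default_rng(0)
n = 3
W = 0.4 * rng.standard_normal((n, n))   # weak coupling so the iteration converges
b = rng.standard_normal(n)
inputs = [rng.standard_normal(n) for _ in range(2)]
targets = [relu(rng.standard_normal(n)) for _ in range(2)]
print(network_error(W, b, inputs, targets))
```

Viewed as a function of the connectivity W (with inputs and targets fixed), this error defines the landscape whose flat directions and topological transitions the abstract analyzes.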

Presenters

  • James Fitzgerald

    Janelia Research Campus

Authors

  • James Fitzgerald

    Janelia Research Campus

  • Tirthabir Biswas

    Janelia Research Campus