Scalable maximally informative dimensions analysis of deep neural networks

ORAL

Abstract

Maximally informative dimensions (MID) is a technique in neuroscience for analyzing neural responses to natural stimuli. It assumes that neurons are sensitive only to a low-dimensional subspace of the high-dimensional stimulus space, and it extracts those relevant dimensions by maximizing the mutual information between the neural response and projections of the stimulus. Despite its advantages, MID suffers from poor scalability of its optimization; in practice, no more than a handful of dimensions can be recovered. Here, we present a method based on variational lower bounds of mutual information that allows for the efficient extraction of a large number of informative dimensions. We demonstrate this method by studying a deep neural network trained on CIFAR-10, and suggest possible applications to the information-theoretic view of deep learning as well as a new, principled method of visualizing multiple different facets of a neuron.
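To make the core idea concrete, here is a minimal toy sketch of the MI-maximization principle behind MID, on entirely synthetic data (the stimulus dimensionality, the model "neuron", and the plug-in histogram MI estimator are all illustrative assumptions, not the abstract's actual method, which uses variational lower bounds precisely because naive estimation like this does not scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: high-dimensional Gaussian stimuli (not the paper's data)
D = 20          # stimulus dimensionality
N = 20000       # number of stimulus samples
X = rng.standard_normal((N, D))

# A model "neuron" sensitive to a single hidden direction (here, coordinate 3)
w_true = np.zeros(D)
w_true[3] = 1.0
rate = 1.0 / (1.0 + np.exp(-3.0 * X @ w_true))   # sigmoid nonlinearity
y = (rng.random(N) < rate).astype(int)           # binary spike / no-spike response

def mutual_information(proj, y, bins=20):
    """Plug-in MI estimate (in nats) between a 1-D stimulus projection
    and a binary neural response, using quantile-binned histograms."""
    edges = np.quantile(proj, np.linspace(0.0, 1.0, bins + 1))
    joint, _, _ = np.histogram2d(proj, y, bins=[edges, [-0.5, 0.5, 1.5]])
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Score each coordinate axis as a candidate informative dimension:
# the direction the neuron actually depends on carries the most information.
scores = [mutual_information(X[:, d], y) for d in range(D)]
best = int(np.argmax(scores))
```

In this toy, `best` recovers index 3, the direction the model neuron depends on. Real MID optimizes over arbitrary directions (not just coordinate axes), and the histogram estimator becomes the bottleneck as the number of dimensions grows, which is the scalability problem the variational lower-bound approach is meant to address.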

Presenters

  • Jimmy Kim

    Northwestern University

Authors

  • Jimmy Kim

    Northwestern University

  • David J. Schwab

    Initiative for the Theoretical Sciences, The Graduate Center, City University of New York