Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?

ORAL

Abstract

Equivariance has emerged as a desirable property of neural representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. However, the expressivity of a representation constrained by group equivariance is still not fully understood. We address this gap by providing a generalization of Cover's Function Counting Theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. While the other operations leave the fraction of separable dichotomies unchanged, local pooling decreases it, despite being a highly nonlinear operation. Finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement. These results shed light on biological and artificial neural representations that are equivariant to input transformations.
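
The counting result can be checked numerically. Below is a minimal sketch (not from the paper) that assumes bias-free (homogeneous) linear readouts, a cyclic-shift group acting on the spatial axis of a random channels-by-positions representation, and points in general position; the helper names cover_fraction, is_separable, and views are illustrative. It estimates the fraction of invariant dichotomies that are separable across all shifted views, the fraction separable on the projection onto the fixed subspace (the per-channel means), and Cover's formula evaluated at the fixed-subspace dimension:

    import numpy as np
    from math import comb
    from scipy.optimize import linprog

    def cover_fraction(P, N):
        """Cover's fraction of homogeneously separable dichotomies of
        P points in general position in R^N."""
        return 2.0 ** (1 - P) * sum(comb(P - 1, k) for k in range(min(N, P)))

    def is_separable(X, y):
        """Feasibility of y_i * (w . x_i) >= 1 (bias-free readout) via an LP."""
        A_ub = -(y[:, None] * X)              # -y_i x_i . w <= -1
        res = linprog(c=np.zeros(X.shape[1]), A_ub=A_ub, b_ub=-np.ones(len(y)),
                      bounds=[(None, None)] * X.shape[1], method="highs")
        return res.success

    rng = np.random.default_rng(0)
    C, N, P = 4, 8, 8                         # channels, positions, objects
    X = rng.standard_normal((P, C, N))        # random equivariant representations

    def views(x):
        """All N cyclic shifts of one object, flattened for a linear readout."""
        return np.stack([np.roll(x, s, axis=1).ravel() for s in range(N)])

    X_views = np.stack([views(x) for x in X])  # shape (P, N, C*N)
    X_fixed = X.mean(axis=2)                   # projection onto the fixed subspace

    trials = 1000
    hit_views = hit_fixed = 0
    for _ in range(trials):
        y = rng.choice([-1.0, 1.0], size=P)
        # invariant dichotomy: every view of object i carries label y_i
        hit_views += is_separable(X_views.reshape(P * N, C * N), np.repeat(y, N))
        hit_fixed += is_separable(X_fixed, y)

    print(f"all views     : {hit_views / trials:.3f}")
    print(f"fixed subspace: {hit_fixed / trials:.3f}")
    print(f"Cover, N0 = C : {cover_fraction(P, C):.3f}")  # expect 0.5 here

For cyclic shifts the subspace fixed by the group action is spanned by the per-channel constant vectors, so its dimension is the channel count C; consistent with the abstract's claim, the two empirical fractions coincide with each other and with Cover's formula evaluated at N0 = C.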

Publication: arXiv (https://arxiv.org/abs/2110.07472).
Submitted to ICLR 2022 (https://iclr.cc).

Presenters

  • Matthew S Farrell

    Harvard University

Authors

  • Matthew S Farrell

    Harvard University

  • Blake Bordelon

    Harvard University

  • Shubhendu Trivedi

    Massachusetts Institute of Technology

  • Cengiz Pehlevan

    Harvard University