The Manifold Packing Loss Function: A Physics-Inspired Approach to Contrastive Self-Supervised Learning

ORAL

Abstract

Contrastive self-supervised learning is a powerful machine learning framework that leverages unlabeled data to learn low-dimensional representations by defining loss functions that distinguish between positive (similar) and negative (dissimilar) pairs. From a geometric perspective, the neural representations of similar and dissimilar samples form distinct manifolds in the representation space, and separating these manifolds resembles the packing problem in physics. In this work, we enclose these neural manifolds within ellipsoids and propose a novel physics-inspired loss function based on physical potentials. This loss function penalizes overlap between manifolds, minimizes manifold size, and aligns manifolds when they overlap. By applying this loss function during self-supervised pretraining, we demonstrate the efficacy of our approach on downstream classification tasks, using a linear classifier on top of the pretrained deep network. These results highlight the potential of our framework to enhance the performance of self-supervised learning.
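To make the packing analogy concrete, the sketch below illustrates one simplified way such a loss could be computed: each manifold is summarized by the centroid and covariance of its representations, its enclosing ellipsoid is approximated by a sphere of radius set by the covariance trace, and a harmonic-style potential penalizes interpenetration between spheres while a size term shrinks each manifold. The function names, the spherical approximation, and the specific penalty weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ellipsoid_stats(points):
    """Centroid and covariance of a point cloud (one neural manifold).

    points: array of shape (n_samples, dim).
    """
    center = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    return center, cov

def manifold_packing_loss(manifolds, overlap_weight=1.0, size_weight=0.1):
    """Simplified packing-style loss over a list of manifolds.

    Illustrative sketch only: enclosing ellipsoids are approximated by
    spheres with radius sqrt(trace(cov)), overlap is penalized with a
    harmonic (squared-overlap) potential, and trace(cov) serves as the
    manifold-size penalty. Alignment terms are omitted for brevity.
    """
    stats = [ellipsoid_stats(m) for m in manifolds]
    loss = 0.0
    # Size term: encourage each manifold to be compact.
    for _, cov in stats:
        loss += size_weight * np.trace(cov)
    # Overlap term: penalize pairs whose bounding spheres interpenetrate.
    for i in range(len(stats)):
        for j in range(i + 1, len(stats)):
            ci, cov_i = stats[i]
            cj, cov_j = stats[j]
            ri = np.sqrt(np.trace(cov_i))
            rj = np.sqrt(np.trace(cov_j))
            d = np.linalg.norm(ci - cj)
            overlap = max(0.0, ri + rj - d)
            loss += overlap_weight * overlap ** 2
    return loss
```

In a pretraining loop, each manifold would be the set of representations of augmented views of one sample, and minimizing this loss would push distinct manifolds apart while compressing each one, analogous to relaxing a packing of soft particles.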

Presenters

  • Guanming Zhang

    New York University (NYU)

Authors

  • Guanming Zhang

    New York University (NYU)

  • David J Heeger

New York University (NYU)

  • Stefano Martiniani

    New York University (NYU)