Understanding Self-Assembly Behavior with Self-Supervised Learning

ORAL

Abstract

Recently, deep learning models trained on enormous amounts of data using simple language-modeling tasks have shown great promise when applied to new problems, including the generation of novel text. These results have spurred the proliferation of attention mechanisms, which are valued both for their expressive power and for their interpretability: model behavior can be inspected by viewing the attention weights for a given input. In this work, we present several permutation- and rotation-equivariant neural network architectures that use attention mechanisms to solve self-supervised tasks on point clouds. We show how the representations learned by these networks can be applied to understand the structural evolution of systems of self-assembling particles. Equivariant architectures such as those shown here can help bring the power of deep learning to new condensed matter systems, opening the door to new ways to analyze, and even generate, novel local environments within ordered structures.
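
To make the architectural idea concrete, the snippet below is a minimal NumPy sketch of a permutation-equivariant, rotation-invariant attention layer on a point cloud. It is not the architecture from the linked publication (which builds attention from geometric algebra products); it only illustrates one simple way to obtain the stated symmetries, by biasing the attention logits with rotation-invariant pairwise distances. All names here (pairwise_distances, attention_layer, w_q, w_k, w_v) are hypothetical.

    import numpy as np

    def pairwise_distances(points):
        # points: (N, 3) coordinates of one local point cloud
        diff = points[:, None, :] - points[None, :, :]
        return np.linalg.norm(diff, axis=-1)  # (N, N); unchanged by rigid rotations

    def attention_layer(points, features, w_q, w_k, w_v):
        # Coordinates enter only through rotation-invariant distances, so the
        # output features are invariant to rotations of the coordinates and
        # equivariant to permutations of the points.
        d = pairwise_distances(points)                  # (N, N)
        q, k, v = features @ w_q, features @ w_k, features @ w_v
        logits = q @ k.T / np.sqrt(k.shape[-1]) - d     # distance-biased logits
        logits -= logits.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(logits)
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ v                              # (N, h) updated features

    rng = np.random.default_rng(0)
    pts, feats = rng.normal(size=(8, 3)), rng.normal(size=(8, 4))
    w_q, w_k, w_v = (rng.normal(size=(4, 4)) for _ in range(3))
    out = attention_layer(pts, feats, w_q, w_k, w_v)

    # Rotation invariance: an orthogonal transform of the coordinates
    # leaves the output unchanged.
    rot, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    assert np.allclose(out, attention_layer(pts @ rot.T, feats, w_q, w_k, w_v))

    # Permutation equivariance: reordering the points reorders the output rows.
    perm = rng.permutation(8)
    assert np.allclose(out[perm],
                       attention_layer(pts[perm], feats[perm], w_q, w_k, w_v))

Because the coordinates enter only through pairwise distances, a rotation cannot change the attention weights, and because every operation acts row-wise on the point set, permuting the inputs simply permutes the outputs; both properties are checked by the assertions above.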

Publication: https://arxiv.org/abs/2110.02393; a more tailored preprint is under review as of this submission.

Presenters

  • Matthew Spellings

    Vector Institute for Artificial Intelligence

Authors

  • Matthew Spellings

    Vector Institute for Artificial Intelligence

  • Maya Martirossyan

    Cornell University

  • Julia Dshemuchadse

    Cornell University