Self-Supervised Learning for Material Property Prediction
ORAL
Abstract
Machine learning (ML) models have been used successfully to predict material properties. However, the large labeled datasets required to train accurate ML models are scarce and computationally expensive to generate. Recent advances in Self-Supervised Learning (SSL), which trains ML models on unlabeled data, mitigate this problem and have demonstrated superior performance in computer vision and natural language processing tasks. Drawing inspiration from these developments, we introduce Crystal Twins (CT), a generic SSL framework that leverages unlabeled data for crystalline materials property prediction. CT adopts a twin Graph Neural Network (GNN) and learns representations by forcing the graph latent embeddings of augmented instances of the same crystalline system to be similar. We implement the Barlow Twins and SimSiam frameworks for self-supervised learning in CT. By sharing the pre-trained weights when fine-tuning the GNN for downstream tasks, we improve the GNN's performance on 14 challenging material property prediction benchmarks.
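As background for the pre-training objective the abstract mentions, below is a minimal PyTorch sketch of the Barlow Twins loss applied to embeddings from a twin GNN encoder. It illustrates the general technique only, not the authors' implementation; the function name `barlow_twins_loss` and the `lambda_offdiag` weight are hypothetical, and the GNN encoder and crystal-graph augmentation pipeline are assumed to exist elsewhere.

```python
import torch


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambda_offdiag: float = 5e-3) -> torch.Tensor:
    """Barlow Twins loss between two batches of graph embeddings (N x D).

    z_a and z_b hold the latent embeddings of two augmented views of the
    same batch of crystal graphs, produced by a shared GNN encoder.
    """
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + 1e-6)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + 1e-6)
    # Empirical cross-correlation matrix between the two views (D x D).
    c = (z_a.T @ z_b) / n
    # Invariance term: pull diagonal entries toward 1 so both augmented
    # views of the same crystal map to the same features.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0 so
    # different feature dimensions stay decorrelated.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag


# Example usage with random stand-ins for the two augmented views:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = barlow_twins_loss(z1, z2)
```

During pre-training this loss would replace a supervised objective; the resulting encoder weights are then shared with the GNN that is fine-tuned on each downstream property prediction task.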
Publication: https://arxiv.org/abs/2205.01893
Presenters
- Rishikesh Magar, Carnegie Mellon University

Authors
- Rishikesh Magar, Carnegie Mellon University
- Amir Barati Farimani, Carnegie Mellon University