End-to-End Differentiability and Tensor Processing Unit Computing to Accelerate Materials’ Inverse Design
POSTER
Abstract
Simulations have revolutionized material design. However, although simulations excel at mapping an input material to its output property, their direct application to inverse design (i.e., mapping a target property to an optimal material) has traditionally been limited by their high computational cost and lack of differentiability, so that surrogate machine learning models are often substituted for simulations in inverse design problems. Here, we introduce a computational inverse design framework built on end-to-end differentiable simulations that addresses these challenges. Importantly, this pipeline leverages, for the first time, the power of tensor processing units (TPUs), an emerging family of dedicated chips that, although specialized in deep learning, are flexible enough for intensive scientific simulations.
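The abstract describes inverse design by differentiating through the forward simulation itself rather than a surrogate model. As an illustration only, the minimal JAX sketch below optimizes material parameters by gradient descent through a toy analytic stand-in for a real simulator; the names `simulate`, `loss`, and `inverse_design` are hypothetical and do not reflect the authors' implementation. JAX programs of this form compile via XLA and can run on TPUs.

```python
import jax
import jax.numpy as jnp

# Toy differentiable "simulation": maps material parameters x
# (e.g., a composition vector) to a scalar property.
# A real differentiable simulator would replace this function.
def simulate(x):
    return jnp.sum(jnp.sin(x) * x**2)

# Inverse design objective: match the simulated property to a target.
def loss(x, target):
    return (simulate(x) - target) ** 2

# Gradients flow end to end through the simulation.
grad_loss = jax.jit(jax.grad(loss))

def inverse_design(target, x0, lr=0.01, steps=500):
    x = x0
    for _ in range(steps):
        x = x - lr * grad_loss(x, target)  # plain gradient descent
    return x

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    x0 = jax.random.uniform(key, (8,))
    x_opt = inverse_design(target=2.0, x0=x0)
    print("designed parameters:", x_opt)
    print("achieved property:", simulate(x_opt))
```

In practice, the gradient step would typically be replaced by an off-the-shelf optimizer, and the forward pass by the differentiable physics simulation; the structure of the loop is unchanged.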
Presenters
- Mathieu Bauchy, University of California, Los Angeles
Authors
- Han Liu, University of California, Los Angeles
- Yuhan Liu, University of California, Los Angeles
- Mathieu Bauchy, University of California, Los Angeles