Learning closure models with neural operator-embedded differentiable CFD
ORAL
Abstract
We discuss an approach using the differentiable physics paradigm that combines known physics with machine learning to develop closure models for the Navier-Stokes equations. Current ML turbulence modeling approaches often distinguish between offline a-priori training and online a-posteriori testing, resulting in additional generalization error that can be challenging to characterize. Advances in algorithmic differentiation and adjoint solvers are enabling a new class of models that embed neural networks into simulations, even during training, allowing the network to learn directly from the desired a-posteriori loss function. In parallel, neural operators that map between function spaces are gaining interest due to their discretization-invariant nature, which allows for broad applicability without retraining. Differentiable physics and neural operators thus form an ideal pairing for learning unknown closures in fluid modeling. We test our approach on a variety of fluid datasets and quantify error across a range of generalization parameters. We find that constraining models with inductive biases, in the form of PDEs that contain known physics or existing closure approaches, produces highly data-efficient, accurate, and generalizable models that outperform state-of-the-art baselines. Adding structure in the form of physics information also brings a level of interpretability to the models, potentially offering a stepping stone to the future of closure modeling.
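The core idea of a-posteriori (solve-then-compare) training can be illustrated with a minimal sketch in JAX. This is not the paper's method or datasets: all names and the toy PDE are illustrative assumptions. A scalar "closure" coefficient stands in for a neural network, embedded inside a differentiable 1D Burgers-type solver, and it is fit by differentiating the loss on the simulated end state through the entire rollout.

```python
# Hedged sketch: a-posteriori training through a differentiable solver.
# A scalar closure parameter `nu` (a stand-in for a neural closure model)
# is embedded in the time-stepping loop; gradients flow through the rollout.
import jax
import jax.numpy as jnp

def step(u, nu, dx, dt):
    # One explicit step of u_t = -u u_x + nu u_xx (periodic, central differences).
    ux = (jnp.roll(u, -1) - jnp.roll(u, 1)) / (2.0 * dx)
    uxx = (jnp.roll(u, -1) - 2.0 * u + jnp.roll(u, 1)) / dx**2
    return u + dt * (-u * ux + nu * uxx)

def rollout(nu, u0, dx, dt, n_steps):
    # The closure parameter lives inside the solver loop, not outside it.
    def body(u, _):
        return step(u, nu, dx, dt), None
    uT, _ = jax.lax.scan(body, u0, None, length=n_steps)
    return uT

def a_posteriori_loss(nu, u0, u_target, dx, dt, n_steps):
    # Loss is evaluated on the *simulated* solution, not on instantaneous
    # closure terms -- this is the a-posteriori (online) objective.
    return jnp.mean((rollout(nu, u0, dx, dt, n_steps) - u_target) ** 2)

# Toy setup: recover a "true" viscosity of 0.1 from end-state data alone.
n = 64
dx, dt, n_steps = 2.0 * jnp.pi / n, 1e-3, 50
x = jnp.arange(n) * dx
u0 = jnp.sin(x)
u_target = rollout(0.1, u0, dx, dt, n_steps)

nu = 0.02  # deliberately wrong initial guess
grad_fn = jax.grad(a_posteriori_loss)
for _ in range(500):
    nu = nu - 20.0 * grad_fn(nu, u0, u_target, dx, dt, n_steps)
```

In the paper's setting the scalar `nu` would be replaced by a neural operator evaluated on the resolved fields, and the explicit step by the CFD solver, but the gradient path through the simulation is the same.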
Presenters
- Varun Shankar (Carnegie Mellon University)

Authors
- Varun Shankar (Carnegie Mellon University)
- Venkat Viswanathan (Carnegie Mellon University)