Non-Intrusive Reduced Order Models with Neural PDEs: The Interpretability Challenge
ORAL
Abstract
Surrogate models of partial differential equations (PDEs) are an important area of research for applications that demand rapid, accurate predictions at low computational cost. Differentiable programming is an emerging paradigm that embeds the expressivity of neural networks inside PDEs, so that the learned model is connected to the physics of the problem by construction. Recent efforts in differentiable programming, such as Neural PDEs, have shown promise in learning accurate parameterizations for PDEs from simulation data. However, several Earth and climate applications involve incomplete or partially known PDEs whose missing terms must be parameterized non-intrusively from observational training data. This makes the learning problem significantly more challenging, and the strengths and weaknesses of differentiable programming in this setting are not well understood. This work systematically studies differentiable programming strategies for learning such dynamics with Neural PDEs. Our results show that differentiable programming can accurately model PDEs while surpassing vanilla neural networks, and that it succeeds even when strong assumptions are made about the missing physics, while requiring less data and lower computational cost. However, we also find that differences in numerical methods between the training data and the Neural PDE have a non-trivial impact on the quality and stability of the learned model, with significant implications for the interpretability and robustness of this technique.
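As a rough illustration of the non-intrusive setting the abstract describes (and not the paper's actual setup), the sketch below embeds a learnable term inside a known 1D diffusion solver and fits it to reference trajectory data. The grid, coefficients, single-parameter "closure" standing in for a neural network, and the finite-difference gradient standing in for automatic differentiation through the solver are all hypothetical choices made for this example.

```python
import numpy as np

# Hypothetical minimal "Neural PDE" sketch: the diffusion term is known
# physics; a reaction-like term is treated as missing and learned from data.
nx, nt = 32, 40
dx, dt, nu = 1.0 / nx, 1e-3, 0.05  # illustrative grid/step/viscosity
x = np.linspace(0.0, 1.0, nx)

def laplacian(u):
    # Periodic second-order finite-difference Laplacian
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def rollout(u0, closure):
    # Forward-Euler time stepping: known diffusion + learned closure
    u = u0.copy()
    for _ in range(nt):
        u = u + dt * (nu * laplacian(u) + closure(u))
    return u

# "Observational" data: generated here with a full reaction term r*u*(1-u);
# in the non-intrusive setting, only the trajectory is available.
true_r = 2.0
u0 = np.exp(-100.0 * (x - 0.5) ** 2)
u_true = rollout(u0, lambda u: true_r * u * (1 - u))

def loss(theta):
    # Mismatch between the Neural PDE rollout and the reference data
    u_pred = rollout(u0, lambda u: theta * u * (1 - u))
    return float(np.mean((u_pred - u_true) ** 2))

# A differentiable-programming framework would backpropagate through the
# solver; a central finite difference stands in for that gradient here.
theta, lr, eps = 0.0, 1e3, 1e-4  # hypothetical hyperparameters
for _ in range(200):
    g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * g

print(f"learned closure coefficient: {theta:.2f}")
```

Because the solver itself appears in the loss, the learned term is constrained by the known physics at every time step, which is the structural advantage over fitting a vanilla neural network directly to the trajectory.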
Presenters
- Arvind T Mohan, Los Alamos National Laboratory

Authors
- Arvind T Mohan, Los Alamos National Laboratory