Interpretable fine-tuning of graph neural network surrogates
ORAL
Abstract
Data-based surrogate modeling for fluid flows has seen a surge in capability in recent years with the emergence of graph neural networks (GNNs). GNN-based models offer notable advantages over conventional neural network architectures due to their inherent ability to operate directly on mesh-based representations of data. This, in turn, translates into (a) an ability to model flow fields in complex geometries described by unstructured meshes, and (b) an ability to extrapolate to unseen geometries. The goal of this work is to illuminate an additional, equally critical advantage that can be provided within the GNN framework: the ability to generate interpretable latent spaces (or latent graphs). Given a pre-trained baseline GNN surrogate, we show how a fine-tuning procedure for an adaptive graph pooling module can be used to identify interpretable latent graphs optimized for various modeling tasks. More specifically, the fine-tuning procedure identifies latent graphs through learnable node subsampling; these latent graphs are characterized by connectivity matrices that identify unsteady coherent features in the flow. Emphasis is placed on showcasing the fine-tuning approach for improving (a) baseline forecasting accuracy, and (b) forecasting stability. Demonstrations are performed using unstructured flow data sourced from turbulent flow over a backward-facing step at high Reynolds numbers.
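To make the pooling idea concrete, the following is a minimal sketch of learnable node subsampling in plain PyTorch, assuming a Top-K style scoring layer; the names (TopKNodeSampler, score_vec, ratio) are hypothetical placeholders for illustration and are not taken from the authors' implementation.

import torch
import torch.nn as nn


class TopKNodeSampler(nn.Module):
    """Learnable node subsampling: score nodes with a trainable vector,
    retain the top fraction, and keep only edges between retained nodes.
    The retained nodes and the restricted connectivity form the latent graph."""

    def __init__(self, num_features, ratio=0.25):
        super().__init__()
        self.ratio = ratio
        self.score_vec = nn.Parameter(torch.randn(num_features))

    def forward(self, x, edge_index):
        # x: [num_nodes, num_features] nodal flow features
        # edge_index: [2, num_edges] mesh connectivity
        num_nodes = x.size(0)
        scores = (x @ self.score_vec) / self.score_vec.norm()
        k = max(1, int(self.ratio * num_nodes))
        top_scores, perm = scores.topk(k)

        # Gate retained features by their scores so the scoring vector
        # receives gradients through the downstream forecasting loss.
        x_sub = x[perm] * torch.tanh(top_scores).unsqueeze(-1)

        # Restrict connectivity to edges whose endpoints both survive,
        # then relabel node indices into the subsampled graph.
        kept = torch.zeros(num_nodes, dtype=torch.bool)
        kept[perm] = True
        edge_mask = kept[edge_index[0]] & kept[edge_index[1]]
        relabel = torch.full((num_nodes,), -1, dtype=torch.long)
        relabel[perm] = torch.arange(k)
        edge_index_sub = relabel[edge_index[:, edge_mask]]
        return x_sub, edge_index_sub, perm


# Toy usage: random node features and connectivity stand in for a mesh graph.
x = torch.randn(500, 8)
edge_index = torch.randint(0, 500, (2, 2000))
sampler = TopKNodeSampler(num_features=8, ratio=0.25)
x_sub, edge_index_sub, perm = sampler(x, edge_index)

# In a fine-tuning setting of the kind described in the abstract, a pre-trained
# surrogate (not shown) would be frozen and only sampler.parameters() optimized
# against the forecasting objective, e.g.:
#   optimizer = torch.optim.Adam(sampler.parameters(), lr=1e-4)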
Presenters
- Shivam Barwey, Argonne National Laboratory
Authors
- Shivam Barwey, Argonne National Laboratory
- Romit Maulik, Pennsylvania State University