Learning flux limiters using differentiable solvers and integrating them into production CFD codes

ORAL

Abstract

We present a data-driven framework for learning optimal second-order total variation diminishing (TVD) flux limiters via differentiable simulations. In our fully differentiable finite volume solvers, the limiter functions are replaced by neural networks. By representing the limiter as a pointwise convex combination of the Minmod and Superbee limiters, we enforce both second-order accuracy and TVD constraints at all stages of training. Our approach leverages gradient-based optimization through automatic differentiation, allowing direct backpropagation of errors from numerical solutions to the limiter parameters. We demonstrate the effectiveness of this method on various hyperbolic conservation laws, including the linear advection equation, Burgers' equation, and the one-dimensional Euler equations. Remarkably, a limiter trained solely on linear advection generalizes strongly, surpassing the accuracy of most classical flux limiters across a range of problems with shocks and discontinuities. The learned flux limiters can be readily integrated into existing computational fluid dynamics (CFD) codes, and the proposed methodology offers a flexible pathway to systematically develop and optimize flux limiters for complex flow problems. We integrate the neural flux limiter into OpenFOAM and benchmark the numerical results for three-dimensional transonic flow over a NACA0012 airfoil against established experimental data. These results reinforce the practical value of our approach and demonstrate its viability for real-world CFD applications.
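The convex-combination construction described above can be sketched in a few lines: because Minmod and Superbee bound the second-order TVD region, any pointwise weight in [0, 1] blending the two yields a limiter that stays TVD and second-order accurate by construction. The sketch below is illustrative only, assuming a hypothetical `weight_fn` standing in for the trained neural network; it is not the authors' implementation.

```python
import numpy as np

def minmod(r):
    # Classical Minmod limiter: phi(r) = max(0, min(1, r))
    return np.maximum(0.0, np.minimum(1.0, r))

def superbee(r):
    # Classical Superbee limiter: phi(r) = max(0, min(2r, 1), min(r, 2))
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def blended_limiter(r, weight_fn):
    # Pointwise convex combination of Minmod and Superbee.
    # For any w(r) in [0, 1], the result lies between the two classical
    # limiters, so the TVD and second-order constraints hold throughout
    # training; gradients can flow through w to the network parameters.
    w = weight_fn(r)  # hypothetical trained-network output in [0, 1]
    return w * minmod(r) + (1.0 - w) * superbee(r)

# Toy stand-in for the trained network: a sigmoid keeps w in (0, 1).
toy_weight = lambda r: 1.0 / (1.0 + np.exp(-r))

r = np.array([-1.0, 0.5, 1.0, 2.0])   # smoothness ratios at cell faces
phi = blended_limiter(r, toy_weight)  # limiter values used in the flux
```

In a differentiable solver, `toy_weight` would be replaced by the neural network, and the loss on the numerical solution would be backpropagated through `blended_limiter` into the network parameters.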

Publication: [1] https://arxiv.org/abs/2503.09625

Presenters

  • Chenyang Huang

    University of Michigan, Ann Arbor

Authors

  • Chenyang Huang

    University of Michigan, Ann Arbor

  • Amal S Sebastian

    University of Michigan, Ann Arbor

  • Venkat Viswanathan

    University of Michigan, Ann Arbor