Adjoint-based neural network optimization for sub-grid-scale combustion modeling
ORAL
Abstract
We consider sub-grid-scale models that leverage machine learning. Unlike a direct approach, in which training attempts to reduce the mismatch between the sub-grid-scale model's output and an accepted reference (e.g., DNS data), our approach exposes the training process to the governing equations through their adjoints. Specifically, we embed a standard deep neural network in the governing equations and train by backpropagating through the network while solving the adjoint equations, which yields end-to-end gradients of the loss with respect to the neural network parameters. This leverages the physics embodied in the governing equations, lending greater robustness to the learned model. In addition, the training target is distinct from the closure itself, so the model can be trained against any flow observable. The approach has previously been demonstrated for isotropic turbulence and free-shear-flow turbulence; here we extend it to the greater coupled-physics challenge of a planar premixed flame propagating through isotropic turbulence.
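The training loop the abstract describes, a neural closure embedded in a discretized governing equation with parameter gradients obtained by a backward adjoint sweep, can be sketched in miniature. The following is a hypothetical illustration, not the authors' solver: a 1-D periodic diffusion equation with a small pointwise neural closure f_theta(u), advanced by explicit Euler, with the discrete adjoint integrated backward to produce end-to-end gradients of a terminal-state mismatch. The toy PDE, closure architecture, and all names are assumptions made for illustration.

```python
import numpy as np

# Toy setup (illustrative only): 1-D periodic diffusion with a
# pointwise neural-network closure f_theta(u) added to the RHS.
rng = np.random.default_rng(0)
nx, H = 32, 4                      # grid points, hidden width
dx = 2.0 * np.pi / nx
dt, n_steps, nu = 1e-3, 50, 0.1
x = dx * np.arange(nx)

# Hypothetical closure parameters: u_i -> w2 . tanh(w1*u_i + b1) + b2
w1, b1 = rng.normal(size=H), rng.normal(size=H)
w2, b2 = 0.1 * rng.normal(size=H), 0.0

def lap(u):
    # Periodic second-difference Laplacian
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

def closure(u, w1, b1, w2, b2):
    h = np.tanh(np.outer(u, w1) + b1)          # (nx, H) hidden activations
    return h, h @ w2 + b2                      # activations, closure term

def forward(u0, w1, b1, w2, b2):
    # Explicit-Euler forward solve; store the trajectory for the adjoint
    traj = [u0]
    u = u0
    for _ in range(n_steps):
        _, f = closure(u, w1, b1, w2, b2)
        u = u + dt * (nu * lap(u) + f)
        traj.append(u)
    return traj

def loss_and_grads(u0, u_target, w1, b1, w2, b2):
    traj = forward(u0, w1, b1, w2, b2)
    uN = traj[-1]
    J = 0.5 * dx * np.sum((uN - u_target) ** 2)   # terminal-state mismatch
    lam = dx * (uN - u_target)                    # terminal adjoint dJ/du^N
    g = dict(w1=np.zeros_like(w1), b1=np.zeros_like(b1),
             w2=np.zeros_like(w2), b2=0.0)
    # Backward (adjoint) sweep: accumulate dJ/dtheta at each step,
    # then march the adjoint variable lambda back one step.
    for u in reversed(traj[:-1]):
        h, _ = closure(u, w1, b1, w2, b2)
        s = (1.0 - h**2) * w2                     # (nx, H): tanh' * w2
        g['w1'] += dt * (s * u[:, None]).T @ lam
        g['b1'] += dt * s.T @ lam
        g['w2'] += dt * h.T @ lam
        g['b2'] += dt * np.sum(lam)
        dfdu = s @ w1                             # diagonal closure Jacobian
        lam = lam + dt * (nu * lap(lam) + dfdu * lam)
    return J, g
```

Because the backward sweep is the exact reverse-mode derivative of the discretized forward solve, the returned gradients match finite differences to roundoff and can drive any gradient-based optimizer; the loss here targets the final state, but, as the abstract notes, any flow observable could stand in its place.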
Presenters
-
Seungwon Suh
University of Illinois at Urbana-Champaign
Authors
-
Seungwon Suh
University of Illinois at Urbana-Champaign
-
Jonathan F MacArt
University of Notre Dame
-
Luke Olson
University of Illinois at Urbana-Champaign
-
Jonathan B Freund
University of Illinois at Urbana-Champaign