Interpretability of Machine Learning Models for the Reynolds Stress Tensor in Reynolds-Averaged Navier-Stokes Simulations

ORAL

Abstract

Data-driven approaches to turbulence modeling have grown in popularity because Reynolds-Averaged Navier-Stokes (RANS) models remain industrial workhorses and Direct Numerical Simulation databases for complex turbulent flows are becoming available for use as training sets. Applications of modern machine learning architectures have improved predictions of the Reynolds stress anisotropy tensor over standard two-equation RANS models, but they suffer from black-box opacity. Interpretable machine learning predictions are needed to understand the high-dimensional input feature space, advance physical intuition, establish confidence in model generalizability, and develop models that are robust and easy to train. In this work we apply several interpretability methods to the Tensor Basis Neural Network (TBNN) architecture developed by Ling et al. (J. Fluid Mech., 2016). The TBNN structure is exploited to understand the physical effect of each basis tensor term in shifting the state of anisotropy. Sensitivity maps and importance rankings are also obtained for the input features. The methodology is first validated on a network trained to reproduce the k-ω model. Results are then presented for a turbulent square duct flow and the flow over a wall-mounted cube.
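
To make the TBNN structure concrete, the following minimal NumPy sketch illustrates the output layer described by Ling et al.: a network maps five scalar invariants to coefficients g^(n), which multiply a fixed tensor basis T^(n) built from the normalized mean strain-rate and rotation-rate tensors. Function and variable names are illustrative, and the basis is truncated to the first four of Pope's ten tensors for brevity.

    import numpy as np

    def tensor_basis(S, R):
        """First four of the ten integrity-basis tensors (Pope, 1975),
        built from the normalized mean strain-rate tensor S and
        rotation-rate tensor R (both 3x3). Truncated for brevity."""
        I = np.eye(3)
        T1 = S
        T2 = S @ R - R @ S
        T3 = S @ S - np.trace(S @ S) / 3.0 * I
        T4 = R @ R - np.trace(R @ R) / 3.0 * I
        return np.stack([T1, T2, T3, T4])

    def invariants(S, R):
        """Five scalar invariants that form the network inputs."""
        return np.array([
            np.trace(S @ S),
            np.trace(R @ R),
            np.trace(S @ S @ S),
            np.trace(R @ R @ S),
            np.trace(R @ R @ S @ S),
        ])

    def tbnn_anisotropy(g, T):
        """Linear combination b_ij = sum_n g^(n) T^(n)_ij.
        g : (N,) coefficients predicted by the network from the invariants
        T : (N, 3, 3) tensor basis returned by tensor_basis()."""
        return np.einsum('n,nij->ij', g, T)

Because each coefficient g^(n) multiplies a known tensor, the contribution of every term to the predicted anisotropy state can be inspected directly; this is the structural property the interpretability analysis exploits.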

Presenters

  • Andrew J Banko

    Stanford University

Authors

  • Andrew J Banko

    Stanford University

  • David S Ching

    Stanford University

  • Julia Ling

    Citrine Informatics

  • John Kelly Eaton

    Stanford University