Interpreted machine learning in fluid dynamics: Explaining relaminarization events in wall-bounded shear flows

POSTER

Abstract

Machine Learning (ML) is becoming increasingly popular in data-intensive research, including fluid dynamics. Traditionally, powerful ML algorithms such as neural networks or ensemble methods sacrifice interpretability, rendering them sub-optimal for tasks where understanding is essential. Here, we use the SHapley Additive exPlanations (SHAP) algorithm, a game-theoretic approach that explains the output of an ML model, to ascertain the extent to which specific physical processes drive the prediction of relaminarization events in wall-bounded parallel shear flows. We use a gradient boosted tree ensemble model for the prediction, reaching up to $90\%$ accuracy for a prediction horizon of $5$ Lyapunov times into the future. The flow is described by the established nine-mode model for wall-bounded parallel shear flows near the onset of turbulence, which comes with a physical and dynamical interpretation in terms of streaks, vortices and linear instabilities. Apart from the laminar profile, the mode associated with the streamwise vortex, a characteristic feature of wall-bounded parallel shear flows, consistently plays a major role in the prediction. We thus demonstrate that explainable AI methods can provide useful and human-interpretable insights for fluid dynamics.
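The game-theoretic idea behind SHAP can be illustrated without the full library: a feature's Shapley value is its average marginal contribution to the model output over all orderings in which features are "revealed", with absent features replaced by a background value. The sketch below computes exact Shapley values for a toy linear model by brute-force permutation; the weights, baseline, and instance are hypothetical placeholders, not values from the study (the authors used gradient boosted trees with the SHAP algorithm, not this toy).

```python
from itertools import permutations

# Hypothetical toy linear "model": f(x) = 2*x0 + 1*x1 - 3*x2.
weights = [2.0, 1.0, -3.0]
baseline = [0.5, 0.5, 0.5]   # assumed background (reference) input
x = [1.0, 0.0, 1.0]          # instance whose prediction we explain

def predict(features, present):
    # Features not in `present` are imputed with their baseline value.
    return sum(w * (features[i] if i in present else baseline[i])
               for i, w in enumerate(weights))

def shapley_values(instance):
    # Average each feature's marginal contribution over all orderings.
    n = len(instance)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        present = set()
        for i in order:
            before = predict(instance, present)
            present.add(i)
            phi[i] += predict(instance, present) - before
    return [p / len(perms) for p in phi]

phi = shapley_values(x)
# Efficiency property: attributions sum to f(x) - f(baseline).
total = predict(x, set(range(len(x)))) - predict(x, set())
print(phi, sum(phi), total)
```

For a linear model this recovers $\phi_i = w_i (x_i - b_i)$, and the attributions sum exactly to the difference between the prediction and the baseline prediction; tree-based SHAP exploits model structure to get the same quantities efficiently for ensembles.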

Authors

  • Martin Lellep

    Philipps University Marburg

  • Jonathan Prexl

    Technical University Munich

  • Moritz Linkmann

    University of Edinburgh

  • Bruno Eckhardt

    Philipps University Marburg