Interpreted machine learning in fluid dynamics: Explaining relaminarisation events in wall-bounded shear flows

ORAL

Abstract

Powerful machine learning (ML) methods are notoriously difficult to interpret. Here, we use ML methods to predict relaminarisation events in wall-bounded shear flows and obtain human-interpretable information through an explainable artificial intelligence method, the game-theoretic Shapley additive explanations (SHAP) algorithm (Lundberg & Lee, Advances in Neural Information Processing Systems, 4765 (2017)). As a proof of concept, we consider a low-dimensional model based on the self-sustaining process (SSP), where each data feature has a clear physical and dynamical interpretation in terms of representative features of the near-wall dynamics. SHAP determines that only the laminar profile, the streamwise vortex and a specific streak instability play a major role in the prediction of relaminarisation events. The method is applicable to larger datasets: in minimal plane Couette flow, the prediction is based on proxies for linear streak instabilities. The SHAP analysis thus suggests that the break-up of the self-sustaining cycle is connected with a suppression of streak instabilities.
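To illustrate the game-theoretic quantity SHAP approximates, the sketch below computes exact Shapley values for a toy 3-feature "model" in pure Python. The feature names, weights and baseline are illustrative placeholders, not values from the study; for a linear predictor, the Shapley value of feature i reduces to w_i · (x_i − baseline_i), which makes the result easy to check.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for an n-player cooperative game.

    `value` maps a set of player (feature) indices to the coalition payoff,
    here the model output with only those features switched on.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# Toy linear predictor over 3 hypothetical features; absent features revert
# to a baseline value (the SHAP "reference" state).
weights = [2.0, -1.0, 0.5]   # illustrative, e.g. profile / vortex / streak mode
x = [1.0, 1.0, 1.0]          # instantaneous feature values
baseline = [0.0, 0.0, 0.0]   # reference state

def v(S):
    return sum(weights[i] * (x[i] if i in S else baseline[i]) for i in range(3))

phi = shapley_values(v, 3)
# Linear case: phi[i] == weights[i] * (x[i] - baseline[i])
```

In practice one would use the `shap` Python package rather than this brute-force enumeration, which costs O(2^n) coalition evaluations; the sketch only makes the attribution rule explicit.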

Publication: M. Lellep, J. Prexl, B. Eckhardt, M. Linkmann, Interpreted machine learning in fluid dynamics: Explaining relaminarisation events in wall-bounded shear flows, J. Fluid Mech. 942, A2 (2022)

Presenters

  • Moritz Linkmann

    School of Mathematics, University of Edinburgh

Authors

  • Moritz Linkmann

    School of Mathematics, University of Edinburgh

  • Martin Lellep

    School of Physics and Astronomy, University of Edinburgh

  • Jonathan Prexl

    Department of Civil, Geo and Environmental Engineering, Technical University of Munich, Germany

  • Bruno Eckhardt

    Philipps-Universität Marburg