Improving turbulence control through explainable deep learning
ORAL
Abstract
Turbulent-flow control aims to develop strategies that effectively manipulate fluid systems, such as reducing drag in transportation and enhancing energy efficiency, both critical steps towards reducing global CO2 emissions. Deep reinforcement learning (DRL) offers novel tools to discover flow-control strategies, which we combine with our knowledge of the physics of turbulence. We integrate explainable deep learning (XDL) to objectively identify the coherent structures containing the most informative regions in the flow, with a DRL model trained to reduce them. The trained model targets the regions most relevant for sustaining turbulence and achieves a drag reduction higher than that of a model trained specifically to reduce drag, while using only half the power consumption. Moreover, the XDL model yields a larger drag reduction than models that target specific, classically identified coherent structures. This demonstrates that combining DRL with XDL can produce causal control strategies that precisely target the most influential features of turbulence. By directly addressing the core mechanisms that sustain turbulence, our approach offers a powerful pathway towards its efficient control, a long-standing challenge in physics with profound implications for energy systems, climate modeling and aerodynamics.
–
Publication: https://arxiv.org/pdf/2504.02354
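To illustrate the general idea described in the abstract (not the authors' actual implementation; see the publication above for that), the sketch below shows one way an XDL attribution field could be folded into a DRL-style reward: the reward penalises fluctuation energy only in the grid regions flagged as most informative. The function name `xdl_weighted_reward`, the quantile threshold `q` and the weight `alpha` are illustrative assumptions.

```python
# Minimal sketch, assuming an XDL attribution map is available per snapshot.
# Not the authors' method: a generic reward that penalises velocity-fluctuation
# energy inside the most "informative" regions identified by the XDL model.
import numpy as np

def xdl_weighted_reward(velocity_fluct, importance, q=0.9, alpha=1.0):
    """Negative fluctuation energy restricted to high-attribution regions.

    velocity_fluct : (3, nx, ny, nz) array of fluctuations u', v', w'
    importance     : (nx, ny, nz) array of XDL attribution scores
    q              : quantile defining the "most informative" regions (assumed)
    alpha          : penalty scaling (assumed)
    """
    # Local fluctuation energy 0.5 * (u'^2 + v'^2 + w'^2)
    energy = 0.5 * np.sum(velocity_fluct**2, axis=0)
    # Keep only the top (1 - q) fraction of points by attribution score
    mask = importance >= np.quantile(importance, q)
    # Reward is higher when energy in the targeted regions is lower
    return -alpha * energy[mask].mean()

# Usage with placeholder random fields on a small channel-flow-sized grid
rng = np.random.default_rng(0)
u_prime = rng.standard_normal((3, 64, 32, 64))
attribution = rng.random((64, 32, 64))
print(xdl_weighted_reward(u_prime, attribution))
```

In a DRL loop this reward would be evaluated after each actuation step, so the agent is driven to suppress turbulence specifically where the XDL model locates its most influential structures, rather than minimising drag directly.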
Presenters
- Miguel Beneitez (University of Manchester)
Authors
- Miguel Beneitez (University of Manchester)
- Andrés Cremades Botella (KTH Royal Institute of Technology)
- Luca Guastoni (Technical University Munich)
- Ricardo Vinuesa (University of Michigan, KTH Royal Institute of Technology)