Discovering novel control strategies for turbulent flows through deep reinforcement learning
ORAL
Abstract
In this work we introduce a deep-reinforcement-learning (DRL) environment to design and benchmark control strategies aimed at reducing drag in turbulent fluid flows through a channel and over a flat plate. The environment provides a framework for computationally efficient, parallelized, high-fidelity fluid simulations, ready to interface with established DRL agent-programming interfaces. This allows both testing existing DRL algorithms against a challenging task and advancing our knowledge of a complex, turbulent physical system that has been a major topic of research for over two centuries and remains, even today, the subject of many unanswered questions. The control is applied in the form of blowing and suction at the wall, while the observable state is configurable, allowing the user to choose different variables, such as velocity and pressure, at different locations in the domain. Given the complex nonlinear nature of turbulent flows, the control strategies proposed so far in the literature are physically grounded but too simple. DRL, by contrast, enables leveraging the high-dimensional data that can be sampled from flow simulations to design advanced control strategies. In an effort to establish a benchmark for testing data-driven control strategies, we compare opposition control, a state-of-the-art turbulence-control strategy from the literature, with a commonly used DRL algorithm, deep deterministic policy gradient (DDPG). Our results show that DRL leads to 43% and 30% drag reduction in a minimal and a larger channel (at a friction Reynolds number of 180), respectively, outperforming classical opposition control by around 20 and 10 percentage points, respectively. We also discuss how the control policy changes for different sensing planes in the wall-normal direction and with increasing Reynolds number, as well as the application of the framework to zero-pressure-gradient (ZPG) turbulent boundary layers (TBLs).
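The opposition-control baseline mentioned above can be summarized in a few lines: the wall-normal velocity sensed on a detection plane above the wall is applied at the wall with opposite sign, counteracting the sweeps and ejections of the near-wall cycle. The following is a minimal sketch of that idea, not the authors' implementation; the function name, the `alpha` amplitude parameter, and the array-based interface are illustrative assumptions.

```python
import numpy as np

def opposition_control(v_plane, alpha=1.0):
    """Sketch of classical opposition control.

    v_plane : 2D array of wall-normal velocity v(x, y_s, z) sampled on a
              detection plane at a height y_s above the wall (the literature
              typically places it in the near-wall region, around y+ ~ 10-15).
    alpha   : control amplitude (hypothetical tuning parameter).
    Returns the blowing/suction distribution v_wall(x, z) to apply at the wall.
    """
    # Oppose the sensed wall-normal velocity.
    v_wall = -alpha * v_plane
    # Remove the mean so the actuation has zero net mass flux through the
    # wall, as is standard for blowing/suction boundary conditions.
    v_wall -= v_wall.mean()
    return v_wall
```

A DRL agent such as DDPG replaces this fixed, linear sensing-to-actuation map with a learned nonlinear policy over the configurable observable state.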
–
Publication: L. Guastoni, J. Rabault, P. Schlatter, H. Azizpour and R. Vinuesa, "Deep reinforcement learning for turbulent drag reduction in channel flows", Eur. Phys. J. E 46, 27 (2023).
Presenters
-
Ricardo Vinuesa
KTH Royal Institute of Technology
Authors
-
Ricardo Vinuesa
KTH Royal Institute of Technology
-
Luca Guastoni
KTH Royal Institute of Technology
-
Jean Rabault
University of Oslo
-
Hossein Azizpour
KTH Royal Institute of Technology