Deep reinforcement learning for active separation control in a turbulent boundary layer
ORAL
Abstract
Active flow control to reduce the recirculation bubble (RB) in a separated turbulent boundary layer is investigated using deep reinforcement learning (DRL). The RB is induced by imposing wall-normal blowing and suction at the top of the domain, which generates the separation. The separation control is performed by several control surfaces in the form of rectangular jets placed upstream of the RB, aligned along the streamwise direction and parallel to one another in the spanwise direction. These jets impose a wall-normal velocity whose magnitude is set by the DRL agent. The actions proposed by the DRL agent are based on partial observations of the velocity components in the RB region and aim to maximize the reward accumulated over time. In this case, the wall shear stress is used in the reward as a proxy for the RB length. Since the flow is periodic in the spanwise direction, the domain can be divided into invariant subdomains, which allows us to use the multi-agent reinforcement learning (MARL) technique. This technique exploits the invariance of the domain to generate multiple explorations within a single large-eddy simulation. A comparison with classical control techniques for reducing the RB size is also reported, highlighting the improvements that DRL brings to this case.
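To illustrate the MARL idea described above, the following is a minimal sketch (not the authors' implementation): a single flow state is split into spanwise-invariant subdomains, each treated as a pseudo-environment that supplies one partial observation, one jet action, and one wall-shear-stress reward, so that a single simulation produces several learning experiences. All names and values (n_agents, nx, nz, viscosity, wall distance) are illustrative assumptions.

```python
import numpy as np

n_agents = 4              # jets / invariant spanwise subdomains (assumed)
nx, nz = 64, 32           # streamwise x spanwise points of a toy near-wall plane

rng = np.random.default_rng(0)
u_wall = rng.normal(1.0, 0.1, size=(nx, nz))   # placeholder near-wall streamwise velocity


def observations(field, n_agents):
    """Partial observations: each agent only sees its own spanwise slice."""
    return np.array_split(field, n_agents, axis=1)


def reward(field_slice):
    """Reward proxy: mean wall shear stress over the agent's subdomain
    (a larger mean tau_w is taken to indicate a shorter recirculation bubble)."""
    mu, dy = 1.0e-3, 1.0e-2            # illustrative viscosity and wall distance
    tau_w = mu * field_slice / dy      # crude finite-difference estimate of du/dy at the wall
    return tau_w.mean()


obs = observations(u_wall, n_agents)
actions = [rng.uniform(-1.0, 1.0) for _ in obs]   # wall-normal jet velocities from a (random) policy
rewards = [reward(o) for o in obs]
print(rewards)
```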
Presenters
-
Francisco Alcántara-Ávila
KTH Royal Institute of Technology
Authors
-
Francisco Alcántara-Ávila
KTH Royal Institute of Technology
-
Bernat Font
Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS), Spain
-
Jean Rabault
Norwegian Meteorological Institute, Norway
-
Ricardo Vinuesa
KTH Royal Institute of Technology
-
Oriol Lehmkuhl
Barcelona Supercomputing Center - Centro Nacional de Supercomputación (BSC-CNS), Spain