Active Drag Reduction in Turbulent Open Channel Flow using Deep Reinforcement Learning
ORAL
Abstract
Deep reinforcement learning (DRL) is an optimization framework for discovering control laws. It has been successfully applied in fluid dynamics, for instance to turbulence modelling and drag reduction.
While most works in the literature have considered two-dimensional, simplified environments, in this work we focus on a fully turbulent, three-dimensional open channel flow with an uncontrolled friction Reynolds number Reτ = 180. A turbulent flow introduces additional challenges owing to the stochasticity of the system and its higher dimensionality, and a larger number of control parameters is considered.
In DRL algorithms the controller is referred to as the agent; its control policy is updated in order to maximize a reward. Since we aim to reduce the overall drag in the flow, we take the reduction of the shear stress at the wall as the reward.
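As an illustration, a minimal Python sketch of how such a reward could be evaluated from the instantaneous wall-shear stress; the function name, the uncontrolled reference value tau_w_ref and the exact normalization are assumptions for illustration, not details taken from the abstract:

    import numpy as np

    def drag_reward(dudy_wall, nu, rho, tau_w_ref):
        # Plane-averaged instantaneous wall-shear stress (assumed form):
        # tau_w = rho * nu * <du/dy> evaluated at the wall.
        tau_w = rho * nu * np.mean(dudy_wall)
        # Positive reward when the drag drops below the uncontrolled reference.
        return (tau_w_ref - tau_w) / tau_w_ref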
The control is performed by introducing blowing and suction at the wall: the agent controls the amplitude of the wall-normal velocity component imposed at the wall.
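A minimal sketch of how the agent output could be turned into such a boundary condition; the clipping amplitude and the zero-net-mass-flux constraint are assumptions commonly used in blowing/suction control, not details stated in the abstract:

    import numpy as np

    def wall_boundary_condition(v_agent, v_max=0.1):
        # Limit the actuation to an assumed maximum amplitude.
        v_wall = np.clip(v_agent, -v_max, v_max)
        # Remove the spatial mean so that no net mass flux is injected
        # through the wall (assumed constraint).
        return v_wall - np.mean(v_wall)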
The resulting DRL policies are compared with the opposition control of Choi et al. (1994), which represents the best-performing control strategy available in the literature.
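For reference, opposition control imposes at the wall the opposite of the wall-normal velocity sensed at a detection plane inside the buffer layer; a minimal sketch, where the detection-plane height (often quoted around y+ ≈ 10-15) is an assumption rather than a detail from the abstract:

    def opposition_control(v_detection):
        # Choi et al. (1994): v_wall(x, z) = -v(x, y_detection, z),
        # counteracting the sweeps and ejections of the near-wall cycle.
        return -v_detection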
Presenters
-
Luca Guastoni
FLOW, KTH Engineering Mechanics
Authors
-
Luca Guastoni
FLOW, KTH Engineering Mechanics
-
Jean Rabault
Norwegian Meteorological Institute
-
Ali Ghadirzadeh
School Elect. Eng. and Comp. Sci., KTH
-
Philipp Schlatter
FLOW, KTH Engineering Mechanics
-
Hossein Azizpour
School Elect. Eng. and Comp. Sci., KTH
-
Ricardo Vinuesa
FLOW, KTH Engineering Mechanics