Deep Reinforcement Learning for Active Drag Reduction in Wall Turbulence
ORAL
Abstract
In this work, a numerical simulation is designed as a first step towards applying deep-reinforcement-learning (DRL) control in wall-bounded turbulent flows. A two-dimensional channel with a recirculation bubble on the lower wall is considered as the first test case. The goal of the control is to minimize the size of the recirculation region, characterized by the reattachment length. Since the control introduces perturbations in the velocity field, the reattachment length is measured using a moving average over time.
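As an illustration of the measurement described above, a minimal sketch of a time moving average applied to instantaneous reattachment-length samples (function name and window size are hypothetical, not from the abstract):

```python
import numpy as np

def moving_average_reattachment(x_r_history, window):
    """Smooth instantaneous reattachment-length samples with a
    sliding window, filtering out actuation-induced fluctuations."""
    x = np.asarray(x_r_history, dtype=float)
    if len(x) < window:
        return x.mean()  # not enough samples yet: plain average
    kernel = np.ones(window) / window
    # most recent windowed average
    return np.convolve(x, kernel, mode="valid")[-1]
```

The smoothed value, rather than the instantaneous one, would then be used to evaluate the effect of the control.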
Existing state-of-the-art DRL algorithms are tested in this setting in order to discover novel control strategies beyond traditional opposition control. In particular, we use proximal policy optimization (PPO) to determine the time variation of the amplitude of a volume forcing positioned upstream of the bubble. The magnitude of the forcing is limited in order to force the agent to learn an energy-efficient control policy.
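A minimal sketch of how a magnitude-limited action and an energy-aware reward could be set up for such an agent (all names and the specific bound and penalty values are hypothetical assumptions, not taken from the abstract):

```python
import numpy as np

A_MAX = 0.1   # assumed cap on the forcing amplitude
BETA = 0.01   # assumed weight of the actuation-cost penalty

def bounded_amplitude(raw_action):
    """Squash the agent's unbounded policy output into
    [-A_MAX, A_MAX], so the forcing magnitude stays limited."""
    return A_MAX * np.tanh(raw_action)

def reward(reattachment_length, amplitude):
    """Reward shrinking the recirculation bubble while
    penalizing the actuation energy spent to do so."""
    return -reattachment_length - BETA * amplitude**2
```

Bounding the action inside the environment, rather than relying on the policy alone, guarantees the energy constraint is respected throughout training.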
Once an effective control policy is obtained in 2D, we will consider the more challenging case of drag reduction in a 3D fully-turbulent channel flow.
Presenters
-
Luca Guastoni
SimEx/FLOW, KTH Engineering Mechanics
Authors
-
Luca Guastoni
SimEx/FLOW, KTH Engineering Mechanics
-
Ali Ghadirzadeh
Robotics, Perception and Learning (RPL), KTH Royal Institute of Technology
-
Jean Rabault
Norwegian Meteorological Institute, University of Oslo
-
Philipp Schlatter
SimEx/FLOW, KTH Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden
-
Hossein Azizpour
Robotics, Perception and Learning (RPL), KTH Royal Institute of Technology
-
Ricardo Vinuesa
SimEx/FLOW, KTH Engineering Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden