Reduced-Order Models for Reinforcement Learning Control of Turbulent Plane Couette Flow
ORAL
Abstract
Key challenges in active control of turbulent flows include handling high-dimensional states, implementing control strategies in real time, and discovering complex control strategies. Reinforcement learning (RL) is a promising machine learning method that has overcome these challenges in fluids problems such as flow around a cylinder. In RL, an agent iteratively interacts with an environment during an offline training phase to learn a control policy, which can then be applied quickly online. Unfortunately, for computationally demanding simulations, such as direct numerical simulations (DNS), this training process becomes prohibitively expensive. We overcome this challenge by building a data-driven reduced-order model (ROM) of the system and training an RL policy to control it. The ROM is trained in two phases: first, the dimension is reduced with an autoencoder; then, the dynamics are learned with a neural ordinary differential equation. This ROM dramatically reduces the dimension while maintaining high fidelity. We demonstrate the method on turbulent Couette flow controlled by two slot jets, with the aim of minimizing drag while penalizing control actuation. The RL agent, trained on the model, learns a strategy that effectively relaminarizes trajectories of the full DNS.
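The two-phase ROM pipeline described above (autoencoder dimension reduction, then latent dynamics via a neural ODE, rolled out under a control policy) can be sketched as follows. This is a minimal illustration with made-up dimensions and random linear maps standing in for the trained networks, not the authors' implementation; the names `encode`, `decode`, and `latent_rhs`, and all sizes, are hypothetical.

```python
import numpy as np

# Phase 1: an autoencoder maps the high-dimensional DNS state u (dim N)
# to a latent state h (dim d << N). Phase 2: a neural ODE dh/dt = f(h, a)
# advances the latent dynamics under a control action a (the jet amplitudes).
rng = np.random.default_rng(0)
N, d, n_act = 1024, 16, 2          # hypothetical state dim, latent dim, number of jets

# Stand-ins for the trained encoder/decoder (the real ROM uses deep networks).
W_enc = rng.standard_normal((d, N)) / np.sqrt(N)
W_dec = rng.standard_normal((N, d)) / np.sqrt(d)
A = -0.1 * np.eye(d)               # toy stable latent dynamics
B = rng.standard_normal((d, n_act))

def encode(u):
    return W_enc @ u

def decode(h):
    return W_dec @ h

def latent_rhs(h, a):
    # In the actual method this right-hand side is a neural network.
    return A @ h + B @ a

def rollout(u0, actions, dt=0.01):
    """Advance the ROM in latent space with forward Euler, one action per step."""
    h = encode(u0)
    for a in actions:
        h = h + dt * latent_rhs(h, a)
    return decode(h)

# An RL agent trained on this cheap surrogate queries rollout() instead of the DNS.
u0 = rng.standard_normal(N)
actions = [np.zeros(n_act)] * 100  # e.g., actions chosen by the learned policy
u_pred = rollout(u0, actions)
print(u_pred.shape)
```

The key design point is that every environment step during RL training costs only a small latent-space ODE integration rather than a full DNS time step, which is what makes the offline training phase affordable.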
Presenters
-
Alec Linot
University of Wisconsin - Madison
Authors
-
Alec Linot
University of Wisconsin - Madison
-
Kevin Zeng
University of Wisconsin - Madison
-
Michael D Graham
University of Wisconsin - Madison