
Closed-loop optimal control for shear flows using reinforcement learning

POSTER

Abstract

Numerous research efforts have been devoted to the application of control theory to fluid flows over the last decades. Despite some success with model-based techniques, the limitations imposed by the model often result in moderate performance under actual conditions. A possible alternative is offered by fully data-driven methods, in which no physical model is employed. Reinforcement Learning (RL) algorithms enable such a strategy while preserving the optimality of the control solutions. This class of algorithms can be regarded as a fully data-driven counterpart of discrete-in-time optimal control strategies based on the Bellman equation. When neural networks are employed as the approximation format, the framework is referred to as deep RL (DRL). In this contribution, we clarify the connection between RL and optimal control through our recent results on the control of the Kuramoto-Sivashinsky (KS) equation. We focus on the application of the Deep Deterministic Policy Gradient (DDPG) algorithm. We show that, by means of localized actuation and partial knowledge of the state, it is possible to control the KS equation in its chaotic regime. These results are put in perspective by comparing the DRL policy with standard optimal controllers.
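To make the stated connection between DRL and optimal control concrete, the two equations below pair the controlled KS dynamics with the discrete-in-time Bellman optimality condition that the RL critic approximates from data. The nondimensional form, the localized actuator supports g_i, and all symbol names are illustrative assumptions, not necessarily the authors' exact setup.

    % Controlled Kuramoto-Sivashinsky equation (assumed standard
    % nondimensional form; g_i are illustrative localized supports,
    % a_i(t) the actuation amplitudes):
    \[
      \partial_t v + v\,\partial_x v + \partial_{xx} v + \partial_{xxxx} v
        = \sum_{i=1}^{n_a} a_i(t)\, g_i(x)
    \]
    % Discrete-in-time Bellman optimality equation; DRL approximates its
    % fixed point Q* with a neural network, without a model of the dynamics:
    \[
      Q^{\ast}(s_t, a_t) = r(s_t, a_t) + \gamma \max_{a'} Q^{\ast}(s_{t+1}, a')
    \]

Since DDPG is the algorithm named above, a minimal PyTorch sketch of one update step follows; the network sizes, the dimensions n_obs (partial state measurements) and n_a (localized actuators), and the hyperparameters are assumptions for illustration, not the configuration used in the study.

    # Minimal DDPG update step (sketch; hypothetical dimensions and hyperparameters).
    import torch
    import torch.nn as nn

    n_obs, n_a, gamma, tau = 16, 4, 0.99, 5e-3  # assumed sizes

    def mlp(in_dim, out_dim):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, out_dim))

    actor, critic = mlp(n_obs, n_a), mlp(n_obs + n_a, 1)
    actor_t = mlp(n_obs, n_a); actor_t.load_state_dict(actor.state_dict())
    critic_t = mlp(n_obs + n_a, 1); critic_t.load_state_dict(critic.state_dict())
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def ddpg_update(s, a, r, s2):
        # Critic: regress Q(s, a) onto the Bellman target r + gamma * Q'(s', mu'(s')).
        with torch.no_grad():
            y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=-1))
        q = critic(torch.cat([s, a], dim=-1))
        loss_c = nn.functional.mse_loss(q, y)
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        # Actor: deterministic policy gradient, i.e. ascend the critic.
        loss_a = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        # Slowly track the online networks with the targets (Polyak averaging).
        for net, net_t in ((actor, actor_t), (critic, critic_t)):
            for p, p_t in zip(net.parameters(), net_t.parameters()):
                p_t.data.mul_(1 - tau).add_(tau * p.data)

    # Usage with a random batch standing in for replay-buffer samples:
    s, a = torch.randn(32, n_obs), torch.randn(32, n_a)
    r, s2 = torch.randn(32, 1), torch.randn(32, n_obs)
    ddpg_update(s, a, r, s2)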

Authors

  • Onofrio Semeraro

LIMSI, CNRS, Université Paris-Saclay

  • Michele Alessandro Bucci

TAU Team, INRIA Saclay, LRI, Université Paris-Sud, France

  • Lionel Mathelin

LIMSI, CNRS, Université Paris-Saclay