
Control by Deep Reinforcement Learning of a separated flow

POSTER

Abstract

In the closed-loop control framework, a dynamical model is often used to predict the effect of a given control action on the system. Specifically, model-based control approaches rely on a physical model derived from first-principle equations. However, in the general case, a useful model is not always available. Besides systems whose governing equations are poorly known, there are situations where solving the governing equations is too slow with respect to the dynamics at play. While reduced-order models may help, they can lose accuracy when control is applied, resulting in poor performance. A different line of control strategy relies on a data-driven approach: no model is assumed to be known and the control command is based on measurements only. In this contribution, we consider a reinforcement learning strategy for the closed-loop nonlinear control of separated flows. Deep neural networks are used to approximate both the control objective and the control policy. We consider the flow over a 2D open cavity in a realistic setting where one relies only on a few pressure sensors at the wall. The performance of the control strategy is demonstrated on the damping of the Kelvin-Helmholtz vortices of the shear layer.
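As an illustration of the kind of architecture described above (deep networks approximating both the control objective and the control policy from wall-pressure measurements only), the sketch below shows a minimal actor-critic pair. It is not the authors' implementation; the framework (PyTorch), the number of sensors, the network sizes, and the scalar actuation command are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): an actor-critic agent whose
# observation is a vector of wall-pressure sensor readings and whose action is
# a scalar control command. Sensor count, layer widths, and action bounds are
# illustrative assumptions.
import torch
import torch.nn as nn

N_SENSORS = 4   # assumed number of wall-pressure sensors
N_ACTIONS = 1   # assumed scalar actuation command

class Actor(nn.Module):
    """Control policy: maps pressure measurements to a bounded control action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS), nn.Tanh(),  # keep actuation bounded
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Control objective: estimates the value of an observation/action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS + N_ACTIONS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

# Usage: one closed-loop step with stand-in sensor values.
pressures = torch.randn(1, N_SENSORS)   # placeholder for wall-pressure readings
action = Actor()(pressures)             # control command from the policy
value = Critic()(pressures, action)     # estimated control objective
```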

Authors

  • Thibaut Guegan

    Institut Pprime (CNRS, Universite de Poitiers, ISAE-ENSMA), France

  • Michele Alessandro Bucci

    TAU Team, INRIA Saclay, LRI, Universite Paris-Sud, France

  • Onofrio Semeraro

    LIMSI, CNRS, Universite Paris-Saclay, France

  • Laurent Cordier

    Institut Pprime (CNRS, Universite de Poitiers, ISAE-ENSMA), France

  • Lionel Mathelin

    LIMSI, CNRS, Universite Paris-Saclay, France