Flow control with latent dynamics model-based reinforcement learning

ORAL

Abstract

Because the strong nonlinearity and high dimensionality of fluid dynamics make classical control strategies difficult to apply to flow control, Deep Reinforcement Learning has recently attracted interest. Most applications thus far have used Model-Free Reinforcement Learning (MFRL) to train policies directly from computational fluid dynamics (CFD) data. However, the intensive computational demand of MFRL, driven by the high dimensionality of CFD, poses significant limitations in complex flow environments. To address this limitation, we propose a Model-Based Reinforcement Learning (MBRL) strategy in which a reduced-order model is trained from CFD data via two key tools. First, a Physics-Augmented Autoencoder learns to compress flow field snapshots to a very low-dimensional latent space. Subsequently, a Latent Dynamics Model (LDM) learns to predict the dynamics in this space, thereby enabling accurate time-series forecasting of flow variables. We demonstrate the LDM's robustness and generalizability through accurate predictions in two distinct scenarios: a pitching airfoil in a highly disturbed environment and a Vertical-Axis Wind Turbine in a disturbance-free environment. We then integrate the LDM into an MBRL framework applied to the disturbed airfoil scenario, with the objective of minimizing the lift variation about a prescribed reference lift via pitch control. We show that our approach enables efficient policy learning within the latent space, significantly reducing the computational demand compared to MFRL.
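To make the two-stage pipeline concrete, below is a minimal sketch of an autoencoder paired with a latent dynamics model. This is an illustrative assumption, not the authors' implementation: the abstract does not specify the architecture, the physics augmentation, the latent dimension, or the framework, so the class names (`Autoencoder`, `LatentDynamicsModel`), layer sizes, `latent_dim=4`, and the plain (non-physics-augmented) encoder here are all hypothetical choices in a PyTorch style.

```python
# Illustrative sketch (not the authors' code): an autoencoder compresses flow
# snapshots to a low-dimensional latent space, and a latent dynamics model
# advances the latent state given a control input (e.g., a pitch command).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compresses a flattened flow snapshot to a small latent vector."""
    def __init__(self, state_dim: int, latent_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 256), nn.GELU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.GELU(),
            nn.Linear(256, state_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentDynamicsModel(nn.Module):
    """Predicts the next latent state from the current latent state and action."""
    def __init__(self, latent_dim: int = 4, action_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 128), nn.GELU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z, a):
        # Residual update: z_{t+1} = z_t + f(z_t, a_t)
        return z + self.net(torch.cat([z, a], dim=-1))

def loss_step(ae, ldm, x_t, a_t, x_next):
    """One training step: snapshot reconstruction loss plus a one-step
    latent prediction loss (hypothetical loss design)."""
    x_rec, z_t = ae(x_t)
    _, z_next = ae(x_next)
    z_pred = ldm(z_t, a_t)
    rec = nn.functional.mse_loss(x_rec, x_t)
    dyn = nn.functional.mse_loss(z_pred, z_next.detach())
    return rec + dyn
```

Once such a model is trained, the MBRL speedup described in the abstract comes from rolling out the cheap latent model in place of the CFD solver during policy optimization; the specific RL algorithm and reward used by the authors are not stated in the abstract.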

Presenters

  • Zhecheng Liu

    University of California, Los Angeles

Authors

  • Zhecheng Liu

    University of California, Los Angeles

  • Jeff D Eldredge

    University of California, Los Angeles