Symmetry Reduction for Deep Reinforcement Learning Active Flow Control

POSTER

Abstract

Deep reinforcement learning (DRL), a data-driven, model-free method for approximating optimal control policies with neural networks (NNs), has become a promising avenue for developing high-dimensional active flow control solutions. Many geometries of interest for flow control exhibit continuous and discrete symmetries which, when combined with spatially fixed actuators, implicitly require the NN to learn a redundant sub-policy for each symmetric configuration, hampering performance. We describe a method that circumvents this issue by framing the DRL problem in a discrete-symmetry-invariant subspace, and we test it by minimizing the dissipation of solutions of the Kuramoto-Sivashinsky (KS) equation, a system with translational and reflection symmetries that exhibits self-sustained spatiotemporal chaos. We accomplish this by reducing the symmetries of each state observation before it is input to the NN and then reintroducing those same symmetries to the output actuations. We demonstrate that our method yields substantial improvements in training data efficiency, policy robustness, and policy efficacy compared to a naive implementation of DRL. Finally, we observe that the learned policy quickly drives the system to a low-dissipation state and maintains it indefinitely.
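For context, the KS equation on a periodic domain of length L can be written in a common forced form (the actuation term f is an assumption about the control setup, not quoted from the abstract):

\[
\partial_t u = -u\,\partial_x u - \partial_x^2 u - \partial_x^4 u + f(x,t), \qquad u(x+L,t) = u(x,t).
\]

The unforced equation is equivariant under continuous translations x -> x + c and the reflection (x, u) -> (-x, -u), and the dissipation to be minimized is typically taken as D(t) = (1/L) \int_0^L (\partial_x^2 u)^2 \, dx.

The Python sketch below illustrates the kind of wrapper the abstract describes: map each observation into a fundamental domain of the discrete symmetry group, query the policy there, and apply the inverse group element to the resulting actuation. This is a minimal sketch under assumed conventions, not the authors' implementation; the names (n_act, reduce_state, restore_action) and the lexicographic tie-breaking rule are hypothetical.

```python
import numpy as np

# Assumed setup: state u sampled on n_grid periodic points, n_act equally
# spaced actuators, with n_grid % n_act == 0. The discrete symmetries that
# commute with fixed actuators are translations by multiples of the
# actuator spacing, optionally composed with the KS reflection.

def reflect_state(u):
    # KS reflection u(x) -> -u(-x) on a periodic grid (index i -> -i mod n)
    return -np.roll(u[::-1], 1)

def reflect_action(a):
    # The same reflection acting on the actuator amplitudes
    return -np.roll(a[::-1], 1)

def apply_to_state(u, g, spacing):
    # Group element g = (k, r): reflect if r, then translate by k spacings
    k, r = g
    v = reflect_state(u) if r else u
    return np.roll(v, k * spacing)

def reduce_state(u, n_act):
    """Map the observation into a fixed fundamental domain by choosing the
    group element whose image minimizes a fixed (here lexicographic) key."""
    spacing = len(u) // n_act
    elems = [(k, r) for k in range(n_act) for r in (0, 1)]
    images = [(apply_to_state(u, g, spacing), g) for g in elems]
    v, g = min(images, key=lambda p: tuple(np.round(p[0], 10)))
    return v, g

def restore_action(a, g):
    """Apply g^{-1} to the policy output so the actuation acts in the
    original frame: the inverse of (translate o reflect) is
    (reflect o translate-back), since the reflection is its own inverse."""
    k, r = g
    b = np.roll(a, -k)  # one actuator spacing on the grid = one index here
    return reflect_action(b) if r else b

# Usage inside an environment step, for any trained policy network:
#   v, g = reduce_state(u_obs, n_act)
#   a_reduced = policy(v)
#   a_physical = restore_action(a_reduced, g)
```

Because every symmetry-related copy of a state collapses to the same reduced observation, the NN only has to learn a single sub-policy on the fundamental domain, which is consistent with the data-efficiency gains the abstract reports.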

Authors

  • Kevin Zeng

    University of Wisconsin-Madison

  • Michael D. Graham

    University of Wisconsin-Madison