Gradient-Enhanced Reinforcement Learning for Turbulence Control
ORAL
Abstract
Automatic differentiation is a key enabler of modern machine learning, making it possible to efficiently compute the gradients required to tune neural network parameters to fit data. The same principles have motivated the creation of automatically differentiable simulation environments, which are driving progress in a range of fields including computational physics and robotics. The power of differentiable simulators comes from their ability to compute gradients of output quantities of interest with respect to tunable input variables, even with complex physical processes in the middle. These gradients can then be integrated into gradient-based optimization schemes. However, this benefit has not yet been fully realized in the field of fluid dynamics, due to the lack of differentiable fluid simulators with control capabilities. In this work, we demonstrate the benefit of incorporating differentiable fluid dynamics simulators into reinforcement learning for turbulence control. Specifically, we show that leveraging differentiability improves sample efficiency and yields control policies that classical reinforcement learning would not otherwise discover. Our approach is demonstrated on two fluid environments with different challenges: the chaotic two-dimensional Kolmogorov flow, with the objective of suppressing all extreme energy dissipation events; and a three-dimensional turbulent flow in a channel, with the goal of drag reduction. Our method achieves effective control laws with minimal training.
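The core idea described above, differentiating through a simulator to obtain gradients of an objective with respect to control parameters, can be illustrated with a minimal sketch. This is not the authors' code or environment; it uses JAX autodiff on a toy scalar "flow" with a linear feedback gain `k` (all names here are illustrative assumptions), standing in for a differentiable fluid solver and a control policy.

```python
import jax

DT = 0.1  # time step of the toy solver (illustrative value)

def step(x, k):
    # One explicit-Euler step of a toy scalar "flow" under feedback control u = -k*x.
    # In the paper's setting this would be one step of a differentiable fluid solver.
    return x + DT * (-k * x)

def rollout_cost(k, x0=1.0, n=10):
    # Accumulated "energy" of the controlled trajectory, the quantity to suppress.
    x = x0
    cost = 0.0
    for _ in range(n):
        x = step(x, k)
        cost = cost + x ** 2
    return cost

# Exact gradient of the rollout objective w.r.t. the control gain,
# obtained by backpropagating through every simulation step.
grad_cost = jax.grad(rollout_cost)

k = 2.0
for _ in range(100):
    k = k - 0.5 * grad_cost(k)  # plain gradient descent on the feedback gain
```

Because the whole rollout is differentiable, the optimizer receives an exact gradient signal per episode instead of relying solely on sampled returns, which is the mechanism behind the sample-efficiency gains the abstract describes.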
Presenters
-
Sajeda Mokbel
University of Washington
Authors
-
Sajeda Mokbel
University of Washington
-
Christian Lagemann
University of Washington
-
Esther Lagemann
AI Institute in Dynamic Systems, University of Washington
-
Steven L. Brunton
University of Washington