Model-based reinforcement learning for chaotic flow control
ORAL
Abstract
Deep reinforcement learning (DRL) has demonstrated significant potential for improving the efficiency of fluid-based systems. However, numerical solvers for high-dimensional, complex flow environments demand substantial computational resources, often to the point of intractability. This is especially problematic for DRL, which typically requires numerous interactions with the environment to converge. To improve sample efficiency, many model-based reinforcement learning (MBRL) methods have been proposed. These algorithms, however, are primarily developed and benchmarked on classical control problems and game-like environments, which are non-chaotic, and existing research applying MBRL to flow problems has focused mainly on the weakly chaotic regime. Consequently, the effectiveness of MBRL for highly chaotic systems remains unknown, even though most real-world flow control problems involve chaotic turbulent flows. In this study, we conduct a comprehensive investigation to determine the chaotic regimes in which MBRL is beneficial and to identify the key parameters required to achieve optimal controller performance for fluid flows. We highlight pitfalls in using MBRL to control chaotic systems, propose suitable adaptations to enhance its performance, and compare MBRL with state-of-the-art model-free algorithms.
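For readers unfamiliar with the MBRL idea the abstract builds on, the sketch below shows a minimal Dyna-style loop on a toy chaotic system (a controlled logistic map): log a few real transitions, fit a cheap surrogate dynamics model, and plan actions by rolling candidate sequences through the surrogate instead of the expensive solver. This is an illustrative sketch only, not the authors' implementation; the environment, model class, cost, and all parameter values are assumptions chosen for demonstration.

```python
# Minimal Dyna-style MBRL sketch on a toy chaotic system (illustrative only;
# not the authors' method). The agent never sees env_step's equations: it
# fits a surrogate from logged transitions and plans through the surrogate.
import numpy as np

rng = np.random.default_rng(0)

def env_step(x, a):
    """True (unknown to the agent) dynamics: chaotic logistic map + actuation."""
    return np.clip(3.9 * x * (1.0 - x) + a, 0.0, 1.0)

def cost(x, target=0.5):
    """Penalise distance from a target state (assumed objective)."""
    return (x - target) ** 2

def features(x, a):
    """Feature vector for a linear-in-parameters dynamics model."""
    return np.array([1.0, x, x * x, a])

# 1) Collect real transitions with random actions (the expensive step that
#    MBRL tries to minimise).
data = []
x = rng.uniform(0.0, 1.0)
for _ in range(200):
    a = rng.uniform(-0.1, 0.1)
    x_next = env_step(x, a)
    data.append((x, a, x_next))
    x = x_next

# 2) Fit a surrogate dynamics model to the logged transitions (least squares).
Phi = np.array([features(s, a) for s, a, _ in data])
y = np.array([s_next for _, _, s_next in data])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
model = lambda s, a: float(features(s, a) @ theta)

# 3) Plan by random shooting through the learned model: sample action
#    sequences, roll them out in imagination, keep the cheapest first action.
def plan(x0, horizon=5, n_candidates=256):
    best_a, best_c = 0.0, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-0.1, 0.1, size=horizon)
        s, c = x0, 0.0
        for a in seq:
            s = model(s, a)  # imagined step: no call to the real environment
            c += cost(s)
        if c < best_c:
            best_a, best_c = seq[0], c
    return best_a

# 4) Evaluate the planner on the real system.
x = rng.uniform(0.0, 1.0)
total = 0.0
for _ in range(100):
    a = plan(x)
    x = env_step(x, a)
    total += cost(x)
print(f"mean cost under MBRL control: {total / 100:.4f}")
```

In a chaotic regime, small surrogate-model errors compound rapidly along imagined rollouts, which is precisely the failure mode the study probes; keeping the planning horizon short is one common mitigation.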
Presenters
- Priyam Gupta, Imperial College London
Authors
- Priyam Gupta, Imperial College London
- Max Weissenbacher, Imperial College London
- Georgios Rigas, Imperial College London