Real-Time Model-Based Reinforcement Learning for Active Flow Control on the NASA Hump Model
POSTER
Abstract
Active flow control (AFC) offers promising approaches for enhancing aerodynamic performance by manipulating flow structures. The NASA wall-mounted hump model, a benchmark for separated flow control, provides an ideal testbed for investigating AFC strategies. However, optimizing control parameters in complex flow regimes remains challenging due to the high-dimensional, nonlinear nature of fluid dynamics.
Reinforcement learning (RL) has emerged as a powerful tool for solving complex control problems, but its application to AFC has been limited by partial observability and prohibitively long training times in computational fluid dynamics simulations. Model-based reinforcement learning (MBRL) addresses this challenge by learning a dynamics model concurrently with policy optimization, potentially reducing the number of required environment interactions.
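As a rough illustration of this idea (not the specific algorithm used in this work), the sketch below alternates between collecting real transitions, fitting a dynamics model, and updating the policy on model-generated rollouts; the dynamics_model and policy objects and all hyperparameters are hypothetical placeholders.

```python
# Illustrative MBRL loop: learn a dynamics model from real interactions,
# then improve the policy on imagined rollouts from that model.
# All objects and hyperparameters here are hypothetical placeholders.
import numpy as np

def mbrl_loop(env, dynamics_model, policy, n_iterations=50,
              real_steps_per_iter=200, imagined_rollouts=1000, horizon=15):
    replay_buffer = []
    for _ in range(n_iterations):
        # 1. Collect a small batch of real environment interactions.
        obs, _ = env.reset()
        for _ in range(real_steps_per_iter):
            action = policy.act(obs)
            next_obs, reward, terminated, truncated, _ = env.step(action)
            replay_buffer.append((obs, action, reward, next_obs))
            obs = env.reset()[0] if (terminated or truncated) else next_obs

        # 2. Fit the dynamics model to all real transitions seen so far.
        dynamics_model.fit(replay_buffer)

        # 3. Optimize the policy on rollouts imagined by the learned model,
        #    which is far cheaper than querying the real flow.
        for _ in range(imagined_rollouts):
            state = replay_buffer[np.random.randint(len(replay_buffer))][0]
            trajectory = []
            for _ in range(horizon):
                action = policy.act(state)
                state, reward = dynamics_model.predict(state, action)
                trajectory.append((state, action, reward))
            policy.update(trajectory)
    return policy
```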
We present a novel interface that enables RL agents to directly actuate synthetic jets on a wind tunnel model of the NASA hump. Our setup integrates with the 20” x 28” Shear Wind Tunnel at NASA Langley Research Center, allowing for real-time control and data acquisition. The system comprises pressure sensors, electronically controlled valves, and an extensible software environment for training RL agents.
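For illustration only, a Gymnasium-style wrapper around such a setup might look like the following sketch; the sensor count, valve count, reward definition, and hardware I/O stubs are assumptions for exposition, not the actual experimental configuration.

```python
# Hypothetical Gymnasium-style environment for the wind tunnel interface.
# Sensor count, actuation range, reward, and hardware I/O are illustrative
# assumptions, not the actual experimental configuration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class HumpAFCEnv(gym.Env):
    """RL environment wrapping pressure sensors and synthetic-jet valves."""

    def __init__(self, n_sensors=16, n_valves=4):
        super().__init__()
        # Observations: normalized readings from the surface pressure sensors.
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(n_sensors,), dtype=np.float32)
        # Actions: duty-cycle commands for the electronically controlled valves.
        self.action_space = spaces.Box(0.0, 1.0, shape=(n_valves,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._set_valves(np.zeros(self.action_space.shape))  # close all jets
        return self._read_pressures(), {}

    def step(self, action):
        self._set_valves(np.clip(action, 0.0, 1.0))  # command the valves
        obs = self._read_pressures()                 # acquire a new sensor frame
        # Example reward: penalize the pressure signature of separation.
        reward = -float(np.mean(np.square(obs)))
        return obs, reward, False, False, {}

    # --- hardware I/O stubs (placeholders for the real DAQ/valve drivers) ---
    def _read_pressures(self):
        return np.zeros(self.observation_space.shape, dtype=np.float32)

    def _set_valves(self, duty_cycles):
        pass
```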
This framework facilitates the application of state-of-the-art MBRL algorithms to AFC, potentially accelerating the discovery of optimal control policies for flow separation mitigation. Our approach paves the way for data-efficient, adaptive flow control strategies that can be developed and validated in physical experiments, bridging the gap between simulation-based and experimental AFC research.
Presenters
-
Mason Lee
Brown
Authors
-
Mason Lee
Brown
-
Jenna Eppink
NASA
-
Louis Edelman
NASA
-
Chung-Sheng Yao
NASA