Reinforcement Learning for Flow-Informed Flight Control
POSTER
Abstract
While flying in real-world environments, unmanned aerial systems (UAS) often encounter significant fluid disturbances that challenge the capabilities of conventional sensing and control methods. Current disturbance rejection strategies do not consider fluid interactions, instead sensing and correcting only for the resulting inertial changes. On-board flow sensing allows UAS to characterize interactions with the fluid environment, potentially enabling improved control in turbulent conditions. Sufficient characterization of the state of the surrounding flow may allow for predictive control strategies through which UAS react to fluid disturbances before inertial effects can be sensed. In this presentation, we explore the use of reinforcement learning (RL) to identify and apply effective "fluid-aware" control policies in an experimental setting. A symmetric airfoil fitted with flow sensors serves as a model of a fixed-wing UAS, and a state-of-the-art fan array wind tunnel generates realistic flow conditions for training. By developing RL strategies for "fluid-aware" flight control on these simplified experimental models, we aim to enable a new generation of UAS capable of superior flight in adverse flow conditions.
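As a rough illustration of the kind of training loop such an approach involves, the sketch below applies a plain REINFORCE policy-gradient update to a toy gust-rejection surrogate. The sensor count, gust model, reward, and linear-Gaussian policy are all illustrative assumptions for this sketch; they are not the experimental setup or the RL algorithm used in this work.

```python
# Minimal REINFORCE sketch: a linear-Gaussian policy maps flow-sensor
# readings to a single control-surface command. The "environment" is a
# hypothetical toy gust model, not the fan-array-wind-tunnel experiment.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 8          # hypothetical number of on-board flow sensors
EPISODE_LEN = 50
LEARNING_RATE = 1e-3
ACTION_STD = 0.1       # fixed exploration noise of the Gaussian policy

def toy_episode(theta):
    """Roll out one episode in the toy gust model; return (grad, return)."""
    grad_sum = np.zeros_like(theta)
    total_reward = 0.0
    gust = 0.0
    for _ in range(EPISODE_LEN):
        gust = 0.9 * gust + 0.1 * rng.normal()           # slowly varying gust
        obs = gust * np.ones(N_SENSORS) + 0.05 * rng.normal(size=N_SENSORS)
        mean_action = obs @ theta                         # linear policy
        action = mean_action + ACTION_STD * rng.normal()  # Gaussian exploration
        # Reward: penalize residual disturbance left after the control action
        reward = -(gust + action) ** 2
        total_reward += reward
        # Log-likelihood gradient of a fixed-variance Gaussian policy
        grad_sum += (action - mean_action) / ACTION_STD**2 * obs
    return grad_sum, total_reward

theta = np.zeros(N_SENSORS)
baseline = 0.0
for episode in range(2000):
    grad, ep_return = toy_episode(theta)
    baseline = 0.95 * baseline + 0.05 * ep_return         # moving-average baseline
    theta += LEARNING_RATE * (ep_return - baseline) * grad
    if episode % 500 == 0:
        print(f"episode {episode:4d}  return {ep_return:8.2f}")
```

In this toy setting the learned policy converges toward counteracting the sensed gust; in practice the same structure (flow-sensor observations in, control commands out, disturbance-rejection reward) would be trained with a more capable RL algorithm on the experimental hardware.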
Authors
- Peter Renn, Caltech
- Morteza Gharib, Caltech