Optimizing Metachronal Paddling with Reinforcement Learning
POSTER
Abstract
Metachronal paddling is the rowing-like motion many aquatic creatures perform with their limbs to propel themselves forward in a fluid. Studies have shown that metachronal paddling is a consistently optimal swim stroke across a range of Reynolds numbers, yet the design mechanics of the paddling organisms differ; for example, the number of limbs, the spacing of the limbs, and the flexibility of limb joints may vary by species. Examining these trait variations is essential for designing the optimal paddler across different ranges of Reynolds numbers; however, paddling simulations become computationally expensive as the degrees of freedom increase. To mitigate this challenge, we leverage a reinforcement learning (RL) approach to test different paddler designs in fast simulations in which the paddler learns the optimal swim stroke on its own. To this end, we frame the paddling problem as a Markov decision process whose state and action spaces represent a discretized version of a full paddle stroke. The reward for each state-action pair is defined as the net displacement of the paddler, incentivizing forward motion. The reward values are computed using hydrodynamics simulations, and the optimal strokes found through RL are then compared to traditional metachronal paddling strokes.
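The MDP formulation above can be sketched with a minimal tabular Q-learning loop. This is an illustrative toy, not the authors' implementation: the phase and action counts are assumed values, and `simulated_displacement` is a hypothetical stand-in for the hydrodynamics simulation that would supply the true net-displacement reward.

```python
import random

# Assumed discretization (illustrative values, not from the poster):
# each state is a stroke-phase index; each action selects one of a few
# candidate limb configurations for that phase.
N_PHASES = 8
N_ACTIONS = 4

def simulated_displacement(state, action):
    """Hypothetical stand-in for the hydrodynamics simulation: returns
    the net displacement from taking `action` at stroke phase `state`.
    A fixed pseudo-random table is used purely for illustration."""
    rng = random.Random(state * N_ACTIONS + action)
    return rng.uniform(-1.0, 1.0)

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    # Tabular Q-values over (phase, limb-configuration) pairs.
    Q = [[0.0] * N_ACTIONS for _ in range(N_PHASES)]
    for _ in range(episodes):
        state = 0
        for _ in range(N_PHASES):  # one full stroke cycle per episode
            # Epsilon-greedy exploration over candidate configurations.
            if random.random() < eps:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
            reward = simulated_displacement(state, action)
            nxt = (state + 1) % N_PHASES  # stroke phases advance cyclically
            # Standard Q-learning update toward the bootstrapped target.
            Q[state][action] += alpha * (
                reward + gamma * max(Q[nxt]) - Q[state][action]
            )
            state = nxt
    # The greedy policy is the learned stroke: the best limb
    # configuration at each discretized phase.
    return [max(range(N_ACTIONS), key=lambda a: Q[s][a])
            for s in range(N_PHASES)]

stroke = q_learning()
print(stroke)  # one learned limb configuration per stroke phase
```

In the actual study, each reward query would invoke a hydrodynamics solver rather than a lookup, which is precisely why a sample-efficient RL formulation matters.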
Presenters
-
Alana A Bailey
University of California, Davis
Authors
-
Alana A Bailey
University of California, Davis
-
Robert D Guy
University of California, Davis