Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope
Oral Presentation
Abstract
Legged robot locomotion on sand slopes is challenging, as the complex dynamics of granular media can cause locomotion failures such as slipping, sinking, flipping over, or getting completely stuck. A promising strategy, inspired by biological locomotors, is to strategically interact with rocks, debris, and other obstacles to facilitate movement. To provide legged robots with this ability, we present a novel approach that leverages avalanche dynamics to indirectly manipulate objects on a granular slope. We use a Vision Transformer (ViT) to process image representations of granular dynamics and robot excavation actions. The ViT predicts object movement, which we use to determine which leg excavation action to execute. We collect training data from 100 experimental trials and, at test time, deploy our trained model to plan a sequence of robot leg excavation locations. Testing results suggest that our model can accurately predict object movements and achieve a success rate above 80% in a variety of manipulation tasks with up to four obstacles, and can also generalize to objects with different physical properties. To our knowledge, this is the first paper to leverage granular media avalanche dynamics to indirectly manipulate objects on granular slopes.
Publication: Hu, Haodi, Feifei Qian, and Daniel Seita. "Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope." 8th Annual Conference on Robot Learning (CoRL), 2024.
Presenters
- Haodi Hu, University of Southern California
Authors
- Haodi Hu, University of Southern California
- Feifei Qian, University of Southern California
- Daniel Seita, University of Southern California