Learning to locomote in the presence of symmetry
ORAL
Abstract
Deep reinforcement learning algorithms provide a paradigm whereby a biomorphic robot can refine a strategy for efficient locomotion through judicious trial and error, exploiting a biologically inspired architecture for the storage of experiential knowledge. An important feature of this paradigm is its applicability to systems for which accurate mathematical models are unavailable, so that behavioral policies must be constructed directly from sensor feedback. Even when a mathematical model of a system's dynamics is unavailable, fundamental considerations often ensure that symmetries underlie those dynamics. Practical reinforcement learning in a physical setting requires a parsimonious approach to data collection and representation. This talk will discuss the use of symmetry to improve the economy with which a physical robot can learn to locomote efficiently.
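The abstract does not describe a specific implementation, but one common way to exploit a known symmetry in model-free reinforcement learning is to augment the collected experience with symmetry-transformed copies of each observed transition, so that every physical trial yields more than one usable data point. The sketch below is a minimal illustration of that idea, assuming a hypothetical sign-flip `mirror` map as a stand-in for the robot's actual symmetry group; it is not drawn from the talk itself.

```python
import numpy as np

def mirror(state, action):
    """Reflect a transition across an assumed symmetry plane of the robot.

    A simple sign flip is used here as a placeholder for the true group
    action on states and actions, which depends on the robot's morphology.
    """
    return -np.asarray(state), -np.asarray(action)

replay_buffer = []

def store_transition(state, action, reward, next_state):
    """Store each observed transition together with its symmetric image,
    doubling the effective data gathered per physical trial."""
    replay_buffer.append((state, action, reward, next_state))
    s_m, a_m = mirror(state, action)
    s_next_m, _ = mirror(next_state, action)
    replay_buffer.append((s_m, a_m, reward, s_next_m))
```

Because rewards for efficient locomotion are typically invariant under such a symmetry, the mirrored transition is as informative as the original one, which is the sense in which symmetry improves the economy of data collection.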
Presenters
- Scott Kelly, Univ of North Carolina - Charlotte
Authors
- Scott Kelly, Univ of North Carolina - Charlotte