Decentralized reinforcement learning of emergent behavior in robotic matter
Invited
Abstract
Soft robots have the potential to be more robust, adaptable, and safer for human interaction than traditional rigid robots. State-of-the-art developments push these soft robotic systems towards applications such as rehabilitation and diagnostic devices, exoskeletons for gait assistance, and grippers that can handle delicate objects. However, despite these exciting developments, the inherently non-linear response of soft robots limits the number of actuators that can be accurately controlled simultaneously, especially in complex or unknown environments. To enable modularly scalable and autonomous soft robots, we have developed a new type of soft robot assembled from identical 1D building blocks with embedded pneumatic actuation, position sensing, and computation. In this robotic system, motility can emerge from local interactions rather than from a central brain. Here we show that we are able to implement decentralized learning in this system. Using a stochastic optimization approach, each building block continuously adjusts its actuation phase in order to find the fastest way to move in a predefined direction. We show that even for larger numbers of modules, this robotic system remains capable of learning. As a result, the system is robust to damage, as it adjusts its behavior accordingly.
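The abstract does not give implementation details, but the learning rule it describes, where each building block stochastically perturbs its own actuation phase and keeps changes that increase displacement in the target direction, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the surrogate `displacement` reward, the module count, the perturbation size, and the accept/reject rule are not taken from the authors' system, which evaluates motion on physical hardware using its embedded sensors.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MODULES = 6      # number of 1D building blocks (assumed)
SIGMA = 0.3        # std. dev. of random phase perturbations (assumed)
N_TRIALS = 2000    # number of learning trials (assumed)


def displacement(phases: np.ndarray) -> float:
    """Toy surrogate for displacement per actuation cycle.

    The real robot measures its own motion with embedded position sensors;
    here we simply reward phase lags between neighbouring modules that
    approximate a traveling wave, plus a little measurement noise.
    """
    target_lag = 2 * np.pi / len(phases)
    lags = np.diff(phases)
    return float(np.mean(np.cos(lags - target_lag)) + 0.05 * rng.normal())


# Each module stores and updates only its own phase (decentralized state).
phases = rng.uniform(0, 2 * np.pi, N_MODULES)
best = displacement(phases)

for trial in range(N_TRIALS):
    # Every module independently proposes a small random change to its phase.
    proposal = (phases + SIGMA * rng.normal(size=N_MODULES)) % (2 * np.pi)
    score = displacement(proposal)
    # Each module keeps its new phase only if the robot moved farther in the
    # predefined direction; otherwise it reverts (stochastic hill climbing).
    if score > best:
        phases, best = proposal, score

print(f"final displacement per cycle (arb. units): {best:.3f}")
```

In this sketch all modules happen to react to the same measured displacement, so their accept/reject decisions coincide; the point of the decentralized scheme is that no central controller assigns phases, and each block could equally well base its decision on its own local sensor reading.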
Presenters
- Johannes Overvelde (FOM Inst - Amsterdam, AMOLF)
Authors
- Giorgio Oliveri (AMOLF)
- Luuk Carolus Van Laake (AMOLF)
- Johannes Overvelde (FOM Inst - Amsterdam, AMOLF)