Deep Learning Force Manifolds from the Physical Simulation of Robotic Paper Folding
ORAL
Abstract
Intelligent robotic manipulation of slender deformable objects has grown in popularity in recent years due to its numerous real-world applications. The vast majority of prior works use data-driven control policies that often rely purely on visual feedback. The lack of physical insight in such methods results in a critical lack of generality with respect to material, geometric, and/or environmental (e.g., friction) changes. To address this, we propose a novel control strategy for the difficult task of single-manipulator paper folding with complete generality. Physically accurate simulation and machine learning are combined to train fast and accurate deep neural networks, referred to as "force manifolds", capable of predicting the external forces on the paper given a grasp position. Scaling analysis yields a problem formulation independent of material and geometric properties. Path planning over the manifold then produces optimal robot folding trajectories. The high inference speed of our trained models allows for real-time visual feedback, resulting in closed-loop sensorimotor control. We demonstrate significant improvement over natural folding strategies for papers of various materials and shapes through extensive real-world experiments.
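The pipeline the abstract describes (a fast learned force predictor queried by a path planner) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a randomly initialized two-layer MLP stands in for the trained "force manifold", and a simple greedy descent over a discretized grasp-position grid stands in for the paper's trajectory optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the learned force manifold:
# a tiny 2-layer MLP mapping grasp position (x, y) -> predicted force.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def force_manifold(pos):
    """Fast inference: predicted external force magnitude at a grasp position."""
    h = np.tanh(pos @ W1 + b1)
    return float(h @ W2 + b2)

# Discretize the (dimensionless) workspace and evaluate the manifold on a grid.
xs = np.linspace(0.0, 1.0, 21)
grid = {(i, j): force_manifold(np.array([x, y]))
        for i, x in enumerate(xs) for j, y in enumerate(xs)}

def plan_path(start, goal):
    """Greedy planner: step toward the goal, preferring low predicted force."""
    path, cur = [start], start
    while cur != goal:
        i, j = cur
        gi, gj = goal
        # Candidate single-axis steps that move closer to the goal.
        steps = [(i + np.sign(gi - i), j), (i, j + np.sign(gj - j))]
        steps = [s for s in steps if s != cur]
        # Take the candidate with the lowest predicted external force.
        cur = min(steps, key=lambda s: grid[s])
        path.append(cur)
    return path

path = plan_path((0, 0), (20, 20))
```

In the actual system, the network would be trained on physically accurate simulation data and the planner would optimize full folding trajectories; the sketch only shows how cheap inference makes per-step manifold queries feasible inside a planning loop.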
Publication: Deep Learning of Force Manifolds from the Simulated Physics of Robotic Paper Folding (planned submission to IEEE Transactions on Robotics)
Presenters
-
Andrew Choi
University of California, Los Angeles
Authors
-
Andrew Choi
University of California, Los Angeles
-
Dezhong Tong
University of California, Los Angeles
-
Demetri Terzopoulos
University of California, Los Angeles
-
Jungseock Joo
University of California, Los Angeles
-
Mohammad Khalid Jawed
University of California, Los Angeles