Using a large language model for training microrobots to swim
ORAL
Abstract
Machine learning and artificial intelligence have recently become dominant approaches to the design and optimization of robotic systems across scales. Recent studies highlight the application of large language models (LLMs) to industrial control (Song et al.) and to directing legged walking robots (Wang et al.). In this work, we use an LLM, Generative Pre-trained Transformer 4 (GPT-4), to train two prototypical microrobots to swim in viscous fluids, adopting a few-shot learning strategy built on a concise prompt of only five sentences. The same concise prompt successfully guides two distinct articulated microrobots, the three-link swimmer and the three-sphere swimmer, in mastering their signature gaits. These gaits, originally conceived by physicists, are adapted by the LLM so that the microrobots circumvent the physical constraints inherent to micro-locomotion, most notably the scallop theorem. We also examine the nuances of prompt engineering, focusing on minimizing the financial cost of using GPT-4.
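For readers unfamiliar with the setup, the sketch below shows one plausible way such a closed-loop prompting scheme could be wired up in Python with the OpenAI chat API. It is an illustrative guess, not the authors' actual method: the prompt wording, the discrete joint-angle action set, and the swimmer_displacement placeholder are hypothetical stand-ins for the hydrodynamic simulation and prompt used in the paper.

"""Minimal sketch (assumptions noted above): GPT-4 proposes joint-angle
actions for a three-link swimmer and receives displacement feedback."""

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical short prompt, kept deliberately brief to mimic the paper's
# five-sentence prompt and to limit token costs.
SYSTEM_PROMPT = (
    "You control a three-link swimmer in a viscous fluid. "
    "Each step, choose one joint-angle pair from the allowed list. "
    "After each choice you receive the net displacement it produced. "
    "Your goal is to maximize forward displacement over many steps. "
    "Reply with the chosen pair only, e.g. '(30, -30)'."
)

ALLOWED_ACTIONS = ["(30, 30)", "(30, -30)", "(-30, 30)", "(-30, -30)"]


def swimmer_displacement(action: str) -> float:
    """Toy placeholder for a low-Reynolds-number hydrodynamic model; in the
    actual study the displacement would come from a simulation."""
    return {"(30, 30)": 0.0, "(30, -30)": 0.01,
            "(-30, 30)": 0.01, "(-30, -30)": 0.0}[action]


def run_episode(num_steps: int = 10) -> float:
    """Query GPT-4 for an action, apply it, and feed the result back."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    total = 0.0
    for _ in range(num_steps):
        messages.append({
            "role": "user",
            "content": f"Allowed actions: {ALLOWED_ACTIONS}. "
                       f"Total displacement so far: {total:.3f}. Next action?",
        })
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content.strip()
        messages.append({"role": "assistant", "content": reply})

        # Fall back to a default action if the reply is not in the list.
        action = reply if reply in ALLOWED_ACTIONS else ALLOWED_ACTIONS[0]
        total += swimmer_displacement(action)
    return total


if __name__ == "__main__":
    print(f"Net displacement after one episode: {run_episode():.3f}")

Feeding the measured displacement back into the conversation after every action is what would let the model refine its stroke over successive steps, in the spirit of the feedback-driven few-shot setup described above; keeping the prompt and the action vocabulary short is also the main lever for limiting API costs.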
-
Publication: Z. Xu and L. Zhu, "Training microrobots to swim by a large language model," arXiv preprint arXiv:2402.00044, 2024.
Presenters
-
ZHUOQUN XU
National University of Singapore
Authors
-
ZHUOQUN XU
National University of Singapore
-
Lailai Zhu
National University of Singapore