Driving stochastic many-particle systems using deep reinforcement learning
ORAL
Abstract
Ensembles of artificially intelligent agents have in recent years been successfully employed on tasks as diverse as autonomous navigation and drug delivery. Experiments have shown that such agents may develop collective behavior such as the emergence of hierarchies. We argue that such ensembles of intelligent agents constitute a novel and interesting form of active matter that can be understood using the tools of statistical physics. Here, we study stochastic many-particle systems where the transition rates of each particle are determined by a deep neural network specific to that particle. The networks are in turn trained using reinforcement learning on each particle's past trajectory. Using a one-dimensional lattice gas as an example, we demonstrate how the interplay between neural network remodelling and collective, mesoscopic processes leads to the emergence of effective interactions between particles. These effective interactions lead to symmetry-breaking above a threshold density and to rich spatio-temporal structures. Our work shows that ensembles of artificially intelligent agents exhibit intriguing collective behavior and provide a testing ground for new non-equilibrium physics.
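The setup described above can be illustrated with a minimal sketch: a one-dimensional lattice gas with hard-core exclusion, in which each particle carries its own small policy network that maps its local occupancy to hop probabilities and is updated by REINFORCE on the particle's own trajectory. All specifics here are assumptions for illustration only: the abstract does not give the network architecture, observation, or reward, so a single linear layer stands in for the deep network, and the reward (+1 per successful hop) is a placeholder.

```python
import math
import random

L_SITES = 20           # lattice length with periodic boundaries (assumed)
N_PART = 6             # number of particles (assumed)
ALPHA = 0.1            # REINFORCE learning rate (assumed)
ACTIONS = (-1, 0, +1)  # hop left, stay, hop right

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class Agent:
    """One particle with its own tiny policy.

    A single linear layer maps the local observation
    (left-neighbour occupancy, right-neighbour occupancy, bias)
    to logits over the three actions -- a stand-in for the
    per-particle deep network described in the abstract."""

    def __init__(self):
        self.w = [[random.gauss(0, 0.1) for _ in range(3)] for _ in ACTIONS]

    def act(self, obs):
        logits = [sum(wi * oi for wi, oi in zip(row, obs)) for row in self.w]
        p = softmax(logits)
        a = random.choices(range(len(ACTIONS)), weights=p)[0]
        return a, p

    def update(self, obs, a, p, reward):
        # REINFORCE for a linear-softmax policy:
        # grad log pi(a'|obs) = (1[a'=a] - p[a']) * obs
        for ai in range(len(ACTIONS)):
            g = (1.0 if ai == a else 0.0) - p[ai]
            for j in range(3):
                self.w[ai][j] += ALPHA * reward * g * obs[j]

def step(positions, agents):
    """One sweep: each particle observes, acts, and learns."""
    occ = set(positions)
    total_reward = 0.0
    for i in random.sample(range(len(positions)), len(positions)):
        x = positions[i]
        left = 1.0 if (x - 1) % L_SITES in occ else 0.0
        right = 1.0 if (x + 1) % L_SITES in occ else 0.0
        obs = (left, right, 1.0)
        a, p = agents[i].act(obs)
        target = (x + ACTIONS[a]) % L_SITES
        # hard-core exclusion: a hop into an occupied site fails
        moved = ACTIONS[a] != 0 and target not in occ
        if moved:
            occ.remove(x)
            occ.add(target)
            positions[i] = target
        # assumed reward: +1 for a successful hop, 0 otherwise
        r = 1.0 if moved else 0.0
        agents[i].update(obs, a, p, r)
        total_reward += r
    return total_reward

random.seed(0)
agents = [Agent() for _ in range(N_PART)]
positions = random.sample(range(L_SITES), N_PART)
rewards = [step(positions, agents) for _ in range(200)]
```

Because each agent's environment consists of the other learning agents, the local observations are non-stationary; it is exactly this coupling between individual learning and the collective state that, in the actual work, produces effective interactions and symmetry breaking above a threshold density.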
Presenters
-
Adolfo Alsina
Instituto Gulbenkian de Ciencia
Authors
-
Adolfo Alsina
Instituto Gulbenkian de Ciencia
-
Onurcan Bektas
Ludwig-Maximilians-Universität München
-
Steffen Rulands
Ludwig-Maximilians-Universität München