Scalable Frameworks for Reinforcement Learning for Control of Self-Assembling Materials and for Chemistry Design

ORAL

Abstract

The ExaLearn Exascale Computing Project has developed scalable frameworks for reinforcement learning (RL) that create policies to control scientific processes such as block-copolymer self-assembly and chemical design. These policies could drastically reduce the time required to navigate large parameter spaces, optimizing experimental protocols. This accelerated search methodology may thus guide materials annealing experiments, the exploration of candidate structures for battery materials, or the evaluation of the configurational space of low-energy water clusters. The frameworks combine various RL algorithms, environments, and fast-running scientific simulations in the training process. RL training can be thought of as creating a sequence of moves in a game: at each move the player (agent) may decide to exploit previous knowledge (a policy) or explore new parameters (run a simulation). Scalability is achieved by running many RL training episodes on different nodes and aggregating the resulting models. Challenges include developing the simulations, fully utilizing the CPU and GPU resources on each node, and aggregating policies so as not to impede learning. We present full single-node results and preliminary multi-node results on computing-resource utilization.
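
The explore-versus-exploit loop and the train-then-aggregate pattern described above can be illustrated with a toy sketch. The Python below is not the ExaLearn framework itself: it is a minimal, self-contained example assuming a stub environment (ToySimulation), tabular epsilon-greedy Q-learning, and plain Q-table averaging (aggregate) as the aggregation rule, with sequential workers standing in for compute nodes. All names and dynamics are hypothetical.

```python
# Illustrative sketch only: several "workers" (standing in for nodes) each
# run independent training episodes against a stub simulation, and their
# Q-tables are averaged into one aggregated policy.
import random

N_STATES, N_ACTIONS = 10, 4

class ToySimulation:
    """Stand-in for a fast-running scientific simulation (the environment)."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Hypothetical dynamics: reward peaks when the chosen action matches
        # a state-dependent target parameter.
        reward = 1.0 if action == self.state % N_ACTIONS else -0.1
        self.state = (self.state + 1) % N_STATES
        done = self.state == 0
        return self.state, reward, done

def run_episodes(n_episodes, epsilon=0.2, alpha=0.1, gamma=0.9):
    """One worker's share of training; returns its locally learned Q-table."""
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    env = ToySimulation()
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            # Explore new parameters (run the simulation with a random move)
            # or exploit previous knowledge (the current greedy policy).
            if random.random() < epsilon:
                action = random.randrange(N_ACTIONS)
            else:
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = env.step(action)
            best_next = max(q[next_state])
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

def aggregate(tables):
    """Average per-worker Q-tables into a single model (simple aggregation)."""
    n = len(tables)
    return [[sum(t[s][a] for t in tables) / n for a in range(N_ACTIONS)]
            for s in range(N_STATES)]

if __name__ == "__main__":
    # Four "nodes", each training independently before aggregation.
    worker_tables = [run_episodes(500) for _ in range(4)]
    policy = aggregate(worker_tables)
    print("Greedy action per state:",
          [max(range(N_ACTIONS), key=lambda a: policy[s][a])
           for s in range(N_STATES)])
```

In the actual multi-node setting the workers would run concurrently, and the aggregation rule must be chosen so that averaging does not impede learning, which is one of the challenges the abstract names.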

Presenters

  • Paul Welch

    Theoretical Division, Los Alamos National Laboratory

Authors

  • Paul Welch

    Theoretical Division, Los Alamos National Laboratory

  • Christine Sweeney

    Los Alamos National Laboratory

  • Malachi Schram

    Pacific Northwest National Laboratory

  • Logan Ward

    Argonne National Laboratory