Using reinforcement learning to automate mesh management for HYDRA simulations
POSTER
Abstract
Multi-physics HYDRA simulations of inertial confinement fusion (ICF) experiments at the National Ignition Facility use mesh relaxation directives to manage the state of the arbitrary Lagrangian-Eulerian (ALE) mesh and prevent entanglement. Because it is difficult to anticipate when and why the mesh will tangle and crash a simulation, users have historically resorted to laborious manual intervention and to a conservative strategy of over-relaxation at the expense of fidelity. An automated solution that can adapt to and learn new situations would improve the robustness and fidelity of simulations and save significant user time. To this end we have developed an unsupervised reinforcement learning method for managing ALE, built from two deep neural networks. The first is a convolutional neural network (CNN) that examines patches of the mesh and is trained to predict how each patch would evolve without intervention, returning an image of the patch's future state. The second is a Soft Actor-Critic (SAC) deep reinforcement learning agent that learns to apply the mesh relaxation thresholds that receive the highest reward, where the reward is designed to improve mesh quality the most with the least intervention. As the simulation runs, this system identifies problem areas in the mesh and applies a relaxation policy to improve the mesh condition with minimal intervention.
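To make the first component concrete, below is a minimal sketch (in PyTorch) of a CNN that takes a mesh patch rendered as an image and predicts that patch some fixed number of cycles in the future. The patch encoding (two channels of node x/y coordinates on a 32x32 patch), the layer sizes, and the residual formulation are all illustrative assumptions, not the poster's actual architecture.

```python
import torch
import torch.nn as nn

class MeshEvolutionCNN(nn.Module):
    """Predicts the future state of an ALE mesh patch absent intervention."""
    def __init__(self, channels: int = 2, hidden: int = 32):
        super().__init__()
        # Small conv stack mapping a mesh patch to its predicted future state.
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # Predict node displacements and add them to the input, so the
        # network learns the *change* in the mesh rather than having to
        # reproduce the whole patch from scratch.
        return patch + self.net(patch)

# Training target: the same patch extracted from a later simulation cycle
# run without relaxation, so the CNN learns the "no intervention" future.
model = MeshEvolutionCNN()
patch_now = torch.randn(8, 2, 32, 32)     # batch of current mesh patches
patch_future = torch.randn(8, 2, 32, 32)  # same patches, N cycles later
loss = nn.functional.mse_loss(model(patch_now), patch_future)
loss.backward()
```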
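The second component's reward might look like the hedged sketch below: it credits improvement in a mesh-quality metric and penalizes the amount of relaxation applied, so the SAC policy learns to improve the mesh the most with the least intervention. The specific quality metric (worst zone Jacobian in the patch), the `relax_fraction` action summary, and the weight `lam` are assumptions for illustration only.

```python
import numpy as np

def mesh_quality(jacobians: np.ndarray) -> float:
    # Worst (minimum) zone Jacobian in the patch; values near zero or
    # negative indicate a zone close to tangling or inversion.
    return float(jacobians.min())

def reward(jac_before: np.ndarray, jac_after: np.ndarray,
           relax_fraction: float, lam: float = 0.1) -> float:
    # Improvement in worst-zone quality, minus a cost proportional to
    # how much of the patch the chosen relaxation threshold touched.
    return (mesh_quality(jac_after) - mesh_quality(jac_before)
            - lam * relax_fraction)
```

Under this formulation the agent's action is the relaxation threshold itself; a threshold that relaxes everything earns the intervention penalty, while one that relaxes nothing forgoes any quality improvement, pushing the policy toward targeted intervention.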
Presenters
-
Jay D Salmonson
Lawrence Livermore Natl Lab
Authors
-
Jay D Salmonson
Lawrence Livermore Natl Lab
-
Christopher K Yang
University of California, Berkeley
-
Chris V Young
Lawrence Livermore Natl Lab
-
Joseph M Koning
Lawrence Livermore Natl Lab