Deep learning for physics: teaching neural networks to do ICF

ORAL

Abstract

Scientists are embracing machine learning models for the analysis of physics data. These models are characterized by an extreme ability to adapt to detailed structures in data. However, there are relatively few ways to force deep learning models to respect known physical relationships. We describe here our efforts to develop deep neural networks for inertial confinement fusion (ICF) that incorporate physical constraints and rules by design. We first train these new models on simulation data to capture the theory implemented in advanced simulation codes. During training, we enforce loss functions and constraints that force predicted outputs to satisfy physical principles. Later, we improve, or elevate, the trained models by incorporating experimental data. Together, training and elevation both improve our predictive accuracy and provide a quantitative measure of the uncertainty in those predictions. We will present work using inertial confinement fusion research and experiments at the National Ignition Facility as a testbed for development. We will describe the advances in machine learning architectures and methods needed to handle the challenges of ICF science, including rich, multimodal data (images, scalars, time series) and strong nonlinearities.
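As an illustration of the physics-constrained training described above, the sketch below shows one way a composite loss could combine a standard data-fit term with penalties on physically inadmissible predictions. This is a hypothetical example in a PyTorch-style setup; the constraints shown (non-negativity of a yield-like output and a simple balance residual between two predicted channels) are illustrative stand-ins, not the authors' actual loss terms.

    # Hypothetical sketch of a physics-constrained loss, not the authors' implementation.
    import torch
    import torch.nn as nn

    class PhysicsConstrainedLoss(nn.Module):
        """Composite loss: supervised data fit plus physics penalties."""

        def __init__(self, physics_weight: float = 1.0):
            super().__init__()
            self.mse = nn.MSELoss()
            self.physics_weight = physics_weight  # strength of the physics penalty

        def forward(self, pred, target, energy_in, energy_out):
            # Standard supervised fit to simulation outputs.
            data_term = self.mse(pred, target)
            # Penalize negative predictions of quantities that must be
            # non-negative (e.g., a yield-like scalar).
            positivity = torch.relu(-pred).mean()
            # Penalize violation of a simple balance relation between two
            # predicted channels (a stand-in for a conservation law).
            balance = ((energy_in - energy_out) ** 2).mean()
            return data_term + self.physics_weight * (positivity + balance)

    # Example usage with random stand-in tensors.
    if __name__ == "__main__":
        loss_fn = PhysicsConstrainedLoss(physics_weight=0.1)
        pred, target = torch.randn(32, 4), torch.randn(32, 4)
        e_in, e_out = torch.randn(32), torch.randn(32)
        print(loss_fn(pred, target, e_in, e_out).item())

In practice, the penalty weight trades off fidelity to the simulation data against strictness of the enforced constraint, and a sufficiently large weight drives the model toward predictions that satisfy the constraint by construction.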

Presenters

  • Brian K. Spears

    Lawrence Livermore Natl Lab

Authors

  • Brian K. Spears

    Lawrence Livermore Natl Lab

  • Jayson Dean Lucius Peterson

    Lawrence Livermore Natl Lab

  • John E Field

    Lawrence Livermore Natl Lab

  • Kelli Humbird

    Lawrence Livermore Natl Lab

  • Timo Bremer

    Lawrence Livermore Natl Lab

  • Jayaraman Thiagarajan

    Lawrence Livermore Natl Lab

  • Rushil Anirudh

    Lawrence Livermore Natl Lab