Parameter inference and model calibration with deep jointly-informed neural networks

ORAL

Abstract

Deep jointly-informed neural networks (DJINN) is a novel, automated procedure for determining an appropriate deep feed-forward neural network architecture and weight initialization based on decision trees. By choosing the architecture and initialization automatically and efficiently, DJINN removes many of the challenges of training deep neural networks on arbitrary datasets and yields accurate surrogate models. Furthermore, DJINN is readily cast into an approximate Bayesian framework, resulting in accurate and scalable models that provide quantified uncertainties on their predictions.
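To illustrate the tree-to-network idea, the sketch below fits a scikit-learn decision tree and reads candidate hidden-layer widths off its level structure. This is only a minimal illustration under assumed conventions, not the released DJINN implementation; the function name is hypothetical, and the full algorithm of Ref. 1 also maps the tree's split structure onto the initial weights rather than just the layer widths.

```python
# Minimal sketch of deriving a network architecture from a decision tree
# (illustrative only; see Ref. 1 for the full DJINN mapping).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_informed_architecture(X, y, max_depth=4):
    """Fit a decision tree and propose hidden-layer widths from it.

    Hidden layer l gets one neuron per tree node at depth l, so the
    network widens in step with the tree's branching structure.
    """
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y)
    depths = np.zeros(tree.tree_.node_count, dtype=int)
    for node in range(tree.tree_.node_count):  # children follow parents in sklearn's arrays
        for child in (tree.tree_.children_left[node], tree.tree_.children_right[node]):
            if child != -1:
                depths[child] = depths[node] + 1
    # one hidden layer per tree level below the root, width = nodes at that level
    widths = [int(np.sum(depths == d)) for d in range(1, int(depths.max()) + 1)]
    return [X.shape[1]] + widths + [1]  # input -> hidden layers -> scalar output

# Toy example: architecture suggested by a tree fit to synthetic data
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]
print(tree_informed_architecture(X, y))  # e.g. [3, 2, 4, 8, 16, 1] for a fully-grown depth-4 tree
```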

We show how DJINN models trained on ensembles of expensive computer simulations can be calibrated with experimental data to infer likely values of unknown physical quantities, such as flux limiters and laser power multipliers.
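The calibration step can be understood with a toy sketch: given a surrogate that maps an uncertain input to a predicted observable, a likelihood against the measured value yields a posterior over the unknown quantity. Everything below (the quadratic surrogate, the measurement, the flat prior range) is a hypothetical stand-in for the Bayesian DJINN surrogates and experimental data used in this work.

```python
# Illustrative inference of an unknown input (e.g., a laser power multiplier)
# from a single measured observable, using a placeholder surrogate.
import numpy as np

def surrogate(power_multiplier):
    """Stand-in for a trained surrogate mapping the uncertain input
    to a predicted observable."""
    return 3.0 * power_multiplier**2  # placeholder response

y_exp, sigma_exp = 2.8, 0.2  # hypothetical measurement and its uncertainty

# Grid-based posterior over the unknown multiplier with a flat prior
grid = np.linspace(0.5, 1.5, 1001)
log_like = -0.5 * ((surrogate(grid) - y_exp) / sigma_exp) ** 2
posterior = np.exp(log_like - log_like.max())
posterior /= np.trapz(posterior, grid)

map_estimate = grid[np.argmax(posterior)]
print(f"most likely power multiplier ~ {map_estimate:.3f}")
```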

1. K. D. Humbird et al., arXiv:1707.00784 (2017).

2. S. F. Khan et al., Physics of Plasmas 23, 042708 (2016).

Presenters

  • Kelli D Humbird

    Lawrence Livermore National Laboratory

Authors

  • Kelli D Humbird

    Lawrence Livermore National Laboratory

  • Jayson Dean Lucius Peterson

Lawrence Livermore National Laboratory

  • Ryan McClarren

    University of Notre Dame

  • Jay David Salmonson

Lawrence Livermore National Laboratory

  • Joseph M Koning

Lawrence Livermore National Laboratory