Fine-tuning Large Language Models for Inertial Confinement Fusion – a collaboration with NVIDIA
ORAL
Abstract
Inertial Confinement Fusion (ICF) has applications to stockpile stewardship and fusion energy. Predictive capability, however, is limited because ICF is a highly coupled, multi-physics, energy-limited system. Progress in this field has largely come from a combination of simulations that guide experimental design and systematic semi-empirical tuning of experiments. More recently, machine learning techniques have contributed to the interpretation and design of experiments. Domain-specific Large Language Models offer the promise of contextually retrieving information from the literature, combining it with data, interacting with subject matter experts using natural language, and potentially proposing hypotheses to explain measurements. In this talk, the process of developing the pre-trained and fine-tuned NVIDIA AI-assisted agent for ICF is described. The goal is to develop a trusted aide that can interact collaboratively with ICF subject matter experts. The challenges associated with this project are discussed, and use cases that could advance the field, drawing on data from the OMEGA laser at the University of Rochester and the National Ignition Facility, are described.
LA-UR-25-26028
Presenters
-
Radha Bahukutumbi
Los Alamos National Laboratory, University of Rochester
Authors
-
Radha Bahukutumbi
Los Alamos National Laboratory, University of Rochester
-
David D Meyerhofer
Los Alamos National Laboratory (LANL)
-
Nathan Debardeleben
Los Alamos National Laboratory
-
Michael Lang
Los Alamos National Laboratory
-
Aastha Jhunjhunwala
NVIDIA Corporation
-
Avinash Vem
NVIDIA Corporation
-
Douglas Luu
NVIDIA Corporation
-
Prakash Gurumurthy
NVIDIA Corporation
-
Scott Halverson
NVIDIA Corporation
-
Geetika Gupta
NVIDIA Corporation
-
Kelli D Humbird
Lawrence Livermore National Laboratory
-
Brian K. Spears
Lawrence Livermore National Laboratory