Machine Learning-based Diagnostics Optimization
POSTER
Abstract
Optimizing diagnostic designs for HED experiments and radiography is a formidable challenge given the immense space of tunable parameters involved. Machine Learning (ML) algorithms are a powerful tool for exploring large parameter spaces and can yield diagnostic designs that subject-matter experts have not previously considered.
To demonstrate this capability, we propose to optimize a Filter Stack Spectrometer (FSS) using ML. Each stack consists of 10 to 20 filters of varying materials and thicknesses, so its large combinatorial parameter space is representative of the problem described above.
The optimization consists of three steps: generating synthetic data from experimental conditions for a given FSS design with a forward model; training a neural network (NN) architecture to reconstruct the experimental conditions from synthetic data; and performing an optimization search loop driven by the reconstruction errors of the NN architecture.
Here we present the results of training a NN based on the Transformer architecture [1] using synthetic data from an established forward model. The trained NN will later be incorporated into an optimization loop driven by ML algorithms such as a genetic algorithm or a generative model with deep reinforcement learning. These algorithms have proven effective in similar optimizations [2], in wavefront optimization for coherent control of plasma dynamics [3], and in novel molecule design [4].
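The proposed search loop can be sketched as a minimal genetic algorithm over filter-stack designs. The materials, thicknesses, and fitness function below are illustrative placeholders only: in the actual workflow, the fitness would be the reconstruction error obtained by running the forward model on a candidate design and passing the synthetic data through the trained NN.

```python
import random

# Hypothetical search space; real material/thickness options would come
# from the FSS design constraints, not these placeholder values.
MATERIALS = ["Al", "Ti", "Fe", "Cu", "Pb"]
THICKNESSES_UM = [10, 25, 50, 100, 250]
N_FILTERS = 15  # within the 10-20 filter range described above


def random_design():
    """A candidate FSS design: a list of (material, thickness) filters."""
    return [(random.choice(MATERIALS), random.choice(THICKNESSES_UM))
            for _ in range(N_FILTERS)]


def reconstruction_error(design):
    """Placeholder fitness. In practice: forward model -> synthetic data
    -> NN reconstruction -> error vs. true experimental conditions.
    Here, a deterministic toy surrogate mapped into [0, 1)."""
    return sum(hash(f) % 1000 for f in design) / (1000.0 * len(design))


def mutate(design, rate=0.2):
    """Randomly replace some filters to explore the design space."""
    return [(random.choice(MATERIALS), random.choice(THICKNESSES_UM))
            if random.random() < rate else f
            for f in design]


def optimize(pop_size=20, generations=50, seed=0):
    """Keep the lowest-error half of each generation, refill by mutation."""
    random.seed(seed)
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=reconstruction_error)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=reconstruction_error)


best_design = optimize()
```

This sketch only illustrates the loop structure; the generative-model and deep-reinforcement-learning variants mentioned above would replace the mutation/selection step with a learned policy proposing new designs.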
References
[1] A. Vaswani et al., Advances in Neural Information Processing Systems 30 (2017).
[2] H. Song et al., JINST 18, P03012 (2023).
[3] Z.-H. He et al., Nature Communications 6, 7156 (2015).
[4] M. Popova et al., Sci. Adv. 4, eaap7885 (2018).
Presenters
-
Mariana Alvarado Alvarez
Los Alamos National Laboratory
Authors
-
Mariana Alvarado Alvarez
Los Alamos National Laboratory
-
Bradley T Wolfe
Los Alamos National Laboratory
-
Tim Wong
Los Alamos National Laboratory
-
Steven H Batha
Los Alamos National Laboratory
-
David P Broughton
Los Alamos National Laboratory
-
Chengkun Huang
Los Alamos National Laboratory, Los Alamos, NM 87544, USA
-
Robert E Reinovsky
Los Alamos National Laboratory
-
Jeph Wang
Los Alamos National Laboratory