Synthetic Analysis of X-ray Images Using a Neural Network
POSTER
Abstract
X-ray imaging is widely used in ICF experiments. It is necessary to extract information such as convergence, density profiles, and turbulent mixing for modeling and predictive code development. X-ray image analysis is a complex task because an X-ray image is a convolution of the X-ray source, the dynamic scene evolution, and the detector response. Although static images and target preparation can help with information extraction, powerful data analysis techniques provide new options for X-ray imaging in ICF and elsewhere. Here we describe a machine-learning analysis technique that uses a convolutional neural network with a sparsity constraint. A training data set is used to generate a dictionary for interpreting new images and extracting features (edge detection); the new images are excluded from the training set. To compensate for the relatively small number of experimental images available for training, we introduce synthetic data from computer simulations and analytic models into the neural network training. The results are compared with other techniques based on polynomial fitting and extrapolation.
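As a concrete illustration of the approach outlined above, the following is a minimal, hypothetical sketch (not the authors' code) of a convolutional network with a sparsity constraint, trained on synthetic images and then applied to edge detection on a held-out image. It assumes a PyTorch environment; the layer sizes, the synthetic shell-image generator, and the sparsity weight are illustrative assumptions, not values from the work described.

```python
# Sketch: sparse convolutional coder trained on synthetic "shell" images,
# then used to extract an edge map from a held-out image.
import torch
import torch.nn as nn
import numpy as np

def synthetic_shell(size=64, r=0.3, width=0.05, noise=0.05):
    """Noisy limb-brightened shell: a crude stand-in for a simulated
    implosion self-emission image (illustrative only)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rad = np.sqrt(x**2 + y**2)
    img = np.exp(-((rad - r) / width) ** 2)        # bright shell at radius r
    img += noise * np.random.randn(size, size)     # detector-like noise
    return torch.tensor(img, dtype=torch.float32)[None, None]  # (1,1,H,W)

class SparseConvCoder(nn.Module):
    """Convolutional encoder/decoder: the encoder filters act as a learned
    dictionary, and an L1 penalty keeps their activations sparse."""
    def __init__(self, n_filters=16):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=5, padding=2), nn.ReLU())
        self.decode = nn.Conv2d(n_filters, 1, kernel_size=5, padding=2)

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z), z

def train(model, n_steps=200, sparsity_weight=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(n_steps):
        # Synthetic training batch with randomized shell radii.
        batch = torch.cat([synthetic_shell(r=np.random.uniform(0.2, 0.5))
                           for _ in range(8)])
        recon, z = model(batch)
        # Reconstruction error plus L1 sparsity penalty on the feature maps.
        loss = ((recon - batch) ** 2).mean() + sparsity_weight * z.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def edge_map(model, img):
    """Edge detection from the gradient magnitude of the sparse
    reconstruction, rather than from the raw noisy pixels."""
    with torch.no_grad():
        recon, _ = model(img)
    gy, gx = torch.gradient(recon[0, 0])
    return torch.sqrt(gx**2 + gy**2)

if __name__ == "__main__":
    model = train(SparseConvCoder())
    test_img = synthetic_shell(r=0.35)   # held out of the training loop
    edges = edge_map(model, test_img)
    print("edge map shape:", tuple(edges.shape))
```

In this sketch the encoder filters play the role of the dictionary generated from the (synthetic) training set, the L1 term enforces the sparsity constraint, and the held-out test image mirrors the abstract's statement that new images are excluded from training.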
Presenters
-
Bradley T Wolfe
Los Alamos National Laboratory
Authors
-
Bradley T Wolfe
Los Alamos National Laboratory
-
John L Kline
Los Alamos National Laboratory
-
Zhehui (Jeph) Wang
Los Alamos National Laboratory