Interpretable Latent Space Representation of Grad-Shafranov Equilibria
POSTER
Abstract
Machine learning (ML) systems can often extract underlying governing information from arbitrary data. Such a system may use an autoencoder (a type of artificial neural network, ANN) to generate a compact latent-space representation of the data, in tandem with techniques for explainable artificial intelligence / explainable machine learning (XAI/XML), such as computing global feature importances or constructing local linear trees.
In this work, as a proof of concept, the above framework is used to produce interpretable representations of a small set of parameters that govern a relatively large set of data for computed Grad-Shafranov equilibria. Potential extensions of this framework to augment conventional (i.e., non-ML) analysis of fusion devices are also discussed.
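The framework described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the poster's actual method: a synthetic stand-in for equilibrium data generated from two governing parameters, a linear autoencoder fit in closed form via SVD (a PCA-like simplification of a trained ANN autoencoder), and permutation-based global feature importance as the XAI step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Grad-Shafranov equilibrium data: 200 samples of
# 16-point profiles generated from 2 underlying governing parameters.
params = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 16))
X = params @ basis + 0.01 * rng.normal(size=(200, 16))

# Linear autoencoder with tied weights: the optimal encoder is spanned by the
# top right singular vectors of the centered data, so fit it via SVD instead
# of gradient-descent training (a simplification of a real ANN autoencoder).
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:2].T  # encoder: 16 profile points -> 2 latent variables

def encode(A):
    return (A - mean) @ W

def decode(Z):
    return Z @ W.T + mean

# Reconstruction error of the 2-D latent representation.
mse = float(np.mean((decode(encode(X)) - X) ** 2))

# Global feature importance via permutation: how much does reconstruction
# error grow when one input feature is shuffled across samples?
def importance(j):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    return float(np.mean((decode(encode(Xp)) - X) ** 2)) - mse

scores = np.array([importance(j) for j in range(X.shape[1])])
```

Because the synthetic profiles are generated from two parameters, the 2-D latent space reconstructs them almost exactly, and the permutation scores rank which profile points the compact representation depends on most.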
Presenters
-
Daniel Raburn
PPPL
Authors
-
Daniel Raburn
PPPL