Neural network-accelerated uncertainty quantification for fusion power plant design with FUSE
POSTER
Abstract
The FUSE integrated modeling framework combines physics, engineering, and economic models to enable rapid iteration over prototype fusion power plant designs via multi-objective optimization. In this work, we construct neural network surrogates of FUSE optimization studies to enable sensitivity analyses and uncertainty quantification over the space of 0D input parameters.
In multi-objective optimization, a genetic algorithm tests the fitness of a design resulting from a selected combination of input parameters (R0, B0, Ip, delta, Zeff, etc.) against a set of user-defined objectives (minimize capital cost, maximize q95, maximize flattop, etc.), and tunes the input parameters over successive generations to better meet those objectives. The resulting designs are then filtered according to user-selected constraints (required net electric power, tritium breeding ratio, etc.), leaving a set of thousands of discrete optimized design points that satisfy the constraints. A classifier neural network is trained on these data to determine whether a given set of input parameters will produce a design that meets the optimization's constraints. A multi-layer perceptron then maps each set of input parameters classified as feasible to the resulting values of the optimization's objective functions. This two-network model lets a user interpolate between the discrete design points produced by the initial optimization study and probe an individual design's sensitivity to perturbations of the inputs. While the initial optimization run can take hours or days to complete, the trained neural networks make predictions in fractions of a second, allowing a user to rapidly evaluate the gain or loss in objective fitness induced by perturbations to one or more 0D design parameters. In this work, the model's capabilities will be demonstrated via comparative sensitivity analysis for designs employing low-temperature versus high-temperature superconductors.
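The two-network surrogate described above can be sketched in a few lines. This is a minimal illustration, not the FUSE implementation: the "constraint" (R0·B0 above a threshold), the capital-cost proxy, and the network sizes are all invented stand-ins for the real physics and engineering models, chosen only to show the classifier-then-regressor structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 0D inputs: major radius R0 [m] and on-axis field B0 [T].
X = rng.uniform([4.0, 4.0], [8.0, 12.0], size=(2000, 2))

# Toy constraint: "feasible" if R0*B0 exceeds a threshold (a stand-in for
# e.g. a required net electric power); NOT a real FUSE constraint.
feasible = (X[:, 0] * X[:, 1] > 40.0).astype(float)

# Toy objective: a smooth capital-cost proxy, again purely illustrative.
cost = 0.5 * X[:, 0] ** 2 + 0.1 * X[:, 1] ** 2

def train_mlp(X, y, hidden=16, lr=0.05, epochs=3000, classify=False):
    """One-hidden-layer perceptron trained by full-batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        out = H @ W2 + b2
        if classify:
            out = 1.0 / (1.0 + np.exp(-out))      # sigmoid probability
        g = (out - y) / n     # gradient of MSE (regression) or BCE (classify)
        gW2 = H.T @ g; gb2 = g.sum(0)
        gH = (g @ W2.T) * (1.0 - H ** 2)          # backprop through tanh
        gW1 = X.T @ gH; gb1 = gH.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    def predict(Xq):
        out = np.tanh(Xq @ W1 + b1) @ W2 + b2
        return (1.0 / (1.0 + np.exp(-out)) if classify else out).ravel()
    return predict

# Standardize inputs; normalize the regression target for stable training.
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd
mask = feasible == 1.0
cm, cs = cost[mask].mean(), cost[mask].std()

clf = train_mlp(Xs, feasible, classify=True)          # feasibility classifier
reg = train_mlp(Xs[mask], (cost[mask] - cm) / cs)     # objective regressor

def surrogate(x):
    """Predict the objective for a 0D design point, or None if infeasible."""
    xs = ((np.asarray(x, float) - mu) / sd).reshape(1, -1)
    if clf(xs)[0] < 0.5:
        return None
    return float(reg(xs)[0] * cs + cm)
```

Once trained, `surrogate` evaluates a perturbed design in microseconds, which is what makes the sensitivity scans described above cheap compared with rerunning the genetic optimization.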
Presenters
-
Adriana G Ghiozzi
Aurora Fusion, General Atomics - ORAU
Authors
-
Adriana G Ghiozzi
Aurora Fusion, General Atomics - ORAU
-
Orso-Maria OM Meneghini
General Atomics
-
Tim Slendebroek
University of California, San Diego, General Atomics
-
Galina Avdeeva
General Atomics
-
Tyler B Cote
General Atomics
-
Giacomo Dose
General Atomics
-
Brian A Grierson
General Atomics
-
Jerome Guterl
General Atomics
-
Jackson Harvey
General Atomics
-
Brendan C Lyons
General Atomics
-
Joseph T McClenaghan
General Atomics
-
Tom F Neiser
General Atomics
-
Nan Shi
General Atomics
-
David B Weisberg
General Atomics
-
Min-Gu Yoo
General Atomics