Enhancing Iterative Solver Capabilities and GPU Utilization in M3D-C1
POSTER
Abstract
We present ongoing efforts to improve the iterative capabilities of the algebraic solver for the velocity advance of the M3D-C1 extended MHD code [1], with a focus on adapting to evolving HPC architectures, notably GPU accelerators. Currently, the M3D-C1 solver uses GMRES with block-Jacobi preconditioning, employing direct solvers for the degrees of freedom within planes of constant toroidal angle.
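As an illustration, the current solver configuration (GMRES with block-Jacobi preconditioning and direct in-plane subsolves) could be expressed through PETSc runtime options roughly as follows; this is a hedged sketch, not the actual M3D-C1 options set, and the choice of direct-solver package (here MUMPS) is an assumption:

```
# Sketch of a GMRES + block-Jacobi configuration in PETSc runtime options.
# One Jacobi block per plane of constant toroidal angle, each solved directly.
-ksp_type gmres
-pc_type bjacobi
-sub_ksp_type preonly                      # apply the block solver once per iteration
-sub_pc_type lu                            # direct LU factorization within each plane
-sub_pc_factor_mat_solver_type mumps       # assumed direct-solver package
```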
Our efforts encompass two concurrent strategies. First, through the Center for Edge of Tokamak Optimization SciDAC, we are exploring iterative methods that exploit massive concurrency. This includes preconditioners such as parallel incomplete LU factorization with threshold-based drop tolerances, intended to replace the in-plane direct solvers and thereby facilitate migration to GPU platforms. A key strategy under investigation is the FieldSplit preconditioner in PETSc [2], based on solves for the individual scalar components of the velocity field. Preconditioning options from the PETSc [2] and Ginkgo [3] libraries are being evaluated for these component solves.
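A FieldSplit configuration of this kind might be sketched with PETSc runtime options as below; the split count, additive combination, and the use of hypre's PILUT as the threshold-based incomplete factorization are illustrative assumptions, not the settings actually used:

```
# Sketch: FieldSplit over the scalar components of the velocity field,
# with a threshold-based ILU (hypre PILUT, assumed) per component solve.
-pc_type fieldsplit
-pc_fieldsplit_type additive               # combine component solves additively (assumed)
-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type hypre
-fieldsplit_0_pc_hypre_type pilut          # ILU with drop tolerance; GPU-friendlier than direct LU
# ... analogous -fieldsplit_1_*, -fieldsplit_2_* options for the other components
```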
Second, we are developing geometric multigrid methods in the toroidal direction, featuring semi-coarsening and plane smoothing, as an alternative to block-Jacobi preconditioning. This approach also requires in-plane solvers for both the smoother and the coarse-grid solve, and so may benefit from the outcomes of the first effort.
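The multigrid hierarchy could be driven through PETSc's PCMG interface roughly as sketched below; the level count and smoother choices are assumptions for illustration, and the semi-coarsening restricted to the toroidal direction would be supplied by application-defined interpolation operators rather than these options alone:

```
# Sketch: geometric multigrid in the toroidal direction via PCMG.
# Semi-coarsened grid transfer operators would be set in code (assumed).
-pc_type mg
-pc_mg_levels 3                            # number of toroidal levels (assumed)
-mg_levels_ksp_type richardson
-mg_levels_pc_type bjacobi                 # plane smoothing: in-plane solves per smoother step
-mg_coarse_ksp_type preonly
-mg_coarse_pc_type lu                      # coarse grid solved directly (assumed)
```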
[1] S.C. Jardin et al., Comput. Sci. Discov. 5, 014002 (2012)
[2] https://petsc.org/
[3] https://github.com/ginkgo-project/ginkgo
Presenters
-
Benjamin J Sturdevant
Princeton Plasma Physics Laboratory
Authors
-
Benjamin J Sturdevant
Princeton Plasma Physics Laboratory
-
Jin Chen
Princeton Plasma Physics Laboratory
-
Mark F Adams
Lawrence Berkeley National Laboratory
-
Chang Liu
Princeton Plasma Physics Laboratory
-
Fatima Ebrahimi
Princeton Plasma Physics Laboratory