Computational cost-accuracy comparison for machine-learned interatomic models across hardware
ORAL
Abstract
Predicting atomic-level behavior and mechanisms in materials increasingly relies on complex, machine-learned (ML), data-driven interatomic models. These models have enabled, and will continue to enable, scientific discovery with more accurate and flexible atomic descriptions than their traditional, empirical counterparts; however, they are also generally much more computationally expensive. In light of the continued transition toward GPUs (and hybrid CPU/GPU systems) in scientific computing, and particularly in exascale computing, we demonstrate performance-portable ML interatomic models, including Behler-style neural network potentials (NNP) and spectral neighbor analysis potentials (SNAP), implemented with the Kokkos programming model and the Co-design Center for Particle Applications (CoPA) Cabana particle library. We discuss strategies for improving performance across architectures and hardware vendors. In addition, we discuss plans for additional performance-portable ML interatomic models and potential pathways for others, including interfacing with existing codes, emphasizing kernel-based code, and increasing exposed parallelism at multiple levels.
Presenters
-
Sam Reeve
Oak Ridge National Lab
Authors
-
Sam Reeve
Oak Ridge National Lab
-
Kashyap Ganesan
UC Davis
-
Saaketh Desai
Purdue University
-
James Belak
Lawrence Livermore National Lab