Optimizing the Exa.TrkX Inference Pipeline for Manycore CPUs

POSTER

Abstract

The reconstruction of charged particle trajectories is an essential component of High-Energy Physics experiments. Recently proposed track-finding pipelines built on Graph Neural Networks (GNNs) achieve high reconstruction accuracy, but their speed must be optimized, especially for online event filtering. Like other deep learning workloads, both the training and inference of particle tracking methods can be optimized to take full advantage of GPU parallelism. However, inference for particle reconstruction can also benefit from multicore parallel processing on CPUs. In this context, it is important to explore how the number of CPU cores affects inference speed. Using the multi-threading capabilities of both PyTorch and the Facebook AI Similarity Search (Faiss) library, combined with the weakly connected components algorithm, results in lower latency for the inference pipeline. This GNN-based tracking pipeline is evaluated on multi-core Intel Xeon Gold 6148 (Skylake) and Intel Xeon 8268 (Cascade Lake) CPUs. Computational times are measured and compared across a range of cores per task. The experiments show that multi-core parallel execution outperforms sequential execution.
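
The CPU-side approach described above can be outlined roughly as follows. This is a minimal illustrative sketch under stated assumptions, not the Exa.TrkX implementation: the embedding array, the neighbor count k, and num_threads are hypothetical placeholders, and the GNN edge-scoring stage of the real pipeline is omitted.

    # Minimal sketch (illustrative only): configure CPU threading for PyTorch and
    # Faiss, build a k-nearest-neighbor graph from hit embeddings, and group hits
    # into track candidates with the weakly connected components algorithm.
    # `embeddings`, `k`, and `num_threads` are hypothetical placeholders.
    import numpy as np
    import torch
    import faiss
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    num_threads = 8                          # cores assigned to the inference task
    torch.set_num_threads(num_threads)       # intra-op parallelism for PyTorch kernels
    faiss.omp_set_num_threads(num_threads)   # OpenMP threads used by Faiss search

    k = 8                                                      # neighbors per hit
    embeddings = np.random.rand(100000, 8).astype('float32')   # stand-in for learned hit embeddings
    n, d = embeddings.shape

    index = faiss.IndexFlatL2(d)             # exact L2 search; Faiss parallelizes over queries
    index.add(embeddings)
    _, neighbors = index.search(embeddings, k + 1)   # first column is the hit itself

    # Build a sparse graph from the k-NN edges and label weakly connected components;
    # each component is treated as one track candidate.
    src = np.repeat(np.arange(n), k)
    dst = neighbors[:, 1:].reshape(-1)
    graph = coo_matrix((np.ones(len(src)), (src, dst)), shape=(n, n))
    n_tracks, labels = connected_components(graph, directed=True, connection='weak')

In a sketch like this, the threading knobs are the main point: PyTorch's intra-op thread pool and Faiss's OpenMP thread count can be varied together with the number of cores allocated per task to measure how inference latency scales.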

Presenters

  • Alina Lazar

    Youngstown State University

Authors

  • Alina Lazar

    Youngstown State University