A foundation model for LArTPC events
ORAL
Abstract
Foundation models have become prominent in domains such as natural language processing (GPT, Llama, Gemini) and vision (CLIP, DALL-E). Trained on vast amounts of data using self-supervision, these models can then be adapted to a wide range of downstream tasks. The liquid argon time projection chamber (LArTPC) is a common detector technology in accelerator-based neutrino experiments. These experiments generate enormous quantities of globally sparse but locally dense data, capturing particle trajectories in 3D space. We leverage self-supervised representation learning to pre-train a point-cloud foundation model on a large sample of simulated LArTPC events. By adapting an existing point-cloud foundation model architecture to accommodate the unique topologies of LArTPC events, we demonstrate that our model learns semantically meaningful and computationally efficient latent representations that can be adapted for downstream data-analysis tasks such as semantic segmentation.
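The publication is still in preparation, so the abstract does not spell out the exact pre-training objective. As a purely illustrative sketch of masked self-supervision on point-cloud data (one common family of such objectives, in the spirit of MAE-style methods, not necessarily the authors' method), the toy PyTorch model below hides the charge feature of a random subset of points and learns to predict it from the visible context. All class names, shapes, and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class MaskedPointModel(nn.Module):
    """Toy masked point modeling: hide the charge feature of a random subset
    of points and train a small Transformer to predict it from the visible
    points plus the (always-visible) 3D positions."""
    def __init__(self, d_model=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(4, d_model)        # (x, y, z, charge) -> token
        self.pos = nn.Linear(3, d_model)          # positional embedding from (x, y, z)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)         # predict charge per point

    def forward(self, pts):                       # pts: (B, N, 4)
        B, N, _ = pts.shape
        mask = torch.rand(B, N, device=pts.device) < self.mask_ratio
        z = self.embed(pts)
        # Masked points lose their content but keep their position,
        # so the model knows *where* to predict, not *what*.
        z = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(z), z)
        z = z + self.pos(pts[..., :3])
        z = self.backbone(z)                      # masked tokens attend to visible context
        pred = self.head(z).squeeze(-1)           # (B, N)
        # Reconstruction loss only on the hidden points.
        return ((pred - pts[..., 3]) ** 2)[mask].mean()

# Toy usage: two fake "events" of 256 points each.
model = MaskedPointModel()
events = torch.randn(2, 256, 4)
loss = model(events)
loss.backward()
print(f"pre-training loss: {loss.item():.4f}")
```

A backbone pre-trained this way could then be fine-tuned with a lightweight per-point classification head for downstream tasks such as the semantic segmentation mentioned in the abstract.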
Publication: A foundation model for LArTPC events, in preparation
Presenters
- Sam Young (Stanford University)

Authors
- Sam Young (Stanford University)
- Kazuhiro Terao (SLAC)