Diffusion-model-based emulator for synthetic cosmological structure formation
POSTER
Abstract
Deep learning generative models have demonstrated great strength in high-quality synthetic image generation and a variety of scientific tasks. Diffusion models learn a target distribution via a forward Markov process that gradually adds Gaussian noise to a clean sample; the Markov chain is then reversed by a denoising process in which a neural network estimates the noise at each step. In this study, we generate three-dimensional cosmological dark matter simulations with the particle-mesh (PM) method across a range of random seeds and extract density fields using the cloud-in-cell (CIC) method. We use this dataset to train both unconditional and conditional diffusion models on simulation snapshots taken at a chosen redshift. We then compare physical metrics of the generated fields, such as the matter power spectrum and density PDFs, against the ground truth to verify and benchmark the fidelity of the method. Efficient, high-quality generation of synthetic cosmological simulations has significant utility in structure formation studies, for example in covariance estimation and parameter inference. For conditional diffusion models, the generation of cosmological density fields can be modulated by any chosen conditioning parameter, such as redshift, the dark matter density parameter, or other cosmological parameters, making the approach a highly flexible tool for cosmological emulation.
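The power-spectrum comparison mentioned above can be sketched as follows. This is a minimal, illustrative estimator (not the authors' code): given a 3D overdensity field on a periodic grid, it spherically averages the squared Fourier amplitudes into shells of |k|. The function name, binning scheme, and normalization convention are assumptions for illustration.

```python
import numpy as np

def power_spectrum(delta, box_size=1.0, n_bins=16):
    """Estimate the isotropic power spectrum P(k) of a 3D overdensity field.

    delta    : 3D array of overdensity values (rho / rho_bar - 1).
    box_size : physical side length of the periodic box (arbitrary units).
    Returns (k_centers, P_k), the shell-averaged power in n_bins bins of |k|.
    """
    n = delta.shape[0]
    # Remove the mean so the k=0 (DC) mode carries no power.
    delta = delta - delta.mean()

    # FFT and squared amplitudes, normalized so P(k) has units of volume:
    # P(k) = V * |delta_k / N_cells|^2 with N_cells = n^3.
    delta_k = np.fft.fftn(delta)
    power = np.abs(delta_k) ** 2 * box_size**3 / n**6

    # Build the |k| grid from the FFT frequencies of the box.
    k_1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k_1d, k_1d, k_1d, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

    # Average the power in spherical shells of |k|.
    bins = np.linspace(0.0, k_mag.max(), n_bins + 1)
    which = (np.digitize(k_mag.ravel(), bins) - 1).clip(0, n_bins - 1)
    P_k = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    np.add.at(P_k, which, power.ravel())
    np.add.at(counts, which, 1.0)
    P_k = np.where(counts > 0, P_k / np.maximum(counts, 1.0), 0.0)
    k_centers = 0.5 * (bins[:-1] + bins[1:])
    return k_centers, P_k
```

In a benchmark like the one described, this estimator would be applied to both the simulated (ground-truth) density fields and the diffusion-model samples, and the two P(k) curves compared bin by bin.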
Presenters
-
Xiaofeng Dong
The University of Chicago
Authors
-
Junbo Peng
Emory University
-
Zhaodi Pan
Argonne National Laboratory
-
Xiaofeng Dong
The University of Chicago
-
Nesar Ramachandra
Argonne National Laboratory
-
Salman Habib
Argonne National Laboratory
-
Katrin Heitmann
Argonne National Laboratory