Benchmarking AI-evolved cosmological structure formation and expanding dimensions through parallelization frameworks
ORAL
Abstract
Deep learning-based image-to-image translation has recently attracted significant attention as a potentially powerful alternative to cosmological simulations, useful in contexts such as covariance studies, investigations of systematics, and cosmological parameter inference. To investigate various aspects of learning-based cosmological mappings, we choose two approaches for generating cosmological matter fields as datasets: the analytical prescription provided by the Zel'dovich approximation, and the numerical N-body particle-mesh method. A comprehensive set of metrics is considered, including higher-order correlation functions, conservation laws, topological indicators, and the statistical independence of density fields. We find that a U-Net approach performs well on only some of these physical metrics.
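For context, the Zel'dovich approximation displaces particles from their Lagrangian grid positions q to x(q, a) = q + D(a) ψ(q), with the displacement field ψ obtained from the linear overdensity field in Fourier space. The following is a minimal NumPy sketch of that prescription, not the dataset-generation code used in this work; the function and argument names are illustrative.

```python
import numpy as np

def zeldovich_displacement(delta_lin, box_size, growth_factor):
    """Displace a uniform particle grid using the Zel'dovich approximation.

    delta_lin     : (N, N, N) linear overdensity field at the initial epoch
    box_size      : comoving side length of the periodic box
    growth_factor : linear growth factor D(a) at the target epoch
    """
    n = delta_lin.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid division by zero for the k = 0 mode

    delta_k = np.fft.fftn(delta_lin)
    # Displacement field psi = -grad(phi), with nabla^2 phi = delta,
    # i.e. psi(k) = i k / k^2 * delta(k)
    psi = [np.real(np.fft.ifftn(1j * ki / k2 * delta_k)) for ki in (kx, ky, kz)]

    # Lagrangian grid positions q
    q = np.linspace(0.0, box_size, n, endpoint=False)
    qx, qy, qz = np.meshgrid(q, q, q, indexing="ij")

    # Eulerian positions x = q + D(a) * psi(q), wrapped into the periodic box
    x = (qx + growth_factor * psi[0]) % box_size
    y = (qy + growth_factor * psi[1]) % box_size
    z = (qz + growth_factor * psi[2]) % box_size
    return x, y, z
```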
In addition, we develop strategies to expand the available dynamic range of the deep neural network output. While conventional studies often showcase neural network predictions for modest spatial dimensions and particle counts, practical use requires significantly larger box sizes and particle numbers to yield physically relevant predictions for cosmological structure formation. To achieve this, we harness data and model parallelism frameworks during model training, and a split-and-recombine approach when deploying the model on large simulation boxes. We achieve box sizes several times larger than those typically found in the existing literature, opening up the possibility of overcoming a number of computational bottlenecks.
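A minimal sketch of the split-and-recombine idea at deployment time is shown below, assuming a periodic input field and a model that maps a padded sub-box to an output of the same shape. The function, the tile size, and the padding width are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def predict_large_box(model, field, tile=128, pad=16):
    """Split-and-recombine inference on a large periodic box.

    The field is split into sub-boxes of side `tile`, each padded by `pad`
    cells of periodic overlap; the model is applied to every padded sub-box
    and only the central tile^3 region of each prediction is kept.
    """
    n = field.shape[0]
    assert n % tile == 0, "box size must be divisible by the tile size"

    def wrapped(start):
        # Indices along one axis of a padded sub-box, wrapped periodically
        return np.arange(start - pad, start + tile + pad) % n

    out = np.empty_like(field)
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                sub = field[np.ix_(wrapped(i), wrapped(j), wrapped(k))]
                pred = model(sub)
                # Discard the overlap region and keep only the interior
                out[i:i + tile, j:j + tile, k:k + tile] = \
                    pred[pad:pad + tile, pad:pad + tile, pad:pad + tile]
    return out
```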
–
Publication: Full-length journal paper in preparation; previous conference paper: https://arxiv.org/pdf/2112.05681.pdf
Presenters
-
Xiaofeng Dong
The University of Chicago
Authors
-
Xiaofeng Dong
The University of Chicago
-
Nesar Ramachandra
Argonne National Laboratory
-
Azton Wells
Argonne National Laboratory
-
Michael Buehlmann
Argonne National Laboratory
-
Salman Habib
Argonne National Laboratory
-
Katrin Heitmann
Argonne National Laboratory