Automatic Generation of Magnification Maps for Lensed Quasars and Supernovae Using Deep Learning
ORAL
Abstract
Better modeling of the microlensing variability in light curves of lensed quasars and supernovae enables more accurate measurements of time delays and the Hubble constant, and improves our understanding of quasar structure and the stellar mass distributions in distant galaxies. In the era of Rubin LSST, there will be thousands of events that need microlensing modeling. Traditional modeling approaches use computationally intensive ray-tracing methods to generate microlensing magnification maps. While libraries of precomputed maps now exist, they only sample the parameter space on a fixed grid, and their data volume is challenging to handle during modeling. An efficient, automated approach will be needed to handle the large volume of data expected from surveys like LSST. In this project, we have trained an autoencoder (a type of deep-learning model) on precomputed magnification maps to reduce their dimensionality and form a latent-space representation while optimizing for acceptable reconstruction of the maps. We then use a convolutional neural network (CNN) to connect the lensing-galaxy parameters to the latent-space representation of the maps. Given the trained autoencoder and CNN, we can then generate a map for a given set of lensing-galaxy parameters in less than a second. This approach will significantly enhance the treatment of microlensing variability in the analysis of light curves of lensed quasars and supernovae.
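The inference pipeline described above (lensing-galaxy parameters → latent vector → reconstructed magnification map) can be sketched schematically as follows. This is a minimal, hypothetical illustration only: the trained CNN and decoder are stood in for by random linear layers, and all shapes, parameter names (convergence, shear, smooth-matter fraction), and the `generate_map` function are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

MAP_SIZE = 64     # pixels per side of a (downsampled) magnification map -- illustrative
LATENT_DIM = 32   # size of the autoencoder's latent space -- illustrative
N_PARAMS = 3      # e.g. convergence kappa, shear gamma, smooth-matter fraction s

# Stand-in for the trained CNN that maps lensing parameters to latent vectors.
W_param = rng.normal(size=(LATENT_DIM, N_PARAMS))

# Stand-in for the trained decoder half of the autoencoder.
W_dec = rng.normal(size=(MAP_SIZE * MAP_SIZE, LATENT_DIM))

def generate_map(params):
    """Generate a magnification map for a given set of lensing parameters."""
    z = W_param @ np.asarray(params)         # parameters -> latent vector (CNN)
    flat = W_dec @ z                         # latent vector -> flattened map (decoder)
    return flat.reshape(MAP_SIZE, MAP_SIZE)  # reshape to a 2-D map

# Generating a map is just two matrix products, hence sub-second generation.
mag_map = generate_map([0.5, 0.4, 0.7])  # illustrative (kappa, gamma, s) values
print(mag_map.shape)  # (64, 64)
```

The key design point is that the expensive ray tracing happens only once, to build the training set; afterward, map generation reduces to a cheap forward pass through the two networks.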
Presenters
-
Somayeh Khakpash
Rutgers University
Authors
-
Somayeh Khakpash
Rutgers University
-
Charles Keeton
Rutgers, The State University of New Jersey
-
Federica B Bianco
University of Delaware
-
Gregory Dobler
University of Delaware
-
Georgios Vernardos
American Museum of Natural History / City University of New York