LAT-NET++: Compressing Fluid Simulations using Deep Neural Networks
ORAL
Abstract
We present extensions and improvements of our previous work, Lat-Net [0], a deep-learning-based method for emulating Lattice Boltzmann fluid simulations with reduced computation and memory usage. Our first improvement is to add active learning to the training process, which allows intelligent sampling of the training set. Second, we decouple Lat-Net from the Lattice Boltzmann Method, allowing our approach to be used in conjunction with other flow solvers. Third, we conduct rigorous tests of our method by examining various statistical properties of the predicted flow. In addition, we present a method to optimize parameters of large eddy simulations, such as the Smagorinsky constant. Following a structure similar to Lat-Net, we treat these constants as trainable parameters and optimize them with gradient descent. This approach can be viewed either as heavily constraining Lat-Net with the underlying physics of the flow solver or as a data-driven method for optimizing the parameters of sub-grid-scale models.
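The parameter optimization described above amounts to making the Smagorinsky constant a differentiable degree of freedom. The sketch below is a minimal illustration of that idea, not the authors' implementation: it fits C_s by gradient descent against reference eddy-viscosity values using PyTorch autograd, with a synthetic strain-rate field and an assumed target constant of 0.17 as stand-ins; in the setting described in the abstract, gradients would instead be taken through the flow solver itself.

```python
# Minimal sketch (not the authors' code): treat the Smagorinsky constant C_s
# as a trainable parameter and fit it with gradient descent. The synthetic
# strain-rate samples and the "true" constant of 0.17 are illustrative
# stand-ins for data extracted from a resolved reference simulation.
import torch

def eddy_viscosity(c_s, delta, strain_rate_mag):
    # Smagorinsky sub-grid-scale model: nu_t = (C_s * Delta)^2 * |S|
    return (c_s * delta) ** 2 * strain_rate_mag

delta = 1.0 / 64                        # filter width (assumed grid spacing)
strain = 50.0 * torch.rand(4096)        # synthetic strain-rate magnitudes |S|

# Reference eddy viscosities, standing in for resolved-simulation data.
nu_ref = eddy_viscosity(torch.tensor(0.17), delta, strain)

c_s = torch.tensor(0.3, requires_grad=True)   # initial guess for C_s
opt = torch.optim.Adam([c_s], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = torch.mean((eddy_viscosity(c_s, delta, strain) - nu_ref) ** 2)
    loss.backward()                     # d(loss)/d(C_s) via autograd
    opt.step()

print(f"recovered C_s ~= {c_s.item():.3f}")   # should land near 0.17
```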
Presenters
- Oliver Hennigh, Los Alamos National Laboratory
Authors
- Oliver Hennigh, Los Alamos National Laboratory
- Michael Chertkov, Los Alamos National Laboratory