Conditioning deep learning on PDE parameters to generalise emulation of stochastic and chaotic dynamics
POSTER
Abstract
We introduce a probabilistic deep learning emulator for modelling stochastic and chaotic dynamical systems, conditioned on parameter values from the governing PDEs. Our approach involves pre-training on a fixed parameter domain and fine-tuning on a diverse but crucially smaller dataset. This enables effective generalisation across a range of parameter values, maintaining robustness at interpolated values not seen during training. By incorporating local attention mechanisms, the network efficiently handles varying domain sizes, outperforming convolutional kernels. This allows for computationally efficient pre-training on smaller domains, requiring only limited data on larger domains to generalise to more turbulent regimes. We demonstrate our model's capabilities on quasi-geostrophic turbulence and the Kuramoto-Sivashinsky equation. The probabilistic nature of our model, along with significant computational speed-ups over traditional numerical integration, facilitates the efficient exploration of phase space and the statistical study of rare events.
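The two key ingredients described above — conditioning on PDE parameters and local attention that is agnostic to domain size — can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hypothetical NumPy example with made-up random weights, showing why a windowed attention operation applies unchanged to grids of any length, unlike a fixed-size dense layer:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(x, pde_params, window=5):
    """Parameter-conditioned local self-attention over a 1D field.

    x: (n, d) array of field values at n grid points, d channels.
    pde_params: (p,) array of governing-equation parameters.
    Each grid point attends only to a window of neighbours, so the
    same weights apply to any domain size n. Weights here are random
    placeholders, purely for illustration.
    """
    n, d = x.shape
    rng = np.random.default_rng(0)
    # Conditioning: embed the PDE parameters and add to every grid point.
    w_p = rng.standard_normal((pde_params.shape[0], d)) * 0.1
    h = x + pde_params @ w_p
    wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = h @ wq, h @ wk, h @ wv
    out = np.empty_like(x)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        # Attend only within the local window around point i.
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)
        out[i] = softmax(scores) @ v[lo:hi]
    return out
```

Because the attention weights never depend on the total number of grid points, a model pre-trained on a small domain can be applied to (and fine-tuned on) a larger one, which is the property the abstract exploits for efficient pre-training.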
Presenters
-
Ira Jeet Singh Shokar
University of Cambridge
Authors
-
Ira Jeet Singh Shokar
University of Cambridge
-
Peter H Haynes
University of Cambridge
-
Rich R Kerswell
University of Cambridge