Generative Latent Diffusion Model for Stochastic Inflow Turbulence Synthesis
ORAL
Abstract
Accurate inflow conditions are essential for eddy-resolving simulations. Traditional approaches such as the recycling method are effective but demand extensive computational resources due to their high-fidelity nature. Synthetic approaches, though computationally efficient, often fail to capture the intricacies of real turbulence, requiring larger computational domains for an accurate physical representation. Recently, deep learning methods have emerged as competitive alternatives. However, deterministic autoregressive models such as Long Short-Term Memory (LSTM) networks and Transformers struggle with long-term predictions and with the inherent stochasticity of turbulence. Generative models such as Generative Adversarial Networks (GANs) are promising but are difficult to train and lack the ability to generalize across diverse flow conditions and meshes. To address these limitations, we propose a novel generative learning approach that leverages the strengths of both conditional neural fields (CNFs) and latent diffusion models (LDMs). By encoding spatiotemporal features into a hidden space using a CNF and generating new samples with an LDM, we develop a robust, mesh-independent inlet turbulence generator that generalizes over a wide range of Reynolds numbers. This method offers an efficient and accurate solution for synthesizing inflow turbulence in eddy-resolving simulations.
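The two-stage pipeline the abstract describes can be illustrated with a minimal sketch: a conditional neural field decodes a latent code at arbitrary query coordinates (which is what makes the generator mesh-independent), while a latent diffusion sampler produces new latent codes from Gaussian noise. All weights, dimensions, and the noise predictor below are hypothetical placeholders, not the authors' trained model; the reverse-diffusion update follows the standard DDPM form.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: conditional neural field (toy) -------------------------
# A CNF decodes a latent code z, together with a query coordinate, into
# a velocity value, so the output can be evaluated on ANY mesh.
# The weights here are random placeholders standing in for a trained model.
D_LAT, D_HID = 8, 32
W1 = rng.normal(size=(D_LAT + 1, D_HID)) / np.sqrt(D_LAT + 1)
W2 = rng.normal(size=(D_HID, 1)) / np.sqrt(D_HID)

def cnf_decode(z, coords):
    """Evaluate the neural field at arbitrary 1-D coordinates (mesh-free)."""
    inp = np.concatenate([np.tile(z, (len(coords), 1)),
                          coords[:, None]], axis=1)
    return np.tanh(inp @ W1) @ W2  # (n_points, 1) velocity samples

# --- Stage 2: latent diffusion sampling (toy DDPM) -------------------
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def eps_model(z_t, t):
    """Placeholder noise predictor; a trained network in practice."""
    return 0.1 * z_t

def sample_latent():
    z = rng.normal(size=D_LAT)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        eps = eps_model(z, t)
        # standard DDPM reverse-step mean
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add stochastic noise except at the final step
            z += np.sqrt(betas[t]) * rng.normal(size=D_LAT)
    return z

# Generate a new inflow sample on an arbitrary 1-D mesh of 17 points.
mesh = np.linspace(0.0, 1.0, 17)
u = cnf_decode(sample_latent(), mesh)
print(u.shape)
```

Because the CNF is queried pointwise, the same latent sample can be decoded on any mesh resolution, which is the property the abstract credits for mesh independence.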
Presenters
- Meet H Parikh, University of Notre Dame
Authors
- Xinyang Liu, University of Notre Dame
- Meet H Parikh, University of Notre Dame
- Pan Du, University of Notre Dame
- Xiantao Fan, University of Notre Dame
- Jian-Xun Wang, University of Notre Dame