Hybrid Auto-Encoder with SVD-like Convergence
ORAL
Abstract
Computational fluid dynamics involves solving large dynamical systems with millions of degrees of freedom, resulting in significant computational overhead. Linear dimensionality reduction techniques like Proper Orthogonal Decomposition (POD) are often used to create efficient representations of large-scale systems, aiding in predicting their temporal dynamics when integrated with a dynamic model in reduced space. Recent deep learning advances, such as autoencoders (AE), capture intrinsic nonlinear features for better compression and retrieval of high-fidelity information, outperforming POD at low ranks but struggling to provide rapid error convergence at higher ranks. This study aims to combine POD's linearity with AE's nonlinear feature extraction to achieve superior accuracy, robustness, and convergence. Unlike hybrid approaches that simply combine POD and AE, our method introduces a learnable weighting parameter to balance contributions from both techniques, resulting in an adaptive weighted hybrid approach. We demonstrate the efficacy of this hybrid approach on several PDE datasets in fluid dynamics, including 1D Viscous Burgers, 1D Kuramoto-Sivashinsky, and 2D/3D Forced Isotropic Turbulence, achieving significant accuracy improvements over either individual method and reducing errors to machine precision at higher ranks. This work paves the way for high-quality reduced-order models, where model accuracy depends on efficient data compression and retrieval.
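Below is a minimal sketch of the adaptive weighted hybrid idea described in the abstract, assuming a PyTorch-style implementation in which a fixed POD reconstruction and an autoencoder reconstruction are blended by a learnable scalar weight. All names (e.g., WeightedHybridROM, alpha) and architectural details are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class WeightedHybridROM(nn.Module):
    """Hypothetical sketch of an adaptive weighted POD/AE hybrid.

    The POD basis `Phi` (n_full x r) is precomputed from snapshot data;
    an autoencoder provides a nonlinear reconstruction, and a learnable
    scalar `alpha` balances the two contributions.
    """
    def __init__(self, pod_basis: torch.Tensor, latent_dim: int, hidden: int = 256):
        super().__init__()
        n_full, _ = pod_basis.shape
        self.register_buffer("Phi", pod_basis)  # fixed linear POD modes
        self.encoder = nn.Sequential(
            nn.Linear(n_full, hidden), nn.GELU(), nn.Linear(hidden, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.GELU(), nn.Linear(hidden, n_full)
        )
        # Learnable weight balancing the linear (POD) and nonlinear (AE) parts.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Linear reconstruction: project onto POD modes, then lift back.
        x_pod = (x @ self.Phi) @ self.Phi.T
        # Nonlinear reconstruction through the autoencoder.
        x_ae = self.decoder(self.encoder(x))
        # Weighted blend of the two reconstructions.
        return self.alpha * x_pod + (1.0 - self.alpha) * x_ae


# Usage sketch: snapshot matrix X has shape (n_snapshots, n_full).
# X = torch.randn(500, 1024)
# Phi = torch.linalg.svd(X.T, full_matrices=False).U[:, :32]
# model = WeightedHybridROM(Phi, latent_dim=32)
# loss = nn.functional.mse_loss(model(X), X)
```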
Presenters
- Nithin Somasekharan (Rensselaer Polytechnic Institute)
Authors
- Nithin Somasekharan (Rensselaer Polytechnic Institute)
- Shaowu Pan (Rensselaer Polytechnic Institute)