A Multi-Modal Implicit Neural Representation Method for Dimension Reduction of Spatiotemporal Flow Data

ORAL

Abstract

Estimating large-scale flow fields in modern engineering can be computationally prohibitive. Reduced Order Models (ROMs) accelerate flow simulations by compressing data into lower-dimensional spaces. Traditional dimensionality reduction methods, such as Proper Orthogonal Decomposition (POD), Dynamic Mode Decomposition (DMD), and Spectral POD (SPOD), have proven effective but struggle with convection-dominated chaotic flows and suffer from instability. Machine learning approaches such as Convolutional Auto-Encoders (CAEs) offer nonlinear encoding capabilities but fall short in predicting coherent flow structures and are poorly suited to complex irregular domains. Recently, Implicit Neural Representation (INR) has shown promise, achieving high accuracy and compression ratios, but it still struggles to predict chaotic and turbulent flows. To address these issues, we propose a novel multi-modal INR-based data compression model. The model constructs multiple essential modes to cover the flow space comprehensively and adaptively switches among them according to the current time step, improving prediction accuracy for previously unseen flow features. We compare our model with state-of-the-art encoders, including POD, CAE, and INR, across several cases to demonstrate its effectiveness.
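The abstract does not specify implementation details, but the idea of blending several implicit modes with time-dependent weights can be illustrated with a minimal sketch. The following PyTorch snippet is a hypothetical illustration, not the authors' implementation: the class names, layer sizes, and the softmax gating over a normalized time step are all assumptions introduced for clarity.

```python
# Hypothetical sketch of a multi-modal INR decoder: several coordinate-MLP
# "modes" are blended by time-dependent gating weights. All names and sizes
# are illustrative assumptions, not the method described in the abstract.
import torch
import torch.nn as nn


class ModeMLP(nn.Module):
    """One implicit mode: maps spatial coordinates (x, y) to a flow quantity."""

    def __init__(self, in_dim=2, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        return self.net(coords)


class MultiModalINR(nn.Module):
    """Blend several implicit modes with weights predicted from the time step."""

    def __init__(self, n_modes=4, in_dim=2, out_dim=1):
        super().__init__()
        self.modes = nn.ModuleList(
            [ModeMLP(in_dim, out_dim=out_dim) for _ in range(n_modes)]
        )
        # Small gating network: normalized time step -> softmax weights over modes.
        self.gate = nn.Sequential(nn.Linear(1, 32), nn.SiLU(), nn.Linear(32, n_modes))

    def forward(self, coords, t):
        # coords: (N, in_dim) query points; t: (1, 1) normalized time step
        weights = torch.softmax(self.gate(t), dim=-1)            # (1, n_modes)
        outputs = torch.stack([m(coords) for m in self.modes])   # (n_modes, N, out_dim)
        return torch.einsum("m,mno->no", weights.squeeze(0), outputs)


if __name__ == "__main__":
    model = MultiModalINR(n_modes=4)
    xy = torch.rand(1024, 2)        # spatial query points in the flow domain
    t = torch.tensor([[0.3]])       # normalized time step
    u = model(xy, t)                # reconstructed field at those points
    print(u.shape)                  # torch.Size([1024, 1])
```

Under these assumptions, the gating network lets the representation emphasize different modes at different time steps, which is one plausible way to realize the adaptive mode switching described above.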

Presenters

  • Pan Du

    University of Notre Dame

Authors

  • Pan Du

    University of Notre Dame

  • Jian-Xun Wang

    University of Notre Dame