Invertible Neural Autoencoder for Low-Order Modeling of Homogeneous Flow Structures
ORAL
Abstract
Invertible neural architectures have attracted attention for their compactness, interpretability, and information-preserving properties. We propose FINE (Fourier-Invertible Neural Encoder), an invertible neural autoencoder framework that combines invertible monotonic activation functions with spectrally constrained, invertible filters, and can be extended with invertible ResNet blocks. The architecture targets low-order modeling of homogeneous flow structures, focusing on one-dimensional nonlinear wave systems with translation symmetry and phase-rich interactions. Dimensionality is preserved across all layers except for a single latent-space spectral truncation step, yielding reduced-order representations that retain shift equivariance and spectral interpretability. FINE outperforms classical linear techniques such as the Discrete Fourier Transform (DFT) and Proper Orthogonal Decomposition (POD), and achieves lower reconstruction error than convolutional autoencoders while using significantly fewer parameters and offering a physically structured latent space. These findings suggest that invertible neural autoencoders with spectral priors offer a compelling approach to interpretable, symmetry-aware dimensionality reduction for fluid dynamics datasets.
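The core mechanism described above, exactly invertible layers with a single lossy spectral truncation in the latent space, can be illustrated with a minimal one-dimensional sketch. This is not the authors' implementation: the filter, the monotonic activation, and the truncation width below are hypothetical choices made only to show that every step except the mode truncation admits an exact inverse.

```python
import numpy as np

def activation(x, a=0.5):
    # Monotonic activation: derivative 1 + a*(1 - tanh^2(x)) > 0, so invertible.
    return x + a * np.tanh(x)

def activation_inv(y, a=0.5, iters=50):
    # Invert the monotonic activation with Newton's method.
    x = y.copy()
    for _ in range(iters):
        f = x + a * np.tanh(x) - y
        df = 1.0 + a * (1.0 - np.tanh(x) ** 2)
        x -= f / df
    return x

def spectral_filter(x, h_hat):
    # Circular convolution in Fourier space; nonzero h_hat => invertible.
    return np.real(np.fft.ifft(np.fft.fft(x) * h_hat))

def spectral_filter_inv(y, h_hat):
    # Exact inverse: divide by the filter's Fourier coefficients.
    return np.real(np.fft.ifft(np.fft.fft(y) / h_hat))

n = 64
# Hypothetical real filter kernel whose spectrum is bounded away from zero:
# |1 + 0.4*exp(-i*omega)| >= 0.6 for all frequencies.
h = np.zeros(n)
h[0], h[1] = 1.0, 0.4
h_hat = np.fft.fft(h)

t = np.arange(n) / n
x = np.sin(2 * np.pi * t) + 0.1 * np.sin(6 * np.pi * t)

# Encode: invertible spectral filter followed by monotonic activation.
z = activation(spectral_filter(x, h_hat))

# Latent truncation -- the only lossy step: keep k lowest Fourier modes
# (and their conjugate pairs, so the latent stays real and shift-equivariant).
k = 8
mask = np.zeros(n)
mask[:k] = 1.0
mask[-(k - 1):] = 1.0
z_trunc = np.real(np.fft.ifft(np.fft.fft(z) * mask))

# Decode: exact inverses applied in reverse order.
x_rec = spectral_filter_inv(activation_inv(z_trunc), h_hat)
err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```

Without the truncation mask the round trip is exact to machine precision; all reconstruction error is attributable to the discarded high-frequency modes, which is what makes the latent space spectrally interpretable.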
Publication: https://arxiv.org/abs/2505.15329
Presenters
- Anqiao Ouyang (Mount Boucherie Secondary School)
Authors
- Anqiao Ouyang (Mount Boucherie Secondary School)
- Hongyi Ke (San Diego State University)
- Qi Wang (San Diego State University)