Symmetry-Aware Autoencoders for Model Reduction

ORAL

Abstract

Nonlinear principal component analysis (nlPCA) via autoencoders has attracted attention in the dynamical systems community due to its higher compression rate compared with linear principal component analysis (PCA). However, both model reduction methods suffer an increase in the dimensionality of the latent space when applied to datasets containing globally invariant samples, i.e., samples related to one another by symmetries. In this study, we introduce a novel machine learning embedding that uses spatial transformer networks and Siamese networks to account for continuous and discrete symmetries, respectively. The embedding can be employed with both linear and nonlinear methods, which we term symmetry-aware PCA and symmetry-aware nlPCA. We apply the proposed framework to datasets generated by the viscous Burgers' equation, the simulation of flow through a stepped diffuser, and Kolmogorov flow, showcasing its capabilities for cases exhibiting only continuous symmetries, only discrete symmetries, or a combination of both, respectively.
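To make the embedding concrete, below is a minimal, illustrative sketch of the idea in PyTorch; it is not the authors' implementation. A localization network predicts a continuous translation per sample (the spatial-transformer branch), the input is shifted into a canonical frame via the Fourier shift theorem, and a shared-weight (Siamese) encoder processes both the sample and its mirror image to pick a canonical representative before compression. All names here (SymmetryAwareAE, shift_periodic, the toy canonicalization rule) are our own assumptions for illustration.

```python
# Minimal sketch of a symmetry-aware autoencoder (illustrative, not the paper's code).
import torch
import torch.nn as nn


def shift_periodic(u, s):
    """Shift periodic 1-D signals u (batch, n) by a continuous amount s (batch,)
    grid points, using the Fourier shift theorem."""
    n = u.shape[-1]
    freqs = torch.fft.rfftfreq(n, d=1.0, device=u.device)            # (n//2+1,)
    phase = torch.exp(-2j * torch.pi * freqs[None, :] * s[:, None])  # per-mode phase
    return torch.fft.irfft(torch.fft.rfft(u, dim=-1) * phase, n=n, dim=-1)


class SymmetryAwareAE(nn.Module):
    """Autoencoder that factors out a continuous translation (spatial-transformer
    branch) and a discrete reflection (Siamese branch) before compression."""

    def __init__(self, n, latent_dim=2):
        super().__init__()
        # Localization network: predicts each sample's translation.
        self.loc = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, 1))
        # Shared (Siamese) encoder and decoder acting in the symmetry-reduced frame.
        self.enc = nn.Sequential(nn.Linear(n, 64), nn.Tanh(), nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, n))

    def forward(self, u):
        # Continuous symmetry: shift every sample into a canonical frame.
        s = self.loc(u).squeeze(-1)              # predicted shift per sample
        u_c = shift_periodic(u, -s)              # align to the canonical position

        # Discrete symmetry: encode the sample and its mirror with shared weights
        # (Siamese), then keep one canonical representative per sample.
        z_pos = self.enc(u_c)
        z_neg = self.enc(torch.flip(u_c, dims=[-1]))
        use_pos = z_pos[:, :1] >= z_neg[:, :1]   # toy canonicalization rule (assumed)
        z = torch.where(use_pos, z_pos, z_neg)

        # Decode in the reduced frame, then undo the reflection and the shift.
        r = self.dec(z)
        r = torch.where(use_pos, r, torch.flip(r, dims=[-1]))
        return shift_periodic(r, s)


# Usage: reconstruct a batch of copies of one periodic waveform.
n = 128
x = torch.linspace(0, 2 * torch.pi, n)
u = torch.sin(x)[None, :].repeat(8, 1)
model = SymmetryAwareAE(n)
loss = nn.functional.mse_loss(model(u), u)
loss.backward()
```

Because the shift and reflection are removed before encoding and reapplied after decoding, the latent space only needs to represent the symmetry-reduced dynamics, which is what allows the lower latent dimensionality claimed in the abstract.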

Publication: Symmetry-Aware Autoencoders for Model Reduction

Presenters

  • Simon Kneer

Authors

  • Simon Kneer

  • Taraneh Sayadi

    Sorbonne University

  • Denis Sipp

    ONERA

  • Peter J Schmid

    Imperial College London

  • Georgios Rigas

    Imperial College London