Interpretability of the latent space of autoencoders
ORAL
Abstract
Autoencoders are machine-learning methods that enable a reduced-order representation of data. They consist of an encoder, which compresses the data into a latent space, and a decoder, which decompresses the data back to the original space. If only linear operations are performed during the encoding and decoding phases, an autoencoder can learn the principal components of the data. On the other hand, if nonlinear activation functions are employed, an autoencoder learns a nonlinear model of the data in the latent space. The interpretability of the latent space, however, is not yet fully established. In this work, we physically interpret the latent space with simple tools from differential geometry. The interpretation is applied to canonical turbulent flows, i.e., the Kolmogorov flow and the minimal flow unit. The results show that the autoencoder learns the optimal submanifold in which the reduced-order dynamics are well represented. This work opens opportunities for extracting physical insight from the latent space and for nonlinear model reduction.
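To make the encoder-decoder structure described in the abstract concrete, the following is a minimal sketch of a fully connected autoencoder in PyTorch. The input dimension, latent dimension, layer widths, and tanh activations are illustrative assumptions, not the architecture used in this work; the key point is that removing the nonlinear activations reduces the network to a linear map whose optimum spans the same subspace as the leading principal components.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Hypothetical fully connected autoencoder (dimensions are assumptions)."""

    def __init__(self, n_input=128, n_latent=8):
        super().__init__()
        # Encoder: compresses the data into the latent space.
        self.encoder = nn.Sequential(
            nn.Linear(n_input, 64), nn.Tanh(),
            nn.Linear(64, n_latent),
        )
        # Decoder: decompresses the latent variables back to the original space.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.Tanh(),
            nn.Linear(64, n_input),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent representation
        return self.decoder(z), z      # reconstruction and latent variables

# Toy usage: train on random snapshots by minimising the reconstruction error.
model = Autoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 128)              # 256 hypothetical flow snapshots
for _ in range(100):
    x_hat, _ = model(x)
    loss = nn.functional.mse_loss(x_hat, x)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```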
Presenters
-
Luca Magri
Department of Aeronautics, Imperial College London; The Alan Turing Institute
Authors
-
Luca Magri
Department of Aeronautics, Imperial College London; The Alan Turing Institute
-
Nguyen Anh Khoa Doan
Delft University of Technology