Towards better physics extraction in images via unsupervised custom-loss shift-invariant variational autoencoders
ORAL
Abstract
Recent advances in scanning tunneling and scanning transmission electron microscopies (STM and STEM) have provided a source of large experimental datasets, within which lies key information (e.g., lattice periodicities, order-parameter distributions, repeating structural elements, or microstructures) for the discovery of physics. However, accurate and maximal extraction of patterns and features from such large and complex datasets is non-trivial and requires an appropriate machine learning (ML) approach. Here, we develop a shift-invariant variational autoencoder (sh-VAE) with a customized loss function in an attempt to learn more physically meaningful features. A standard sh-VAE disentangles characteristic repeating features in the images while capturing information about positional shifts in a special latent variable. In this work, we formulate a loss function that penalizes sharp edges in the latent-variable maps (other than the special shift variable), maximizing smoothness on the length scale of the atomic lattice, consistent with the expected physical behavior. This custom loss is then combined with the ELBO loss via a user-defined preference parameter, and the weighted total loss is minimized during model training. The approach is demonstrated on several 2D STEM experimental datasets, including graphene, BiFeO3, and NiO-LSMO systems, and the results are compared with vanilla VAE and standard sh-VAE models.
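The weighted objective described in the abstract could be sketched as below. This is a minimal illustration only: the use of a total-variation-style penalty for edge smoothness, the weight `alpha` as the user preference parameter, and all function names are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def total_variation(latent_map):
    """Sum of absolute differences between neighboring pixels of a 2D
    latent-variable map; large values indicate sharp edges."""
    dy = np.abs(np.diff(latent_map, axis=0)).sum()
    dx = np.abs(np.diff(latent_map, axis=1)).sum()
    return dx + dy

def weighted_total_loss(elbo_loss, latent_maps, alpha=0.1):
    """Combine the standard ELBO loss with a smoothness penalty over the
    latent maps (excluding the special shift variable); alpha plays the
    role of the user preference parameter (name hypothetical)."""
    smoothness_penalty = sum(total_variation(m) for m in latent_maps)
    return elbo_loss + alpha * smoothness_penalty
```

With `alpha = 0` the objective reduces to the plain ELBO, so the preference parameter interpolates between the standard sh-VAE and the smoothness-regularized variant.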
Presenters
-
Arpan Biswas
Oak Ridge National Lab
Authors
-
Arpan Biswas
Oak Ridge National Lab
-
Sergei V Kalinin
University of Tennessee, Knoxville
-
Maxim Ziatdinov
Oak Ridge National Lab