RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior

ORAL

Abstract

Flow-based generative models have become an important class of unsupervised learning approaches. In this work, we incorporate the key ideas of the renormalization group (RG) and sparse prior distributions to design a hierarchical flow-based generative model, called RG-Flow, which separates the information in an image by scale, with disentangled representations at each scale. We demonstrate our method mainly on the CelebA dataset and show that the disentangled representations at different scales enable semantic manipulation and style mixing of images. To visualize the latent representations, we introduce receptive fields for flow-based models and find that the receptive fields learned by RG-Flow are similar to those of convolutional neural networks. In addition, we replace the widely adopted Gaussian prior distribution with sparse prior distributions to further enhance the disentanglement of representations. From a theoretical perspective, the proposed method has O(log L) complexity for image inpainting, compared to the O(L²) complexity of previous flow-based models.
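The two ingredients named in the abstract can be illustrated with a toy sketch (this is not the authors' implementation): an RG-like coarse-graining that peels off detail latents scale by scale, yielding O(log L) scales for an L×L image, and a sparse (Laplace) prior log-likelihood in place of a Gaussian one. The function names and the block-averaging coarse-grainer below are illustrative assumptions, not part of RG-Flow itself.

```python
import numpy as np

def rg_step(x):
    # One RG-like coarse-graining step: 2x2 block average gives the
    # coarse image; the residual holds the fine details at this scale.
    coarse = 0.25 * (x[0::2, 0::2] + x[0::2, 1::2]
                     + x[1::2, 0::2] + x[1::2, 1::2])
    fine = x - np.kron(coarse, np.ones((2, 2)))  # upsample and subtract
    return coarse, fine

def rg_decompose(x):
    """Split an L x L image (L a power of 2) into per-scale detail latents.

    Returns log2(L) detail arrays plus one 1x1 global coarse latent,
    i.e. O(log L) scales in total.
    """
    levels = []
    while x.shape[0] > 1:
        x, fine = rg_step(x)
        levels.append(fine)
    levels.append(x)  # final 1x1 coarse latent
    return levels

def rg_reconstruct(levels):
    # Invert the decomposition: upsample the coarse latent and add
    # back the detail latents, finest scale last.
    x = levels[-1]
    for fine in reversed(levels[:-1]):
        x = np.kron(x, np.ones((2, 2))) + fine
    return x

def laplace_logprob(z, b=1.0):
    # Sparse (Laplace) prior log-density; heavier tails than a Gaussian,
    # which pushes most latent components toward zero.
    z = np.ravel(z)
    return float(np.sum(-np.abs(z) / b - np.log(2.0 * b)))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
latents = rg_decompose(img)          # 3 detail scales + 1 coarse latent
recon = rg_reconstruct(latents)      # exact inverse of the decomposition
```

The decomposition is lossless by construction, which mirrors the invertibility a flow model requires; RG-Flow additionally makes each step a learned bijection rather than a fixed block average.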

Presenters

  • Hong-Ye Hu

    University of California, San Diego

Authors

  • Hong-Ye Hu

    University of California, San Diego

  • Dian Wu

    University of California, San Diego

  • Yi-Zhuang You

    Department of Physics, University of California, San Diego

  • Bruno Olshausen

    Berkeley AI Research, Berkeley

  • Yubei Chen

    Berkeley AI Research, Berkeley