Learning Gibbs distributions from metastable states

ORAL

Abstract

Algorithms or systems whose equilibrium state is defined by a Gibbs distribution often produce data from a metastable state instead. It is therefore important to develop machine learning methods that can learn the physics of the true model given data from such metastable states. We show that the single-variable conditionals of metastable states of time-reversible Markov chains satisfying a strong metastability condition are, on average, very close to those of the equilibrium distribution. This holds even when the metastable state deviates significantly from the true model in terms of global metrics such as average energy or total variation distance. This property allows us to learn the true model using a conditional-likelihood-based estimator, even when the samples come from a metastable distribution concentrated in a small region of the state space. Explicit examples of such metastable states can be constructed from regions that effectively bottleneck the probability flow and cause poor mixing of the Markov chain. For the specific case of the Ising model, we further show rigorously that data coming from metastable states can be used to learn the parameters of the energy function and to recover the structure of the model.
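For the Ising case, one standard instantiation of a conditional-likelihood estimator is the pseudo-likelihood idea: the conditional of each spin given the rest has the logistic form P(s_i = +1 | s_{-i}) = sigmoid(2(h_i + Σ_j J_ij s_j)), so each row of couplings can be fit by a per-spin logistic fit. The sketch below is a minimal, hypothetical illustration of this point, not the authors' code: the Glauber sampler, coupling values, learning rates, and all function names are assumptions. A strongly coupled ferromagnet started all-up yields samples stuck in one well (a toy metastable state), yet the row-wise conditional fits still target the true couplings.

```python
# Hypothetical sketch: conditional-likelihood (pseudo-likelihood) estimation of
# Ising couplings from metastable-looking samples. All names and parameter
# choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def glauber_samples(J, h, n_samples, n_sweeps=5):
    """Draw spin configurations with Glauber dynamics. The chain starts
    all-up; with strong couplings and few sweeps it stays in that well,
    standing in for a metastable state."""
    n = len(h)
    s = np.ones(n)
    out = np.empty((n_samples, n))
    for k in range(n_samples):
        for _ in range(n_sweeps * n):
            i = rng.integers(n)
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_i=+1 | rest)
            s[i] = 1.0 if rng.random() < p_up else -1.0
        out[k] = s
    return out

def fit_row_pseudolikelihood(S, i, lr=0.05, steps=2000):
    """Maximize the conditional log-likelihood of spin i given the others:
    P(s_i = +1 | s_{-i}) = sigmoid(2(h_i + sum_j J_ij s_j))."""
    X = S.copy()
    X[:, i] = 1.0                      # constant column carries the field h_i
    y = (S[:, i] + 1) / 2              # map spins {-1,+1} -> labels {0,1}
    w = np.zeros(S.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-2.0 * (X @ w)))
        w += lr * 2.0 * X.T @ (y - p) / len(S)  # gradient ascent step
    return w / 2.0                     # logistic weights are 2*J_ij, 2*h_i

# Toy ferromagnet: couplings strong enough that the all-up and all-down
# wells are separated, so short chains never equilibrate between them.
n = 8
J = 1.5 * (np.ones((n, n)) - np.eye(n)) / n
h = np.zeros(n)
S = glauber_samples(J, h, n_samples=2000)

# Row-by-row conditional fits; diagonal entries hold the estimated fields.
J_hat = np.array([fit_row_pseudolikelihood(S, i) for i in range(n)])
print("max parameter error:", np.max(np.abs(J_hat - (J + np.diag(h)))))
```

Because the samples are concentrated in one well, the per-spin fits are noisier than they would be at equilibrium, but they remain consistent with the true conditionals, which is the effect the abstract describes.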

Publication: Jayakumar, A., Lokhov, A. Y., Misra, S., & Vuffray, M. (2024). Discrete distributions are learnable from metastable samples. arXiv preprint arXiv:2410.13800.

Presenters

  • Abhijith Jayakumar

    Los Alamos National Laboratory (LANL)

Authors

  • Abhijith Jayakumar

    Los Alamos National Laboratory (LANL)

  • Andrey Y. Lokhov

    Los Alamos National Laboratory (LANL)

  • Sidhant Misra

    Los Alamos National Laboratory (LANL)

  • Marc Vuffray

    Los Alamos National Laboratory (LANL)