Mode-Assisted Joint Training of Deep Boltzmann Machines
ORAL
Abstract
Deep Boltzmann machines (DBMs), the deep extension of the more popular restricted Boltzmann machine (RBM), are expressive machine learning models which can serve as compact representations of complex probability distributions. However, jointly training DBMs in the unsupervised setting has proven to be a formidable task. A recent technique we have proposed [1], called mode-assisted training, has shown success in improving the unsupervised training of RBMs. Here we show that the performance gains of mode-assisted training indeed translate to the DBM as well, compared to the baseline approach based exclusively on Gibbs sampling. Furthermore, we find evidence that DBMs trained with the mode-assisted algorithm can represent the same data set with fewer total weights than RBMs. We perform a comparison on small synthetic data sets, where exact log-likelihoods can be computed, as well as on the popular MNIST benchmark.
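For context, the Gibbs-sampling baseline mentioned above typically means contrastive-divergence-style updates. The following is a minimal illustrative sketch (not the authors' code) of one CD-1 update for a small binary RBM; all sizes, the learning rate, and variable names are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small RBM: sizes and learning rate chosen only for illustration.
n_visible, n_hidden, lr = 6, 4, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)   # visible biases
c = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 step: Gibbs-sample h|v and v|h, then update parameters."""
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)                  # P(h = 1 | v0)
    h0 = (rng.random(n_hidden) < ph0) * 1.0    # sampled hidden state
    pv1 = sigmoid(h0 @ W.T + b)                # P(v = 1 | h0)
    v1 = (rng.random(n_visible) < pv1) * 1.0   # reconstructed visible sample
    ph1 = sigmoid(v1 @ W + c)
    # Positive-phase minus negative-phase statistics
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b += lr * (v0 - v1)
    c += lr * (ph0 - ph1)

# Train on a single random binary "data" vector, just to exercise the update.
v = (rng.random(n_visible) < 0.5) * 1.0
for _ in range(100):
    cd1_update(v)
```

Mode-assisted training, by contrast, intermittently replaces the Gibbs-sampled negative phase with the mode of the model distribution; see [1] for the actual algorithm.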
[1] H. Manukian, et al. Comm. Phys. 3, 150 (2020)
Presenters
-
Haik Manukian
University of California, San Diego
Authors
-
Haik Manukian
University of California, San Diego
-
Massimiliano Di Ventra
Department of Physics, University of California, San Diego