Benchmarking the Generalization of Quantum-Inspired and Classical Generative Models
ORAL
Abstract
Recent work has demonstrated the effectiveness of tensor networks as "quantum-inspired" generative models for modeling and sampling from unknown probability distributions, but it remains an open question how well tensor networks can produce new, high-quality samples (i.e., generalize) compared to other classes of generative models. Gili et al. (2022) showed that tensor networks may indeed generalize better than classical generative adversarial networks (GANs) in the case of a cardinality-constrained discrete distribution. In this work, we conduct the first comprehensive study of the generalization capabilities of tensor networks against a wide range of classical generative models beyond GANs, including autoregressive models and variational autoencoders. Furthermore, we examine the settings in which tensor networks appear to hold a generalization advantage over classical neural networks, as well as those in which they do not. Our goal with this study is to provide insight into where different classes of generative models, including tensor networks, perform well, thereby advancing the path toward practical quantum-inspired and quantum advantage.
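The abstract hinges on measuring generalization as a model's ability to produce valid samples it was never trained on. As a rough illustration only (the metric names, definitions, and toy data below are our own assumptions, not the paper's exact evaluation protocol), a minimal Python sketch for scoring samples against a cardinality-constrained bitstring distribution might look like this:

```python
def generalization_metrics(samples, train_set, n_bits, k):
    """Hypothetical sketch: score generated bitstrings against a
    cardinality-constrained target (exactly k ones out of n_bits).

    samples   : list of bitstrings produced by a generative model
    train_set : set of bitstrings the model was trained on
    """
    # A sample is "valid" if it lies in the constrained solution space.
    valid = [s for s in samples if len(s) == n_bits and s.count("1") == k]
    # A valid sample is "novel" if the model did not simply memorize it.
    novel = [s for s in valid if s not in train_set]
    return {
        "validity": len(valid) / len(samples),       # fraction in-constraint
        "novelty": len(novel) / max(len(valid), 1),  # fraction unseen & valid
        "coverage": len(set(novel)),                 # distinct new solutions
    }

# Toy usage: n_bits = 4, cardinality k = 2, two training strings.
train = {"1100", "0011"}
queries = ["1100", "1010", "0101", "1110", "0011", "1001"]
print(generalization_metrics(queries, train, n_bits=4, k=2))
```

Under these assumed definitions, a model that only memorizes the training set scores high validity but zero novelty, which is the distinction the benchmarking study probes.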
Presenters
-
Brian Chen
University of Michigan / Zapata Computing
Authors
-
Brian Chen
University of Michigan / Zapata Computing
-
Mohamed Hibat-Allah
University of Waterloo / Vector Institute / Zapata Computing
-
Javier Lopez-Piqueres
University of Massachusetts Amherst / Zapata Computing
-
Marta Mauri
Zapata Computing Canada
-
Daniel Varoli
Zapata Computing
-
Francisco J Fernandez Alcazar
Zapata Computing
-
Brian Dellabetta
Zapata Computing
-
Alejandro Perdomo-Ortiz
Zapata Computing