Comparing Generalization Performances of Quantum and Classical Generative Models

ORAL

Abstract

Generating novel, high-quality samples is among the most desirable capabilities of a generative model, and it can benefit a wide range of applications, including molecular discovery and combinatorial optimization. Recently, a well-defined framework [1] was proposed to quantify generalization across different generative models on an equal footing. In this work, we build on those results to compare the generalization performance of quantum and classical generative models. On the quantum side, we use Quantum Circuit Born Machines (QCBMs), which are known for their ability to model complex probability distributions and which can be implemented on near-term quantum devices. On the classical side, we use several generative models, including autoregressive recurrent neural networks, which are universal approximators of sequential data and have driven significant progress in natural language processing. In our experiments, we choose a synthetic but application-inspired dataset as a test bed [2]. Our results show that different rules for comparing the models can lead to different conclusions, sometimes yielding an advantage of quantum over classical models.

[1] Gili, Mauri, and Perdomo-Ortiz, “Evaluating generalization in classical and quantum generative models,” arXiv:2201.08770.

[2] Gili, Hibat-Allah, Mauri, Ballance, and Perdomo-Ortiz, “Do Quantum Circuit Born Machines Generalize?,” arXiv:2207.13645.
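
To make the comparison concrete, the framework of [1] scores a trained model purely from the samples it generates, relative to a training set and a known solution space. The following Python sketch estimates such sample-based metrics (exploration, fidelity, rate, and coverage). It is a minimal illustration under assumed definitions and normalizations; the function name and the toy data are hypothetical, not the authors' implementation.

    # Minimal sketch of sample-based generalization metrics in the spirit of [1].
    # NOTE: the exact definitions and normalizations here are assumptions for
    # illustration, not the authors' code.

    def generalization_metrics(queries, train_set, solution_set):
        """Estimate generalization metrics from model samples (queries)."""
        q = len(queries)
        # Samples not memorized from the training set.
        unseen = [s for s in queries if s not in train_set]
        # Unseen samples that are also valid solutions.
        unseen_valid = [s for s in unseen if s in solution_set]
        exploration = len(unseen) / q                        # novelty of queries
        fidelity = len(unseen_valid) / max(len(unseen), 1)   # validity of novel queries
        rate = len(unseen_valid) / q                         # novel-and-valid per query
        # Fraction of the unseen solution space the model has discovered.
        coverage = len(set(unseen_valid)) / max(len(solution_set - train_set), 1)
        return exploration, fidelity, rate, coverage

    # Toy usage: even-parity 3-bit strings as a hypothetical solution space.
    train = {"000", "011"}
    solutions = {"000", "011", "101", "110"}
    samples = ["011", "101", "101", "110", "111"]  # drawn from some trained model
    print(generalization_metrics(samples, train, solutions))
    # -> (0.8, 0.75, 0.6, 1.0)

On the quantum side, a QCBM prepares a parameterized state |psi(theta)> and generates bitstrings by Born-rule sampling, p(x) = |<x|psi(theta)>|^2. The two-qubit NumPy sketch below illustrates the idea only; the gate layout and parameter values are assumptions, and the circuits studied in [2] are larger. Bitstrings sampled this way would play the role of the queries scored above.

    import numpy as np

    def ry(theta):
        # Single-qubit Y-rotation gate.
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    def qcbm_distribution(thetas):
        """Born distribution of a tiny RY-CNOT-RY circuit applied to |00>."""
        psi = np.zeros(4); psi[0] = 1.0                    # start in |00>
        psi = np.kron(ry(thetas[0]), ry(thetas[1])) @ psi  # single-qubit layer
        psi = CNOT @ psi                                   # entangling layer
        psi = np.kron(ry(thetas[2]), ry(thetas[3])) @ psi  # second rotation layer
        return np.abs(psi) ** 2                            # p(x) = |amplitude|^2

    rng = np.random.default_rng(0)
    p = qcbm_distribution([0.3, 1.2, 0.7, 2.1])
    bitstrings = rng.choice(4, size=5, p=p)                # Born-rule sampling
    print([format(x, "02b") for x in bitstrings])

In practice, the circuit parameters would be trained so that the Born distribution matches the training data, after which the sampled bitstrings are scored with metrics like those above.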

Presenters

  • Marta Mauri

    Zapata Computing Canada

Authors

  • Mohamed Hibat-Allah

    University of Waterloo/Vector Institute/Zapata Computing

  • Marta Mauri

    Zapata Computing Canada

  • Manuel S Rudolph

Zapata Computing Inc. / EPFL

  • Alejandro Perdomo-Ortiz

Zapata Computing Inc.