Stochastic Gradient Descent for Hybrid Quantum-Classical Optimization
ORAL
Abstract
Gradient-based methods for hybrid quantum-classical optimization typically rely on expectation values with respect to the output of parameterized quantum circuits. In this work, we explore the consequences of the fact that estimating these quantities on quantum hardware results in a form of stochastic gradient descent. In many relevant cases, estimating expectation values from k measurement outcomes yields optimization algorithms whose convergence properties can be rigorously understood, for any value of k ≥ 1. Moreover, in many settings the required gradients can be expressed as linear combinations of expectation values, and we show that in these cases k-shot expectation value estimation can be combined with sampling over the terms of the linear combination to obtain doubly stochastic gradient descent. For all of these algorithms we prove convergence guarantees. Additionally, we explore these methods numerically on benchmark VQE, QAOA and quantum-enhanced machine learning tasks, and show that treating the stochastic settings as hyper-parameters allows for significantly fewer circuit executions.
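To make the doubly stochastic estimator concrete, the following is a minimal, self-contained sketch in plain Python/NumPy; it is not the authors' implementation. It assumes a toy one-qubit model, RX(θ)|0⟩ with cost ⟨H⟩ for H = c_Z Z + c_Y Y, whose exact expectation values (⟨Z⟩ = cos θ, ⟨Y⟩ = −sin θ) are known in closed form, so single-shot outcomes can be sampled classically. All names (COEFFS, k_shot_expval, doubly_stochastic_grad, the step size and shot count) are illustrative choices, not fixed by the abstract.

import numpy as np

rng = np.random.default_rng(42)

# Toy model: |psi(theta)> = RX(theta)|0>, cost C(theta) = <H> with
# H = c_Z * Z + c_Y * Y.  Analytically <Z> = cos(theta), <Y> = -sin(theta).
COEFFS = {"Z": 0.8, "Y": -0.5}

def exact_expval(pauli, theta):
    """Exact expectation value of the given Pauli for RX(theta)|0>."""
    return np.cos(theta) if pauli == "Z" else -np.sin(theta)

def k_shot_expval(pauli, theta, k):
    """Estimate <pauli> from k single-shot (+1/-1) measurement outcomes."""
    p_plus = (1.0 + exact_expval(pauli, theta)) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=k, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def k_shot_grad(pauli, theta, k):
    """Unbiased parameter-shift gradient of <pauli>, each term from k shots."""
    return 0.5 * (k_shot_expval(pauli, theta + np.pi / 2, k)
                  - k_shot_expval(pauli, theta - np.pi / 2, k))

def doubly_stochastic_grad(theta, k):
    """Sample one Hamiltonian term uniformly, then estimate with k shots.

    Rescaling by the number of terms keeps the estimator unbiased:
    E[L * c_j * g_j] = sum_j c_j * g_j = dC/dtheta.
    """
    paulis = list(COEFFS)
    j = rng.integers(len(paulis))
    return len(paulis) * COEFFS[paulis[j]] * k_shot_grad(paulis[j], theta, k)

# SGD loop: even k = 1 yields an unbiased gradient, so the iterates
# converge (for a suitable step size) up to shot-noise fluctuations.
theta, eta, k = 0.3, 0.1, 1
for step in range(2000):
    theta -= eta * doubly_stochastic_grad(theta, k)

cost = sum(c * exact_expval(p, theta) for p, c in COEFFS.items())
print(f"theta = {theta:.3f}, cost = {cost:.3f}")  # fluctuates near -0.943

Uniform sampling over the Hamiltonian terms, rescaled by the number of terms, keeps the gradient estimator unbiased; sampling terms in proportion to |c_j| instead would reduce the estimator's variance without affecting unbiasedness.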
Presenters
- Frederik Wilde (Freie Universität Berlin)
Authors
- Frederik Wilde (Freie Universität Berlin)
- Ryan Sweke (Freie Universität Berlin)
- Johannes Jakob Meyer (Freie Universität Berlin)
- Maria Schuld (University of KwaZulu-Natal)
- Paul K. Fährmann (Freie Universität Berlin)
- Barthélémy Meynard-Piganeau (Ecole Polytechnique)
- Jens Eisert (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin)