Koopman operator learning for accelerating quantum optimization and machine learning

ORAL

Abstract

Finding efficient optimization methods is crucial for quantum optimization and machine learning on near-term quantum computers. Obtaining gradients on quantum computers is costly compared to classical computers, since the sample complexity scales linearly with the number of parameters and measurements. In this paper, we connect the natural gradient method in quantum optimization with Koopman operator theory, a powerful framework for predicting nonlinear dynamics. We propose a data-driven Koopman operator learning approach for accelerating quantum optimization and machine learning. To predict parameter updates on quantum computers, we develop new methods, including sliding-window dynamic mode decomposition (DMD) and neural-network-based DMD. We apply our methods both in simulations and on real quantum hardware. We demonstrate successful acceleration of gradient-based optimization for the variational quantum eigensolver, applied to the quantum Ising model and the quantum Heisenberg model, as well as for quantum machine learning applications.
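
The sketch below illustrates the generic DMD extrapolation idea the abstract refers to: fit a linear operator to a window of recorded optimizer iterates and roll it forward to predict later parameter values. It is a minimal illustration only; the function name dmd_extrapolate, the window size, the rank, and the prediction horizon are assumptions for this example and are not taken from the paper.

    # Minimal sketch: extrapolating a trajectory of variational parameters
    # with standard (exact) DMD over a sliding window of optimizer iterates.
    # Window size, rank, and horizon below are illustrative choices.
    import numpy as np

    def dmd_extrapolate(thetas, rank=4, n_predict=10):
        """thetas: array of shape (n_steps, n_params), recorded optimizer iterates.
        Fits Y ~= A X on shifted snapshot pairs and evolves the last iterate
        n_predict steps forward in the reduced space."""
        X = thetas[:-1].T                      # snapshots x_0 ... x_{m-1}
        Y = thetas[1:].T                       # shifted snapshots x_1 ... x_m
        U, S, Vh = np.linalg.svd(X, full_matrices=False)
        r = min(rank, len(S))
        U, S, Vh = U[:, :r], S[:r], Vh[:r]
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / S)  # reduced operator
        x = U.conj().T @ thetas[-1]            # project last iterate onto POD modes
        preds = []
        for _ in range(n_predict):
            x = A_tilde @ x                    # advance one step in reduced space
            preds.append(U @ x)                # lift back to full parameter space
        return np.array(preds)

    # Sliding-window usage: alternate a few measured gradient steps with a DMD jump.
    # window = np.asarray(recorded_thetas[-20:])
    # theta_next = dmd_extrapolate(window)[-1]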

Publication: In preparation.

Presenters

  • Di Luo

    Massachusetts Institute of Technology

Authors

  • Di Luo

    Massachusetts Institute of Technology

  • Jiayu Shen

    University of Illinois at Urbana-Champaign

  • Rumen Dangovski

    Massachusetts Institute of Technology

  • Marin Soljacic

    Massachusetts Institute of Technology