Power of data in quantum machine learning
ORAL
Abstract
The use of quantum computing in machine learning is an exciting prospect. At the crux of this excitement is the potential for quantum computers to perform some computations exponentially faster than their classical counterparts. However, a learning task, in which data is provided, differs from a pure computational task. In this work, we show that some problems that are classically hard to compute can, in fact, be predicted easily by classical machines that learn from data. Using rigorous prediction error bounds as a foundation, we develop a methodology for assessing the potential for quantum advantage in learning tasks. We show rigorously how existing quantum models can yield significantly worse prediction performance than classical models, even for datasets generated by quantum evolution. To circumvent these setbacks, we propose an improvement obtained by projecting quantum states back to classical space. The projected quantum model provides a simple and rigorous quantum speed-up in the fault-tolerant regime. For nearer-term quantum models, the projected versions demonstrate large prediction advantages over standard classical models on engineered datasets in one of the largest numerical tests of gate-based quantum machine learning to date, using up to 30 qubits.
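As a rough illustration of the projection idea mentioned above (not part of the talk materials), the sketch below computes a projected quantum kernel of the Gaussian form k(x_i, x_j) = exp(-gamma * sum_k ||rho_k(x_i) - rho_k(x_j)||_F^2) over single-qubit reduced density matrices. It assumes those matrices have already been estimated (e.g. by simulation or classical shadows); the array shapes, the value of gamma, and the random placeholder data are illustrative assumptions.

```python
# Minimal sketch of a projected quantum kernel, assuming the single-qubit
# reduced density matrices rho[i, k] of each encoded data point have already
# been estimated (shape: n_samples x n_qubits x 2 x 2, complex).
import numpy as np

def projected_quantum_kernel(rho, gamma=1.0):
    """Gaussian kernel over per-qubit reduced density matrices."""
    diff = rho[:, None] - rho[None, :]                        # pairwise differences, (N, N, n_qubits, 2, 2)
    sq_frob = np.sum(np.abs(diff) ** 2, axis=(2, 3, 4))       # summed squared Frobenius norms
    return np.exp(-gamma * sq_frob)                           # (N, N) kernel matrix

# Hypothetical placeholder data: 4 samples, 3 qubits, random Hermitian matrices
# standing in for estimated reduced density matrices.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 3, 2, 2)) + 1j * rng.normal(size=(4, 3, 2, 2))
rho = (a + a.conj().transpose(0, 1, 3, 2)) / 2

K = projected_quantum_kernel(rho, gamma=0.5)
print(K.shape)  # (4, 4): symmetric, with ones on the diagonal
```

The resulting kernel matrix could then be passed to any classical kernel method (for example, a support vector machine with a precomputed kernel) to form the classical learning stage of the projected model.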
Presenters
- Hsin-Yuan Huang (Caltech / Google)
Authors
- Hsin-Yuan Huang (Caltech / Google)
- Michael Broughton (Google Quantum AI)
- Masoud Mohseni (Google Quantum AI)
- Ryan Babbush (Google)
- Jarrod McClean (Google)