Zero-shot quantum state preparation with reinforcement learning
ORAL
Abstract
Quantum state preparation is a cornerstone of quantum information science and quantum algorithms. Despite worst-case hardness results, designing efficient, scalable methods for approximate state preparation on near-term quantum devices remains a significant challenge. In this work, we present a deep reinforcement learning approach to quantum state preparation that enables zero-shot preparation of any state at a fixed system size. We scale significantly beyond previous work by designing a novel reward function with provable guarantees. In experiments on stabilizer states of up to nine qubits, we achieve generalization to unseen states after training on less than $10^{-3}$\% of the state space. We prepare target states with varying degrees of entanglement and gain insight into the quantum dynamics generated by our trained agent. Benchmarks show that our model produces stabilizer circuits up to $60$\% shorter than those of existing algorithms, setting a new state of the art. To our knowledge, this is the first work to prepare arbitrary stabilizer states on more than two qubits without re-training.
–
Publication: K. N. Agaram, S. Midha, A. Müller, and V. Garg, "Train once and generalize: Zero-shot quantum state preparation with RL", in review.
Presenters
-
Krishna N Agaram
Indian Institute of Technology Bombay
Authors
-
Krishna N Agaram
Indian Institute of Technology Bombay
-
Siddhant Midha
Princeton University
-
Adrian Müller
Aalto University
-
Vikas Garg
Aalto University