
Novel Data Encoding Method for Quantum Machine Learning

ORAL

Abstract

Quantum machine learning (QML) has the potential to provide computational speedup over classical methods. One potential quantum advantage comes from the ability to encode classical data in an exponentially compact form, such as amplitude encoding, which encodes N values using only log N qubits. However, the data encoding process is costly, requiring either O(log N) qubits with O(N) gate depth or O(N) ancilla qubits with O(log N) gate depth.
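As an illustration of the amplitude-encoding idea referenced above (not the compilation method proposed in this work), the sketch below uses plain NumPy to map a classical vector of N values onto the amplitudes of a ceil(log2 N)-qubit state; the function and variable names are ours.

```python
import numpy as np

def amplitude_encode(data):
    """Map a length-N classical vector to the amplitudes of a
    ceil(log2 N)-qubit state vector (illustrative sketch only)."""
    data = np.asarray(data, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(data))))
    state = np.zeros(2 ** n_qubits)
    state[: len(data)] = data          # pad to the next power of two
    state /= np.linalg.norm(state)     # amplitudes must form a unit vector
    return state, n_qubits

# Example: 8 classical values need only log2(8) = 3 qubits.
state, n_qubits = amplitude_encode([0.1, 0.4, 0.2, 0.7, 0.3, 0.5, 0.6, 0.8])
print(n_qubits)                            # 3
print(np.isclose(np.sum(state ** 2), 1.0)) # True: a valid quantum state
```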

For practical machine learning tasks, batching is typically used: m data points are processed at once in each parameter update. Classically, the memory and computation requirements scale linearly with m. A naive application of previous QML encoding methods also incurs a multiplicative factor of m (i.e., O(N) ancilla qubits with O(m log N) gate depth). This work proposes a novel circuit compilation method that achieves O(m) + O((log N)^2) gate depth with the same number of ancilla qubits. Specifically, we provide the encoding algorithm, prove the depth upper bounds, and propose several other QML applications based on this novel encoding method.
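To make the claimed scaling concrete, the toy comparison below plugs example values of m and N into the stated asymptotic depths, treating the big-O expressions as literal operation counts with constants dropped; it is only a back-of-the-envelope illustration of why O(m) + O((log N)^2) beats O(m log N) for large batches, not a statement of the actual compiled circuit depths.

```python
import math

def naive_batched_depth(m, n):
    """Naive batched encoding: O(m * log N) gate depth (constants dropped)."""
    return m * math.log2(n)

def proposed_batched_depth(m, n):
    """Proposed compilation: O(m) + O((log N)^2) gate depth (constants dropped)."""
    return m + math.log2(n) ** 2

# Batch of m = 1024 data points, each with N = 2**20 features.
m, n = 1024, 2 ** 20
print(naive_batched_depth(m, n))     # 20480.0
print(proposed_batched_depth(m, n))  # 1424.0
```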

Presenters

  • Kaiwen Gui

    University of Chicago

Authors

  • Kaiwen Gui

    University of Chicago

  • Alexander M Dalzell

    AWS Center for Quantum Computing

  • Alessandro Achille

    AWS AI Labs

  • Martin Suchara

Amazon Web Services

  • Frederic T Chong

Department of Computer Science, University of Chicago; ColdQuanta Inc.