
Learning unitaries via gradient descent optimization

ORAL

Abstract

We study the learnability of unitary operations within the quantum alternating operator framework using the gradient descent method. Gradient descent algorithms are first-order optimization methods of particular interest to the machine learning community because of their greater computational efficiency compared with second-order techniques. However, the manifold of unitaries is non-convex in general, and it is well known that gradient descent need not converge to the global minimum in such spaces. This motivates the question of how hard it is to learn a unitary with an alternating operator circuit using gradient descent. In this work, we find that gradient descent requires at least d^2 parameters in an alternating operator sequence to learn an arbitrary unitary in U(d) to a desired accuracy. The rate of convergence increases rapidly when gradient descent is performed in over-parameterized spaces of dimension greater than d^2. We heuristically argue for the onset of a phase transition when gradient descent is performed with exactly d^2 parameters. We also present a greedy algorithm that can learn low-depth unitaries with far fewer than d^2 parameters in an alternating operator sequence; however, its success probability is low.
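The setup described above can be illustrated with a minimal sketch (not the authors' implementation): an alternating operator sequence U(θ) = … exp(-iθ₂B) exp(-iθ₁A) built from two fixed, non-commuting Hermitian generators A and B (here the Pauli X and Z matrices, an illustrative choice), optimized by plain gradient descent on the infidelity 1 - |Tr(U†V)|/d against a random target V ∈ U(2). The number of parameters is set to 2d² to reflect the over-parameterized regime in which the abstract reports rapid convergence; the finite-difference gradient stands in for an analytic one.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 2

# Two fixed, non-commuting Hermitian generators (Pauli X and Z).
A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
B = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z

def circuit(theta):
    """Alternating operator sequence: layers driven by A and B in turn."""
    U = np.eye(d, dtype=complex)
    for k, t in enumerate(theta):
        G = A if k % 2 == 0 else B
        U = expm(-1j * t * G) @ U
    return U

def infidelity(theta, V):
    """1 - |Tr(U(theta)^dagger V)| / d; zero iff U matches V up to a phase."""
    return 1.0 - abs(np.trace(circuit(theta).conj().T @ V)) / d

# Random target unitary V = exp(-i H) for a random Hermitian H.
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
V = expm(-1j * (M + M.conj().T) / 2)

# Over-parameterized sequence (2 * d^2 parameters) trained by gradient
# descent with a central finite-difference gradient.
theta = rng.standard_normal(2 * d * d)
eta, eps = 0.3, 1e-6
init_loss = infidelity(theta, V)
for _ in range(3000):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (infidelity(theta + e, V) - infidelity(theta - e, V)) / (2 * eps)
    theta -= eta * grad

final_loss = infidelity(theta, V)
print(f"infidelity: {init_loss:.3f} -> {final_loss:.2e}")
```

In the under-parameterized regime one would shorten `theta` below d² entries; per the abstract, gradient descent then fails to reach arbitrary targets, which this toy setup can be used to probe empirically.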

Presenters

  • Reevu Maity

    Physics, University of Oxford

Authors

  • Bobak Kiani

    Mechanical Engineering, Massachusetts Institute of Technology

  • Reevu Maity

    Physics, University of Oxford

  • Seth Lloyd

    Mechanical Engineering, Massachusetts Institute of Technology