
Subtleties in the trainability of quantum machine learning models

ORAL

Abstract

Quantum Machine Learning (QML) aims to achieve a speedup over traditional machine learning for data analysis. However, its success hinges on efficiently training the parameters in models such as quantum neural networks, and the field still lacks theoretical scaling results for trainability. Some trainability results have been proven for the closely related field of Variational Quantum Algorithms (VQAs). While both fields involve training a parametrized quantum circuit, there are crucial differences that make the results for one setting not readily applicable to the other. In this work we bridge the two frameworks and show that gradient scaling results for VQAs can also be applied to study the gradient scaling of QML models, indicating that features detrimental to VQA trainability can lead to issues such as barren plateaus in QML. Consequently, our work has implications for several QML proposals in the literature. In addition, we provide evidence that QML models exhibit further trainability issues arising from the dataset itself, referred to here as dataset-induced barren plateaus. These results are most relevant when dealing with classical data, as the choice of embedding scheme can greatly affect the gradient scaling.
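For context on the barren plateau phenomenon referenced above, the condition below is a sketch of the standard characterization from the barren plateau literature, not a formula taken from this abstract; the symbols C, theta, mu, n, and b are our own notation:

\[
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\mu} C(\boldsymbol{\theta})\right] \le F(n),
\qquad F(n) \in O\!\left(b^{-n}\right) \ \text{for some } b > 1,
\]

where \(C(\boldsymbol{\theta})\) is the cost function of the parametrized circuit, \(\partial_{\mu}\) is the derivative with respect to a single trainable parameter, and \(n\) is the number of qubits. Since the gradient components typically have zero mean, Chebyshev's inequality implies that an exponentially vanishing variance concentrates the gradient exponentially around zero, so resolving it requires exponentially many measurement shots.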

Presenters

  • Supanut Thanasilp

National University of Singapore

Authors

  • Supanut Thanasilp

National University of Singapore

  • Samson Wang

    Imperial College London

  • Nhat A Nghiem

Stony Brook University, State University of New York (SUNY)

  • Patrick J Coles

    Los Alamos National Laboratory

  • Marco Cerezo

    Los Alamos National Laboratory