
Exponentially tighter bounds on quantum error mitigation

ORAL

Abstract

Quantum error mitigation has been proposed as a means to combat both unwanted and unavoidable errors in near-term quantum computing by classically post-processing the outcomes of multiple quantum circuits. It does so with few or no additional quantum resources, in contrast to fault-tolerant schemes, which carry heavy overheads. Error mitigation reduces noise in small-scale quantum computations. In this work, however, we identify strong limitations on the degree to which quantum noise can be effectively 'undone' at larger system sizes. We begin by presenting a formal framework that rigorously encapsulates large classes of meaningful and practically applied schemes for quantum error mitigation, including virtual distillation, Clifford data regression, zero-noise extrapolation, and probabilistic error cancellation. Technically, we construct families of random circuits that are highly sensitive to noise, in the sense that even at log(log(n)) depth, a whisker beyond constant, quantum noise scrambles their output into the maximally mixed state super-exponentially quickly. Our results exponentially tighten arguments that have been used in the literature for error mitigation, but they go beyond that: they also tighten distinguishability arguments used in kernel estimation for quantum machine learning, as well as the depth at which barren plateaus emerge or quantum advantage is lost when sampling from noisy random quantum circuits, implying that the scrambling due to noise kicks in at exponentially smaller depths than originally thought. Finally, our results imply that a noisy device must be run exponentially many times (in the number of gates in the light-cone of the observable) to estimate the expectation value of an observable, a scaling in complexity that known classical algorithms also exhibit.
While improvements in quantum hardware will push noise levels down, if error mitigation is used, this can ultimately only yield an exponential-time algorithm with a better exponent than its classical counterparts, putting paid to the hope of exponential quantum speedups in this setting.
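As a toy numerical illustration of the scrambling phenomenon described above (not taken from the abstract, and using global Haar-random layers rather than the authors' specific circuit families), the sketch below tracks how a small noisy random circuit's output state converges to the maximally mixed state. All parameters (6 qubits, depolarizing probability 0.1 per qubit per layer) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                  # number of qubits (illustrative choice)
d = 2 ** n
p = 0.1                # per-qubit depolarizing probability per layer
depth = 15

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, k):
    """Embed a single-qubit operator on qubit k of an n-qubit register."""
    ops = [I2] * n
    ops[k] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def haar_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(A)
    lam = np.diagonal(R) / np.abs(np.diagonal(R))  # phase correction
    return Q * lam

def depolarize(rho, k, p):
    """Apply single-qubit depolarizing noise to qubit k of state rho."""
    Xs, Ys, Zs = embed(X, k), embed(Y, k), embed(Z, k)
    return (1 - p) * rho + (p / 3) * (Xs @ rho @ Xs + Ys @ rho @ Ys + Zs @ rho @ Zs)

def trace_dist_to_mm(rho):
    """Trace distance between rho and the maximally mixed state I/d."""
    evs = np.linalg.eigvalsh(rho - np.eye(d) / d)
    return 0.5 * np.sum(np.abs(evs))

# Start in the pure state |0...0><0...0| and alternate random layers with noise.
rho = np.zeros((d, d), dtype=complex)
rho[0, 0] = 1.0
dists = [trace_dist_to_mm(rho)]
for _ in range(depth):
    U = haar_unitary(d, rng)
    rho = U @ rho @ U.conj().T
    for k in range(n):
        rho = depolarize(rho, k, p)
    dists.append(trace_dist_to_mm(rho))

print(f"initial distance: {dists[0]:.3f}, after {depth} layers: {dists[-1]:.3e}")
```

The distance to the maximally mixed state shrinks by a roughly constant factor per noisy layer, so any signal an error-mitigation scheme must recover decays exponentially in depth; this is only a cartoon of the much stronger, rigorously quantified effect in the work itself.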

Presenters

  • Yihui Quek

    Freie Universität Berlin

Authors

  • Sumeet Khatri

    Freie Universität Berlin

  • Yihui Quek

    Freie Universität Berlin

  • Johannes J Meyer

    Freie Universität Berlin

  • Daniel S França

    École Normale Supérieure de Lyon, Univ Lyon

  • Jens Eisert

    Freie Universität Berlin