Statistical Benchmarking Theory of Classical and Quantum Optimization Algorithms
ORAL
Abstract
The rapid scaling of quantum annealing (QA) devices over the past decade, along with the more recent advance of quantum-inspired analog optimization hardware, has led to the development of benchmarking and analysis techniques to quantify potential "quantum advantage" over classical optimization algorithms. However, rigorous benchmarking techniques have mainly emerged from literature conventions and practices focused on comparing the simple QA algorithm with Markov chain Monte Carlo (MCMC), and the scope of evidence-based algorithmic benchmarks must grow to encompass hybrid classical-quantum algorithms, problem embedding, variational quantum algorithms, and computational resource costs based on energy consumption rather than time. In this work, we first define algorithm benchmarking metrics, including the time-to-solution metric, from a problem-agnostic, probabilistic foundation that enables multiple avenues for estimating the hardness of individual problem instances under different types of algorithms. Next, we discuss how statistically significant differences in algorithm performance are conventionally established through bootstrapping, or Bayesian bootstrapping, of repetition samples from each algorithm. Finally, we focus on some of the challenges of fairly assessing hybrid algorithms and propose a "black-box replacement" approach to measure whether hybrid algorithms gain, in a statistically significant sense, from using a quantum optimizer rather than classical optimizers, and whether they avoid excessive reliance on sophisticated pre-processing or decomposition strategies.
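As a minimal sketch of the kind of analysis the abstract refers to, the snippet below estimates a time-to-solution (TTS) value with a nonparametric bootstrap confidence interval over per-repetition success indicators. It assumes the conventional definition TTS = t_run * ln(1 - p_target) / ln(1 - p_success) with a 99% target; the function names, parameters, and example numbers are illustrative and not taken from the work itself.

import numpy as np

def time_to_solution(run_time, p_success, p_target=0.99):
    # Conventional TTS: expected total time to reach the target success
    # probability by repeating runs that each succeed with p_success.
    if p_success <= 0.0:
        return np.inf
    if p_success >= 1.0:
        return run_time
    return run_time * np.log(1.0 - p_target) / np.log(1.0 - p_success)

def bootstrap_tts(successes, run_time, n_boot=5000, seed=0):
    # Nonparametric bootstrap over per-repetition success indicators,
    # giving a distribution (and 95% interval) for the TTS estimate.
    rng = np.random.default_rng(seed)
    successes = np.asarray(successes, dtype=float)
    n = len(successes)
    tts_samples = []
    for _ in range(n_boot):
        resample = rng.choice(successes, size=n, replace=True)
        tts_samples.append(time_to_solution(run_time, resample.mean()))
    return np.percentile(np.array(tts_samples), [2.5, 50.0, 97.5])

# Illustrative example: 1000 repetitions of 20 microseconds each, 37 successes.
outcomes = np.zeros(1000)
outcomes[:37] = 1.0
lo, med, hi = bootstrap_tts(outcomes, run_time=20e-6)
print(f"TTS median = {med:.3e} s, 95% CI = [{lo:.3e}, {hi:.3e}] s")

Comparing two algorithms then amounts to checking whether their bootstrapped TTS intervals (or the bootstrap distribution of their TTS ratio) exclude parity, which is the sense of "statistically significant difference" discussed in the abstract.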
Presenters
-
Humberto Munoz-Bauza
NASA Ames Research Center
Authors
-
Humberto Munoz-Bauza
NASA Ames Research Center
-
Gianni Mossi
NASA Ames Research Center
-
Salvatore Mandra
NASA Ames Research Center