Adversarial Robustness of Quantum Machine Learning Models
ORAL
Abstract
State-of-the-art classical neural networks are known to be vulnerable to small, carefully crafted adversarial perturbations. An even more severe vulnerability has been noted for QML models classifying Haar-random pure states. This vulnerability stems from the concentration of measure phenomenon, a property of metric spaces under probabilistic sampling, and is independent of the classification protocol. In this paper, we focus on adversarial robustness when classifying a subset of encoded states that are smoothly generated from a Gaussian latent space. We show that this task is considerably less vulnerable than classifying Haar-random pure states. Our analysis provides insight into the adversarial robustness of any quantum classifier on real-world classification tasks. In particular, we find that the potential robustness decreases only mildly, polynomially in the number of qubits, in contrast to the exponential decrease observed when classifying Haar-random pure states.
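To make the concentration-of-measure argument concrete, the following minimal NumPy sketch (illustrative only, not code from the paper) samples Haar-random pure states and tracks how a fixed single-qubit observable concentrates around its Haar mean as the qubit count n grows; by Lévy's lemma, the fluctuations of any Lipschitz function on the state space shrink as roughly 2^(-n/2).

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(d, rng):
    """Sample a Haar-random pure state as a unit vector in C^d."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Observable: Pauli Z on the first qubit (diagonal, so stored as a vector).
# Its expectation value <psi|Z_1|psi> is a Lipschitz function on the unit
# sphere, so by Levy's lemma it concentrates around its Haar mean (0 here)
# with fluctuations on the order of 1/sqrt(d) = 2^(-n/2).
for n in range(2, 12, 2):
    d = 2 ** n
    z = np.ones(d)
    z[d // 2:] = -1.0  # eigenvalues of Z on the first qubit
    vals = [np.real(np.vdot(psi, z * psi))
            for psi in (haar_state(d, rng) for _ in range(500))]
    print(f"n = {n:2d} qubits   std of <Z_1>: {np.std(vals):.5f}"
          f"   2^(-n/2): {2 ** (-n / 2):.5f}")
```

Since any classifier's output is such a Lipschitz function of the input state, this exponential concentration means that on Haar-random inputs a vanishingly small perturbation suffices to cross a decision boundary regardless of the protocol; the abstract's point is that states confined to a smooth Gaussian latent manifold escape this exponential concentration, leaving only a polynomial decrease in robustness.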
Presenters
-
Haoran Liao
University of California, Berkeley
Authors
-
Haoran Liao
University of California, Berkeley
-
Ian Convy
University of California, Berkeley
-
William Huggins
University of California, Berkeley; Google LLC
-
K. Birgitta Whaley
Department of Chemistry, University of California, Berkeley