Layerwise learning for quantum neural networks
POSTER
Abstract
We introduce a layerwise learning strategy for parameterized quantum circuits. The circuit depth is grown incrementally during optimization, starting from a shallow circuit and adding layers until the required loss is attained. By training only a varying subset of the circuit's parameters at each step, we keep the number of trainable parameters fixed while the circuit depth increases until it is sufficient to represent the data. We then show that this approach largely avoids the problem of barren plateaus, regions of the error surface where gradients vanish, owing to the low circuit depth, the small number of parameters trained at any one time, and the larger gradient magnitudes compared to training the full circuit at once. These properties make our algorithm well suited to execution on noisy intermediate-scale quantum (NISQ) devices. We demonstrate our approach on an image-classification task with handwritten digits and show that the number of trained parameters can be reduced substantially while keeping gradient magnitudes larger than those of quantum circuits of the same size trained with a fixed architecture.
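
Below is a minimal sketch of the layerwise growth idea described in the abstract, written in Python with PennyLane. All names and hyperparameters here (n_qubits, max_depth, epochs_per_layer, loss_threshold, and the toy cost targeting a Pauli-Z expectation) are illustrative assumptions, not the authors' implementation; the sketch only shows how a new layer can be trained while previously trained layers stay frozen, and how depth grows until a loss threshold is met.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # illustrative circuit width
dev = qml.device("default.qubit", wires=n_qubits)

def layer(params):
    # One hardware-efficient layer: single-qubit rotations plus a CZ ladder.
    for w in range(n_qubits):
        qml.RY(params[w], wires=w)
    for w in range(n_qubits - 1):
        qml.CZ(wires=[w, w + 1])

@qml.qnode(dev)
def circuit(trainable, frozen):
    # Previously trained layers are applied with fixed parameters,
    # followed by the newly added, trainable layer.
    for layer_params in frozen:
        layer(layer_params)
    layer(trainable)
    return qml.expval(qml.PauliZ(0))

def cost(trainable, frozen, target=-1.0):
    # Toy objective standing in for the paper's classification loss.
    return (circuit(trainable, frozen) - target) ** 2

opt = qml.GradientDescentOptimizer(stepsize=0.1)
frozen_layers = []                   # parameters of already-trained, frozen layers
max_depth, epochs_per_layer = 5, 25  # assumed hyperparameters
loss_threshold = 1e-3                # assumed stopping criterion

for depth in range(1, max_depth + 1):
    # Small random initialization keeps the new layer close to the identity
    # while avoiding the exactly-zero point where this toy gradient vanishes.
    new_params = np.array(np.random.uniform(-0.1, 0.1, n_qubits), requires_grad=True)
    for _ in range(epochs_per_layer):
        new_params = opt.step(lambda p: cost(p, frozen_layers), new_params)
    loss = cost(new_params, frozen_layers)
    frozen_layers.append(np.array(new_params, requires_grad=False))
    print(f"depth {depth}: loss {float(loss):.4f}")
    if loss < loss_threshold:
        break  # circuit is deep enough to reach the required loss

Freezing the earlier layers keeps the number of simultaneously trained parameters constant as the circuit grows, which is the property the abstract credits for the larger gradient magnitudes relative to training a fixed deep circuit end to end.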
Presenters
-
Masoud Mohseni
Google Research, Google Quantum AI Laboratory
Authors
-
Andrea Skolik
Ludwig Maximilian University of Munich
-
Jarrod McClean
Google Research
-
Masoud Mohseni
Google Research, Google Quantum AI Laboratory
-
Patrick van der Smagt
Machine Learning Research Lab, Volkswagen Group
-
Martin Leib
Data Lab, Volkswagen Group