Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass
ORAL · Invited
Abstract
Artificial neural networks customarily rely on backpropagation, in which weights are updated using error-function gradients propagated from the output layer back to the input layer. Although this approach has proven effective across a wide range of applications, it lacks biological plausibility in several respects, including the weight symmetry problem, the dependence of learning on non-local signals, the freezing of neural activity during error propagation, and the update locking problem. Alternative training schemes, such as sign symmetry, feedback alignment, and direct feedback alignment, have been introduced, but they invariably rely on a backward pass that prevents solving all of these issues simultaneously. Here, we propose replacing the backward pass with a second forward pass in which the input signal is modulated based on the network's error. We show that this novel learning rule comprehensively addresses all of the above-mentioned issues and applies to both fully connected and convolutional models on MNIST, CIFAR-10, and CIFAR-100. Overall, our work is an important step towards incorporating biological principles into machine learning.
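The abstract does not give the update equations, but the two-pass idea can be illustrated with a minimal sketch: a first forward pass computes the output error, a second forward pass is run on an error-modulated input, and each layer is updated locally from the difference between the two passes. All specifics below (the fixed random projection F, the ReLU hidden layer, and the exact form of the updates) are assumptions made for illustration, not the authors' published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
F = rng.normal(0, 0.1, (n_in, n_out))  # fixed random error-projection matrix (assumed)

def forward(x):
    h = np.maximum(0.0, W1 @ x)  # ReLU hidden layer
    y = W2 @ h                   # linear readout
    return h, y

def train_step(x, target, lr=0.01):
    """One learning step with two forward passes and no backward pass."""
    global W1, W2
    # First (standard) forward pass
    h, y = forward(x)
    e = y - target               # output error
    # Second forward pass on the error-modulated input
    x_mod = x + F @ e
    h_mod, _ = forward(x_mod)
    # Local updates: each layer uses only its own two activations
    W1 -= lr * np.outer(h - h_mod, x_mod)
    W2 -= lr * np.outer(e, h_mod)

# Sanity check on a single toy sample: the squared error should shrink
x = rng.normal(size=n_in)
t = np.eye(n_out)[1]
_, y0 = forward(x)
for _ in range(200):
    train_step(x, t)
_, y1 = forward(x)
```

Note that no gradients flow backward through the network: every weight update depends only on activations available during the two forward passes, which is what removes the weight symmetry, non-locality, activity-freezing, and update-locking concerns raised in the abstract.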
–
Publication: G. Dellaferrera, G. Kreiman, Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass, Manuscript in preparation
Presenters
- Giorgia Dellaferrera, Harvard Medical School and Boston Children's Hospital
Authors
- Giorgia Dellaferrera, Harvard Medical School and Boston Children's Hospital
- Gabriel Kreiman, Harvard Medical School and Boston Children's Hospital