Decentralized, Physics-Driven Learning
ORAL · Invited
Abstract
Artificial neural networks have a scalability problem. As tasks grow more complex and require bigger networks, training times skyrocket because information must flow sequentially through a Central Processing Unit (CPU), both in the forward direction and during backpropagation (training). Recent theoretical frameworks have proposed entirely distributed, physics-driven learning systems that bypass this processor bottleneck, instead harnessing physics to perform the forward ‘computations’ and using local rules for backpropagation. However, physically realizing these proposed equations is a significant challenge. Here we demonstrate the successful laboratory realization of decentralized, physics-driven learning in an electronic system [1]. When exposed to training data, a network of self-adjusting variable resistors trains itself autonomously, simultaneously updating every edge of the network to perform a variety of tasks, including regression and classification. Because learning is distributed, increasing the size of this system does not slow it down; increasing the number of edges and the clock speed of this system by a factor of 10 each would already outpace a comparable simulation. We estimate both can be increased by a factor of 10⁶ with modern electronic manufacturing, making it a potentially useful system for machine learning applications.
[1] S. Dillavou et al., arXiv:2108.00275 (2021), https://arxiv.org/abs/2108.00275
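The distributed scheme described above, in which every edge updates itself from purely local information, can be illustrated with a small contrast-based simulation: the network is solved once in a ‘free’ state (inputs applied) and once in a ‘clamped’ state (outputs nudged slightly toward their targets), and each edge adjusts its conductance from only its own two voltage drops. The toy three-node topology, the nudge amplitude, and the learning rate below are illustrative assumptions, not the parameters of the experimental system.

```python
import numpy as np

def solve_network(G, edges, n, fixed):
    """Solve Kirchhoff's current law for node voltages.
    G: conductance per edge; fixed: {node: imposed voltage}."""
    L = np.zeros((n, n))                      # graph Laplacian weighted by conductances
    for (i, j), g in zip(edges, G):
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    V = np.zeros(n)
    for k, v in fixed.items():
        V[k] = v
    free = [k for k in range(n) if k not in fixed]
    if free:                                  # solve only if some node is unconstrained
        b = -L[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
        V[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return V

# Toy network (hypothetical): source node 0 at 1 V, ground node 2 at 0 V,
# output node 1 in between; train the network to output 0.3 V.
edges = [(0, 1), (1, 2)]
G = np.array([1.0, 1.0])                      # initial edge conductances
target, eta, alpha = 0.3, 0.1, 0.1            # nudge amplitude and learning rate (assumed)

for _ in range(200):
    VF = solve_network(G, edges, 3, {0: 1.0, 2: 0.0})             # free state
    nudged = VF[1] + eta * (target - VF[1])                       # pull output toward target
    VC = solve_network(G, edges, 3, {0: 1.0, 2: 0.0, 1: nudged})  # clamped state
    for e, (i, j) in enumerate(edges):
        dF, dC = VF[i] - VF[j], VC[i] - VC[j]                     # each edge sees only its own drops
        G[e] = max(G[e] + alpha / eta * (dF**2 - dC**2), 1e-3)    # local conductance update

print(round(float(VF[1]), 3))  # free-state output converges near 0.3
```

No global error signal is ever broadcast: the update on each edge depends only on the squared voltage drops across that edge in the two states, which is what makes the scheme scalable when every edge updates in parallel.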
–
Publication: S. Dillavou, M. Stern, A. J. Liu, D. J. Durian, “Demonstration of Decentralized Physics-Driven Learning,” arXiv:2108.00275 (2021), https://arxiv.org/abs/2108.00275
Presenters
-
Sam J Dillavou
University of Pennsylvania
Authors
-
Sam J Dillavou
University of Pennsylvania
-
Menachem Stern
University of Pennsylvania
-
Marc Z Miskin
University of Pennsylvania
-
Andrea J Liu
University of Pennsylvania
-
Douglas J Durian
University of Pennsylvania