Learning out of equilibrium in physical systems

ORAL

Abstract

Physical networks can adapt to external stimuli and learn to perform desired tasks by exploiting local 'learning rules' that govern learning degrees of freedom (e.g. edge resistances in resistor networks). So far, it has been assumed that such learning machines can learn successfully only if the learning degrees of freedom evolve slowly compared to the physical dynamics, so that the physical degrees of freedom (e.g. currents on edges) are effectively always equilibrated. This assumption, however, slows learning considerably, rendering machine-learning algorithms based on local rules uncompetitive with standard algorithms. Inspired by natural learning systems, such as certain neuronal circuits, which learn on timescales comparable to their relaxation, we relax the assumption of slow learning and show, in experiments and simulations, that electric resistor networks can learn allosteric tasks up to a critical learning rate without loss of accuracy. Beyond the critical learning rate, we find non-equilibrium learning oscillations, yet the network can still learn allosteric tasks at much greater rates. These oscillations can be suppressed when the network passes by flat solutions to the learning task. Our results demonstrate that learning is robust even far from equilibrium.
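The two-timescale picture in the abstract can be illustrated with a deliberately minimal cartoon (not the authors' coupled-learning scheme or experimental setup): a single physical degree of freedom `x` relaxes toward a learning degree of freedom `w` at a relaxation rate `tau`, while `w` is nudged by a local error-correcting rule at a learning rate `eta`. All variable names and parameter values here are illustrative assumptions. For small `eta` the physical variable stays near equilibrium and the error decays smoothly; as `eta` approaches the relaxation rate, the error overshoots and oscillates, yet the system still learns.

```python
def simulate(eta, tau=0.5, target=1.0, steps=2000):
    """Toy two-timescale learner (illustrative, not the paper's model).

    x: physical degree of freedom, relaxing toward w at rate tau
       (cartoon of currents equilibrating on a resistor network).
    w: learning degree of freedom, updated by a local rule at rate eta
       (cartoon of edge-resistance updates).
    Returns the history of |target - x|.
    """
    x, w = 0.0, 0.0
    errors = []
    for _ in range(steps):
        x += tau * (w - x)        # physical relaxation step
        w += eta * (target - x)   # local learning-rule step
        errors.append(abs(target - x))
    return errors

# Learning much slower than relaxation: error decays without overshoot.
slow = simulate(eta=0.02)
# Learning rate comparable to relaxation: error oscillates but still decays.
fast = simulate(eta=0.6)
print(slow[-1], fast[-1])
```

In this linear toy, both runs converge; the fast run's error history is non-monotone, a crude analog of the non-equilibrium learning oscillations reported in the abstract. Pushing `eta` past the toy's own stability threshold makes it diverge, which is where the analogy to the real networks (which keep learning beyond their critical rate) breaks down.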

Presenters

  • Menachem Stern

    University of Pennsylvania

Authors

  • Menachem Stern

    University of Pennsylvania

  • Sam J Dillavou

    University of Pennsylvania

  • Marc Z Miskin

    University of Pennsylvania

  • Douglas J Durian

    University of Pennsylvania

  • Andrea J Liu

    University of Pennsylvania