
MADEM: Energy-efficient training of Deep neural networks using Memristor arrays

ORAL

Abstract

We present energy-efficient and biologically plausible machine learning for training artificial deep neural networks through hardware-algorithm co-optimization. Although backpropagation computes the error gradient efficiently in just two passes (forward and backward) via the chain rule, it has several shortcomings, including biological implausibility (e.g., nonlocality and symmetric synaptic weights). Moreover, backpropagation-based gradient computation requires high-precision calculations of neuron activities and synaptic weights of at least 16-bit precision, which demands energy-consuming high-precision digital computing. Using a novel learning algorithm compatible with the analog in-memory computing provided by memristor arrays, we demonstrate training of deep neural networks that is not only biologically plausible (a local update rule) but also energy-efficient (about five orders of magnitude lower energy) and faster (36x lower latency).
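
To make the locality of such an update rule concrete, the sketch below illustrates one way an activity-difference update can be written, assuming a simple two-phase (free vs. nudged) contrastive scheme on a single layer. The function names, the nudging parameter beta, and the single-layer setup are illustrative assumptions for exposition, not the MADEM implementation or its memristor-crossbar mapping.

    import numpy as np

    # Minimal sketch of an activity-difference local update (hypothetical,
    # not the authors' implementation). Each weight change uses only
    # locally available quantities: the pre-synaptic activity and the
    # difference in post-synaptic activity between two phases, so no
    # error backpropagation (and no weight transport) is required.

    rng = np.random.default_rng(0)

    n_in, n_out = 4, 3
    W = rng.normal(scale=0.1, size=(n_in, n_out))  # synaptic weights (one array)

    def free_phase(x, W):
        # Relaxed ("free") activity driven by the input alone.
        return np.tanh(x @ W)

    def nudged_phase(x, target, W, beta=0.5):
        # Activity weakly nudged toward the target (the "clamped" phase).
        h = np.tanh(x @ W)
        return h + beta * (target - h)

    x = rng.normal(size=(1, n_in))
    target = np.array([[1.0, -1.0, 0.0]])

    h_free = free_phase(x, W)
    h_nudged = nudged_phase(x, target, W)

    # Local update: outer product of pre-synaptic activity with the
    # post-synaptic activity difference between the two phases.
    eta = 0.1
    W += eta * x.T @ (h_nudged - h_free)

In an analog realization, the matrix-vector products above would be carried out in-memory by a memristor crossbar, and the update would be applied as conductance changes, which is where the energy and latency advantages arise.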

Publication: S. Yi, J. Kendall, R. S. Williams, S. Kumar, "MADEM: Activity-Difference Training of Deep Neural Networks using Memristors," Nature Electronics [Accepted on Aug. 30th]

Presenters

  • Suin Yi

    Texas A&M University

Authors

  • Suin Yi

    Texas A&M University

  • Suhas Kumar

    Stanford University, Sandia National Laboratories