
Training elastic neural networks with the Hamiltonian Monte Carlo sampling algorithm

ORAL

Abstract

Because of their low damping and highly non-linear characteristics, artificial neural networks (ANNs) made of nonlinear elastic resonators are promising candidates for low-power computing, as illustrated by recent demonstrations of passive speech recognition. However, designing information-processing elastic structures is a hard optimization problem: while the training of software-based ANNs can be facilitated by increasing the network size (converting local minima into saddle points) and by choosing activation functions with beneficial properties, there are usually hard limits on the size and activation functions of physically implemented neural networks. Here we train resource-constrained elastic ANNs by applying the Hamiltonian Monte Carlo method, a variant of the Metropolis-Hastings algorithm used in statistical physics to sample probability distributions with a large number of local minima. While our work focuses on computers consisting of physical elastic resonators, our conclusions can be applied to low-power, resource-constrained machine learning in general.
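To make the method concrete, the core Hamiltonian Monte Carlo loop can be sketched as below. This is a generic illustration, not the authors' implementation: the double-well potential `U` (standing in for a multimodal training loss), the step size `eps`, and the trajectory length `L` are illustrative assumptions.

```python
import numpy as np

def hmc_sample(U, grad_U, q0, n_samples=3000, eps=0.1, L=20, seed=0):
    """Hamiltonian Monte Carlo: propose states by integrating Hamilton's
    equations with a leapfrog scheme, then accept/reject with a
    Metropolis step on the total energy H = U(q) + |p|^2 / 2."""
    rng = np.random.default_rng(seed)
    q = np.atleast_1d(np.asarray(q0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)  # resample momentum each step
        q_new, p_new = q.copy(), p.copy()
        # Leapfrog integration: half momentum step, L position steps,
        # half momentum step.
        p_new -= 0.5 * eps * grad_U(q_new)
        for _ in range(L - 1):
            q_new += eps * p_new
            p_new -= eps * grad_U(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_U(q_new)
        # Metropolis accept/reject on the change in total energy.
        H_old = U(q) + 0.5 * (p @ p)
        H_new = U(q_new) + 0.5 * (p_new @ p_new)
        if rng.random() < np.exp(min(0.0, H_old - H_new)):
            q = q_new
        samples.append(q.copy())
    return np.asarray(samples)

# Toy multimodal "loss": a double-well potential with minima at q = ±1,
# a stand-in for the rugged landscape of a physical resonator network.
U = lambda q: float((q @ q - 1.0) ** 2)
grad_U = lambda q: 4.0 * (q @ q - 1.0) * q

chain = hmc_sample(U, grad_U, q0=[0.5])
```

Because the momentum is resampled at every step, the leapfrog trajectories can carry the chain over barriers between local minima, which is what makes HMC attractive for loss landscapes with many wells.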

Publication: This work is a follow-up to: Dubcek, T., Moreno-Garcia, D., Haag, T., Thomsen, H. R., Becker, T. S., Bärlocher, C., ... & Serra-Garcia, M. (2021). Binary classification of spoken words with passive elastic metastructures. arXiv preprint arXiv:2111.08503.

Presenters

  • Théophile Louvet

    AMOLF

Authors

  • Théophile Louvet

    AMOLF

  • Vincent Maillou

    AMOLF

  • Finn T Bohte

    AMOLF

  • Lars Gebraad

    ETH

  • Marc Serra-Garcia

    AMOLF, Amsterdam