On Explaining the Surprising Success of Reservoir Computing Forecaster of Chaos? The Universal Machine Learning Dynamical System with Contrasts to VAR and DMD

ORAL · Invited

Abstract

Machine learning has become a widely popular and successful paradigm, including in data-driven science. A major application is forecasting complex dynamical systems. Artificial neural networks (ANN) have emerged as a leading approach, and recurrent neural networks (RNN) are considered especially well suited to this task. In this setting, echo state networks (ESN), also called reservoir computers (RC), stand out for their simplicity and computational advantages: instead of training the full network, an RC trains only the read-out weights. Why and how an RC works at all, despite its randomly selected internal weights, is perhaps surprising. To this end, we analyze a simplified RC whose internal activation function is the identity. We explicitly connect the RC with linear activation and linear read-out to the well-developed time-series literature on vector autoregressive (VAR) processes, which includes representability results via the Wold theorem; such models already perform reasonably well for short-term forecasts. In the case of an RC with linear activation and the now popular quadratic read-out, we explicitly connect to a nonlinear VAR (NVAR), which performs quite well. Further, we associate this paradigm with the now widely popular dynamic mode decomposition (DMD), so that these three methods are, in a sense, different faces of the same concept. We illustrate our observations on popular benchmark examples, including the Mackey-Glass differential delay equation and the Lorenz63 system.
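
To make the linear-RC/VAR connection concrete, below is a minimal sketch of the simplified setting described in the abstract: a reservoir with identity activation whose only trained parameters are ridge-regressed read-out weights, driven by Lorenz63 data. This is an illustration under stated assumptions, not the talk's construction; the reservoir size, spectral radius, Euler integrator, and regularization strength are all illustrative choices.

    # Minimal sketch: linear-activation reservoir computer on Lorenz63 data.
    # Sizes, seeds, and the Euler step are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Generate Lorenz63 training data with a simple Euler integrator.
    def lorenz63(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x = np.array([1.0, 1.0, 1.0])
        traj = np.empty((n_steps, 3))
        for k in range(n_steps):
            dx = np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])
            x = x + dt * dx
            traj[k] = x
        return traj

    u = lorenz63(5000)

    # Linear reservoir: r_{t+1} = A r_t + W_in u_t (identity activation).
    # A and W_in are random and never trained.
    N = 100
    A = rng.normal(size=(N, N))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius < 1
    W_in = rng.normal(scale=0.1, size=(N, 3))

    r = np.zeros(N)
    R = np.empty((len(u) - 1, N))
    for t in range(len(u) - 1):
        r = A @ r + W_in @ u[t]
        R[t] = r

    # Train only the linear read-out by ridge regression on one-step targets.
    Y = u[1:]
    lam = 1e-6
    W_out = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ Y).T

    # Unrolling r_t = sum_k A^k W_in u_{t-1-k} shows that W_out r_t is a
    # (long-memory) vector autoregression in the inputs: the VAR connection.
    print("linear read-out RMSE:", np.sqrt(np.mean((R @ W_out.T - Y) ** 2)))

    # Quadratic read-out: regress on [r, r*r] features; by the same unrolling
    # this corresponds to a nonlinear VAR (NVAR) in the inputs.
    R2 = np.hstack([R, R ** 2])
    W_out2 = np.linalg.solve(R2.T @ R2 + lam * np.eye(2 * N), R2.T @ Y).T
    print("quadratic read-out RMSE:", np.sqrt(np.mean((R2 @ W_out2.T - Y) ** 2)))

Because both the state update and the read-out are linear, the forecast is a linear combination of past inputs (a VAR); swapping in the quadratic feature map gives the NVAR correspondence mentioned in the abstract.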

Presenters

  • Erik Bollt

    Clarkson University

Authors

  • Erik Bollt

    Clarkson University