Control-oriented model learning with a recurrent neural network
ORAL
Abstract
In recent years, model learning has been boosted by increased computational power and the availability of large amounts of high-quality data. Here, we focus on approximating the dynamics of complex systems with Recurrent Neural Networks (RNNs) for control purposes. RNNs can accurately approximate the attractor of chaotic systems by solely observing their time evolution [Pathak et al. 2018] and can predict the state of the system over long time horizons [Vlachas et al. 2018]. However, it is crucial to ensure that the learned model generalizes to data unseen during training (the overfitting issue). In this work, we consider the Kuramoto-Sivashinsky equation in the chaotic regime and show that, to learn a generalizable model, it is necessary to train on more than one trajectory emanating from each of the equilibrium solutions of the chaotic attractor. In particular, we combine a Long Short-Term Memory architecture for the time prediction with a convolutional network for the spatial embedding. The quality of the proxy with respect to the actual dynamics is also discussed in terms of Lyapunov exponents and of the distance between trajectories in phase space.
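The abstract specifies only the general architecture (a convolutional spatial embedding feeding an LSTM for time prediction), not its details. As a rough illustration, a minimal PyTorch-style sketch of such a surrogate could look as follows; the class name KSSurrogate, the layer sizes, and the grid resolution are assumptions for the example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KSSurrogate(nn.Module):
    """Hypothetical surrogate model: 1D convolutions embed each spatial
    snapshot, an LSTM advances the embedding in time, and a linear head
    decodes the hidden state back to the full field."""

    def __init__(self, n_grid=64, embed_dim=32, hidden_dim=128):
        super().__init__()
        # Spatial embedding: treat the field u(x, t) as a 1-channel signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_grid // 4), embed_dim),
        )
        # Temporal model: LSTM over the sequence of spatial embeddings.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Decode the hidden state back to the spatial grid.
        self.decoder = nn.Linear(hidden_dim, n_grid)

    def forward(self, u_seq):
        # u_seq: (batch, time, n_grid) snapshots of the field
        b, t, n = u_seq.shape
        z = self.encoder(u_seq.reshape(b * t, 1, n)).reshape(b, t, -1)
        h, _ = self.lstm(z)
        return self.decoder(h)  # predicted snapshots, one step ahead

# Example: one-step-ahead training loss on dummy trajectories
model = KSSurrogate()
u = torch.randn(8, 100, 64)          # 8 trajectories, 100 time steps
pred = model(u[:, :-1])              # predict u at steps 1..99
loss = nn.functional.mse_loss(pred, u[:, 1:])
```

Training on several trajectories started near each equilibrium of the attractor, as argued in the abstract, would then simply mean enlarging the batch of u sequences fed to such a model.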
–
Presenters
-
Michele Alessandro Bucci
LIMSI-CNRS
Authors
-
Michele Alessandro Bucci
LIMSI-CNRS
-
Onofrio Semeraro
LIMSI-CNRS
-
Alexandre Allauzen
LIMSI-CNRS
-
Laurent Cordier
Univ de Poitiers, Institut PPRIME, CNRS - Université de Poitiers - ENSMA
-
Guillaume Wisniewski
LIMSI-CNRS
-
Lionel Mathelin
LIMSI-CNRS