Designing networks to accurately learn 2D turbulence closures

POSTER

Abstract

Scientifically meaningful deployment of machine-learned subgrid closures in large-eddy simulations (LES) requires learned closures to be more accurate or faster to compute than existing closure models. Here we present a systematic study of the accuracy of neural LES closures for forced 2D turbulence as a function of the network architecture and hyperparameters. We examine statistically steady flows where we can control the location of the filtering scale with respect to the stationary spectrum, and include a range of architectures that allow us to distinguish the effects of nonlocality and finite-differencing errors on the closure accuracy. We consider fully-connected, convolutional, and U-net architectures trained on filtered snapshots from highly resolved direct numerical simulations (DNS). We vary the breadth and depth of the networks as well as the input variables and cost functions used during training. We examine how these choices impact the accuracy of the learned closures in predicting the true subgrid stresses from DNS, and how they affect the statistics of coarse-grained forward models (LES) run with the learned closures.
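To illustrate the kind of training target described above, the sketch below computes the "true" subgrid stress from a DNS snapshot: the filtered product of the velocity components minus the product of the filtered components, τ = ⟨uv⟩ − ⟨u⟩⟨v⟩. This is a minimal NumPy example using a simple block-averaging (box) filter on a synthetic periodic field; the filter choice, field variables, and grid sizes here are illustrative assumptions, not the actual pipeline used in the study.

```python
import numpy as np

def box_filter(field, width):
    """Coarse-grain a square 2D periodic field by averaging width x width blocks."""
    n = field.shape[0]
    assert n % width == 0, "grid size must be divisible by the filter width"
    return field.reshape(n // width, width, n // width, width).mean(axis=(1, 3))

def subgrid_stress(u, v, width):
    """tau = filter(u*v) - filter(u)*filter(v): the residual stress a closure must model."""
    return box_filter(u * v, width) - box_filter(u, width) * box_filter(v, width)

# Synthetic stand-in for a DNS velocity snapshot (Taylor-Green cells plus noise).
rng = np.random.default_rng(0)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) + 0.1 * rng.standard_normal((n, n))
v = -np.cos(X) * np.sin(Y) + 0.1 * rng.standard_normal((n, n))

# Filtering to a 4x coarser grid yields the subgrid-stress field the network learns.
tau = subgrid_stress(u, v, width=4)
```

Pairs of (filtered inputs, τ) computed this way over many snapshots form the supervised training set; the filter width sets where the closure scale sits relative to the spectrum.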

Authors

  • Keaton Burns

Massachusetts Institute of Technology, Flatiron Institute

  • Ronan Legin

    University of Montreal & McGill University

  • Adrian Liu

    McGill University

  • Laurence Perreault-Levasseur

    Mila, University of Montreal, Flatiron Institute

  • Yashar Hezaveh

    University of Montreal, Flatiron Institute

  • Siamak Ravanbakhsh

    Mila, McGill University

  • Gregory Wagner

    Massachusetts Institute of Technology