Interpretable Deep Learning for Computational Fluid Dynamics

ORAL

Abstract

Can deep learning or symbolic regression supplement traditional simulators in fluid dynamics? How well do such models generalize outside the dataset they learn from, and how well do they preserve statistical properties of the simulated fluid? In this talk, I will present some key observations from our recent research, which aims to answer these questions. I will highlight our new method, "Disentangled Sparsity Networks," which allows one to interpret the internals of a neural network trained on fluid simulations. Not only does this give us a way of interrogating how the deep learning model makes predictions; it also allows us to replace the learned model with a symbolic expression and embed that expression inside a traditional solver. We show that this technique can improve the applicability of symbolic regression to high-dimensional datasets, such as those in fluid dynamics, without imposing priors on the recovered symbolic equation.
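
The abstract describes a two-stage recipe: drive a network's internal features toward sparsity, then fit a symbolic expression to the few surviving features so it can be substituted back into a solver. The sketch below is a minimal, hypothetical illustration of that general recipe, not the Disentangled Sparsity Networks implementation itself; the toy data, architecture, and penalty weight are all assumptions. Only the PySRRegressor interface (from PySR, Cranmer's symbolic-regression library) is a real API.

    import numpy as np
    import torch
    import torch.nn as nn
    from pysr import PySRRegressor

    # Toy stand-in for simulation data: 8 input features, a target that
    # actually depends on only a few of them.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 8)).astype(np.float32)
    y = (X[:, 0] * X[:, 1] + np.sin(X[:, 2])).astype(np.float32)

    # Stage 1: a small encoder/decoder whose 16-dimensional latent layer
    # carries an L1 penalty, pushing most latent features toward zero.
    encoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 16))
    decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(
        [*encoder.parameters(), *decoder.parameters()], lr=1e-3)
    xt, yt = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)
    for step in range(2000):
        latent = encoder(xt)
        loss = (nn.functional.mse_loss(decoder(latent), yt)
                + 1e-3 * latent.abs().mean())  # assumed sparsity weight
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: symbolically regress the most active latent feature as a
    # function of the inputs; the resulting closed-form expression is what
    # could be embedded back inside a traditional solver.
    with torch.no_grad():
        latent = encoder(xt).numpy()
    active = int(np.argmax(np.abs(latent).mean(axis=0)))
    sr = PySRRegressor(niterations=40,
                       binary_operators=["+", "*"],
                       unary_operators=["sin"])
    sr.fit(X, latent[:, active])
    print(sr)

The sparsity penalty is what makes the second stage tractable: symbolic regression scales poorly with dimensionality, so reducing the network's internals to a handful of active features is what lets a symbolic search take over from the learned model.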

Publications

  • https://astroautomata.com/data/sjnn_paper.pdf
  • https://simdl.github.io/files/26.pdf
  • https://arxiv.org/abs/2006.11287

Presenters

  • Miles Cranmer

    Princeton University/DeepMind

Authors

  • Miles Cranmer

    Princeton University/DeepMind

  • Can Cui

    Flatiron Institute

  • Drummond Fielding

    Flatiron Institute

  • Alvaro Sanchez-Gonzalez

    DeepMind

  • Kimberly Stachenfeld

    DeepMind

  • Tobias Pfaff

    DeepMind

  • Jonathan Godwin

    DeepMind

  • Dmitrii Kochkov

    Google Research

  • Peter Battaglia

    DeepMind

  • Shirley Ho

    Flatiron Institute

  • David N. Spergel

    Princeton University