In Computational Physics, Machine Learning Research is a Speculative Bubble
POSTER
Abstract
In recent years, there has been massive growth in the number of published papers applying machine learning (ML) to computational physics. Some highly cited papers claim that ML can dramatically accelerate the solution of partial differential equations (PDEs). Given the volume of papers being published and the apparent success of these results, it would be reasonable to conclude that ML is becoming a standard tool in computational physics. We will argue that this narrative, that machine learning is causing rapid progress in computational physics, is wrong. Instead, we will argue that within computational physics, ML research is a speculative bubble. We believe that the failure of ML papers to compare against state-of-the-art baselines is both the primary cause of this bubble and the clearest evidence of its existence. The extraordinary versatility of machine learning has made it easy to forget that versatility does not imply utility: for machine learning to be useful, it must beat state-of-the-art baselines. As we will show, the most impressive results and most highly cited papers either use a baseline that is not state-of-the-art or fail to compare to a baseline at all. As a result, readers are misled into believing that machine learning has been far more successful than it really is.
Publications
- [1] How to Design Stable Machine Learned PDE Solvers for Scalar Hyperbolic PDEs, submitted to NeurIPS 2022 (under review)
- [2] Within Computational Physics, Machine Learning is a Speculative Bubble (planned)
Presenters
-
Nicholas B McGreivy
Princeton University
Authors
-
Nicholas B McGreivy
Princeton University
-
Ammar Hakim
Princeton Plasma Physics Laboratory