Generalized aliasing in large data-sets: Lessons from pseudo-spectral methods
ORAL
Abstract
Effects of aliasing are commonly seen in signal processing and in direct numerical simulations of fluid dynamics, particularly for pseudo-spectral methods. These aliasing effects have been well studied and are accounted for in most high-level simulations. The advent of 'artificial intelligence' and massive data sets in recent years has led to the breakdown of the traditional bias-variance tradeoff commonly used in statistics to explain the so-called 'sweet spot' necessary to achieve the best model fit to known data. This tradeoff fails to explain the success of deep neural networks and other recent advances, where the 'sweet spot' in interpolation space yields far more errors than an over-parameterized fit does.
We show that this breakdown in interpolation methods can be adequately explained by a generalization of the concept of aliasing, which is well understood in the signal processing and computational fluid dynamics communities. This leads to a new label-independent decomposition of the model error that provides insight into model selection and a principled explanation of the need for over-parameterization for large datasets.
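The classical aliasing phenomenon the abstract generalizes can be seen with a minimal sketch (assumed for illustration, not taken from the talk): sampling a 9 Hz sinusoid at 10 Hz produces samples indistinguishable from a 1 Hz sinusoid, since a frequency above the Nyquist limit fs/2 folds back to |f - fs| = 1 Hz (with a sign flip in this case).

```python
import math

# Hypothetical illustration: sample a 9 Hz sine at fs = 10 Hz.
# Nyquist frequency is fs/2 = 5 Hz, so 9 Hz cannot be resolved
# and aliases onto 1 Hz (here with opposite sign).
fs = 10.0
n_samples = 20

samples_9hz = [math.sin(2 * math.pi * 9.0 * n / fs) for n in range(n_samples)]
alias_1hz = [-math.sin(2 * math.pi * 1.0 * n / fs) for n in range(n_samples)]

# The two sample sequences agree to machine precision:
max_diff = max(abs(a - b) for a, b in zip(samples_9hz, alias_1hz))
print(max_diff < 1e-12)  # True: 9 Hz is indistinguishable from -1 Hz
```

The identity behind this is sin(2π·9·n/10) = sin(2πn − 2πn/10) = −sin(2π·1·n/10); any interpolant built from these samples will reconstruct the low-frequency alias rather than the true signal.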
–
Presenters
-
Jared P Whitehead
Brigham Young University
Authors
-
Jared P Whitehead
Brigham Young University
-
Gus L.W. Hart
Brigham Young University
-
Mark K Transtrum
Brigham Young University
-
Tyler Jarvis
Brigham Young University