Filtering techniques for statistically undersampled turbulent data sets

ORAL

Abstract

Extracting averaged values from low-sample-size turbulent data sets can be challenging because of the natural variation in the variables caused by turbulent flows. Filtering techniques (Leonard 1974, Germano 1992) can be applied to these data sets to better match the statistics obtained from larger-sample-size data. The experimental data from the turbulent mixing tunnel (TMT) (Charonko & Prestridge 2017) provides an excellent data set for testing these techniques: the sample size is large, the samples are high resolution, and variable-density effects are present. From the full data set, which contains 10,000 samples per location, a subset is chosen to represent a low-sample-size data set. Filtering techniques are then applied to this subset and the resulting statistics are compared to those of the full data set. These techniques have application to the validation of simulations in which the domain has a very large number of spatial grid points, limiting the amount of time-resolved data.
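To make the procedure concrete, the sketch below illustrates the general idea of the study under stated assumptions: a spatial box filter (in the spirit of Leonard 1974) is applied to a randomly drawn subset of snapshots, and the filtered subset statistics are compared against the full-ensemble statistics. This is not the authors' implementation; the array layout, filter width, subset size, and the synthetic data are all illustrative choices rather than TMT values.

```python
# Minimal sketch, assuming `fields` is a hypothetical array of shape
# (n_samples, ny, nx) holding one fluctuating quantity (e.g., a velocity
# component) per snapshot. Filter width and subset size are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def subset_statistics(fields, n_subset, filter_width):
    """Box-filter each snapshot in a random subset, then return its
    ensemble mean and variance alongside the full-ensemble values."""
    idx = rng.choice(fields.shape[0], size=n_subset, replace=False)
    subset = fields[idx]

    # Spatial top-hat (box) filter applied snapshot by snapshot.
    filtered = uniform_filter(subset, size=(1, filter_width, filter_width),
                              mode="nearest")

    return {
        "subset_mean": filtered.mean(axis=0),
        "subset_var": filtered.var(axis=0),
        "full_mean": fields.mean(axis=0),
        "full_var": fields.var(axis=0),
    }

# Synthetic stand-in for a large experimental ensemble: 10,000 noisy snapshots.
fields = rng.normal(size=(10_000, 64, 64))
stats = subset_statistics(fields, n_subset=100, filter_width=5)
print(np.abs(stats["subset_mean"] - stats["full_mean"]).max())
```

The comparison at the end is the basic check described in the abstract: whether the filtered low-sample statistics land closer to the full-ensemble values than the unfiltered subset would.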

Presenters

  • Austin Davis

University of Victoria, Los Alamos National Laboratory

Authors

  • Austin Davis

University of Victoria, Los Alamos National Laboratory

  • John James Charonko

    Los Alamos National Laboratory

  • Katherine P Prestridge

Los Alamos National Laboratory