Assessing Systematic Discrepancies, Uncertainties and their Correlations in Experiments
ORAL · Invited
Abstract
Scientists are often faced with discrepant experimental data, that is, experimental data that disagree systematically by more than their one-sigma bounds. If one tries to calibrate a model to these data, or to produce evaluated data from them, one faces the dilemma of deciding (a) which data to trust and (b) what uncertainty to assign to models calibrated to the experimental data.
Here, we explore techniques to answer these questions through detailed uncertainty quantification and AI/ML methods. First, we quantify whether all uncertainties that should be reported for a particular dataset are actually accounted for in the reported experimental uncertainties. To this end, templates of expected measurement uncertainties are applied to analyze whether all pertinent uncertainties were provided, and to estimate stand-in values where they were not (a minimal sketch of such a check is given below). This analysis step helps us create a consistent database and rule out the possibility that experimental discrepancies are caused by underestimated uncertainties. In a second step, we explore the physics root cause of the remaining discrepancies between experiments using a Bayesian model that pinpoints the systematic discrepancies and then relates them to metadata features unique to each measurement (see the second sketch below). This information provides clues as to which experiments to trust and which to investigate further via simulations or subsequent experiments.
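As a rough illustration of the first step, the following sketch compares a dataset’s reported uncertainty budget against a template of expected uncertainty sources and substitutes stand-in values for any source that was not reported. The source names and percentage values are hypothetical placeholders, not the actual templates of the cited work.

```python
# Minimal sketch: check reported uncertainty sources against a template
# of expected sources and fill in stand-in values for missing ones.
# All source names and percentages are hypothetical placeholders.

# Template: expected uncertainty sources with stand-in values (% of measured value).
TEMPLATE = {
    "counting_statistics": 1.0,
    "detector_efficiency": 2.0,
    "normalization": 3.0,
    "background_subtraction": 1.5,
}

def complete_uncertainty_budget(reported: dict) -> dict:
    """Return a full budget; flag sources missing from `reported` and fill stand-ins."""
    budget = {}
    for source, stand_in in TEMPLATE.items():
        if source in reported:
            budget[source] = reported[source]
        else:
            print(f"missing '{source}': using stand-in {stand_in}%")
            budget[source] = stand_in
    return budget

# Example: an experiment that reported only two of the four expected sources.
reported = {"counting_statistics": 0.8, "normalization": 2.5}
budget = complete_uncertainty_budget(reported)
total = sum(u**2 for u in budget.values()) ** 0.5  # quadrature sum, assumes independence
print(f"total uncertainty: {total:.2f}%")
```

The second step can be pictured, in simplified form, as a regression of per-dataset systematic offsets on metadata features. The sketch below uses a Gaussian-prior (MAP/ridge) linear model as a stand-in for the full Bayesian machinery of the cited publication; the offsets and one-hot features are invented for illustration only.

```python
import numpy as np

# Hypothetical systematic offsets (%) of five datasets from a common reference
# curve, and one-hot metadata features: [detector_A, detector_B, facility_X, facility_Y].
offsets = np.array([2.1, 1.8, -0.3, 2.3, -0.1])
features = np.array([
    [1, 0, 1, 0],   # dataset 1: detector A at facility X
    [1, 0, 0, 1],   # dataset 2: detector A at facility Y
    [0, 1, 1, 0],   # dataset 3: detector B at facility X
    [1, 0, 1, 0],   # dataset 4: detector A at facility X
    [0, 1, 0, 1],   # dataset 5: detector B at facility Y
])

# MAP estimate of a Bayesian linear model with Gaussian prior N(0, 1/lam):
# beta = (X^T X + lam I)^-1 X^T y  (equivalent to ridge regression).
lam = 0.5
beta = np.linalg.solve(features.T @ features + lam * np.eye(4), features.T @ offsets)
for name, b in zip(["detector_A", "detector_B", "facility_X", "facility_Y"], beta):
    print(f"{name}: {b:+.2f}%")
# A large coefficient (here, detector_A) points at the metadata feature most
# associated with the systematic discrepancy, i.e., a clue for what to investigate.
```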
Finally, an effect called Peelle’s Pertinent Puzzle is explored, in which the weighted mean of two experimental values falls below both values and their respective uncertainties. We will discuss how experimental covariances need to be properly formulated to avoid this effect in fitting procedures relying on these covariances; a small numerical illustration is sketched below.
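As a minimal numerical illustration, consider Peelle’s classic numbers: two measurements of the same quantity, 1.5 and 1.0, each carrying a 10% independent (statistical) and a 20% fully correlated (normalization) relative uncertainty. The sketch below computes the generalized-least-squares mean twice: with the correlated component referenced to the measured values, the mean drops to about 0.88, below both data points; referenced instead to a common value, the mean lands between them. The reference-based formulation shown is one standard cure discussed in the literature and stands in for the fuller treatment in the talk.

```python
import numpy as np

def gls_mean(x, cov):
    """Generalized-least-squares estimate of a common mean and its variance."""
    w = np.linalg.solve(cov, np.ones_like(x))   # V^-1 1
    var = 1.0 / (np.ones_like(x) @ w)
    return var * (x @ w), var

# Peelle's classic example: two measurements of the same quantity, each with
# 10% independent and 20% fully correlated relative uncertainty.
x = np.array([1.5, 1.0])
stat, syst = 0.10, 0.20

# (a) Correlated component scaled by the *measured* values -> puzzle appears.
cov_a = np.diag((stat * x) ** 2) + np.outer(syst * x, syst * x)
mean_a, _ = gls_mean(x, cov_a)
print(f"data-referenced covariance: mean = {mean_a:.3f}")   # ~0.882, below both!

# (b) Correlated component scaled by a *common reference* value -> puzzle gone.
ref = x.mean()
cov_b = np.diag((stat * x) ** 2) + (syst * ref) ** 2 * np.ones((2, 2))
mean_b, _ = gls_mean(x, cov_b)
print(f"reference-based covariance: mean = {mean_b:.3f}")   # ~1.154, between the data
```

Scaling the common relative error by the individual measured values gives the two points different absolute systematic uncertainties, and it is this mismatch in the correlated part of the covariance that drives the fitted mean below both data points.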
Publication: Noah Walton et al., Computer Physics Communications 315 (2025) 109698
Presenters
Denise Neudecker
Los Alamos National Laboratory (LANL)
Authors
Denise Neudecker
Los Alamos National Laboratory (LANL)