Big data paradox in the global QCD analysis of proton structure
ORAL
Abstract
The big data paradox formulated by Xiao-Li Meng states that the bigger the data, the surer we fool ourselves. Precision physics programs at the high-luminosity Large Hadron Collider and the Electron-Ion Collider rely on large data sets to obtain tight constraints on parton distribution functions (PDFs) used for a variety of theoretical predictions. In the context of these programs, the big data paradox suggests that more experimental data do not automatically raise the accuracy of PDFs -- close attention to data quality is just as essential. Using the example of recent global analyses of proton PDFs by the CTEQ-TEA and other groups, we discuss how these issues affect PDF uncertainties in the key LHC processes.
–
Publication: Big data paradox in the global QCD analysis of proton structure, by Aurore Courtoy, Joey Huston, Pavel Nadolsky, Keping Xie, Mengshi Yan, and C.-P. Yuan, in preparation.
Presenters
-
Pavel M Nadolsky
Southern Methodist University
Authors
-
Pavel M Nadolsky
Southern Methodist University
-
Aurore Courtoy
Instituto de Física, UNAM