Optimal learning despite a hundred distracting directions

ORAL

Abstract

Learning from incomplete data requires a notion of measure on parameter space, which is most explicit in the Bayesian framework as a prior distribution. We demonstrate here that ostensibly neutral choices like Jeffreys prior can in fact introduce enormous bias in typical high-dimensional models. Models found in science typically have an effective dimensionality of accessible behaviors much smaller than the number of microscopic parameters. Naively using the invariant volume element, which treats all of these parameters equally, strongly distorts the measure projected onto the subspace of relevant parameters, due to variations in the local co-volume of irrelevant directions. The fact that this co-volume typically varies over many orders of magnitude is what introduces bias into predictions. We present results on principled choices of measure which avoid this issue and lead to unbiased posteriors. These measures allow optimal learning, despite the presence of many parameters which cannot be fixed.
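
For reference, a minimal sketch in standard notation of the construction the abstract refers to (the symbols \pi_J, I, \theta_\parallel, \theta_\perp below are illustrative choices, not the authors' notation). Jeffreys prior is the invariant volume element of the Fisher information metric,

    \pi_J(\theta) \propto \sqrt{\det I(\theta)}, \qquad
    I_{\mu\nu}(\theta) = \mathbb{E}_{x \sim p(x\mid\theta)}\!\left[ \partial_\mu \log p(x\mid\theta)\, \partial_\nu \log p(x\mid\theta) \right].

Schematically splitting \theta into relevant directions \theta_\parallel and irrelevant ones \theta_\perp, the measure this induces on the relevant subspace is

    \pi(\theta_\parallel) \propto \int \sqrt{\det I(\theta_\parallel, \theta_\perp)}\; d\theta_\perp,

so each relevant point is weighted by the local co-volume of its irrelevant directions; when that co-volume varies over many orders of magnitude, the projected prior is far from uniform.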

Presenters

  • Michael C Abbott

    Yale University

Authors

  • Michael C Abbott

    Yale University

  • Benjamin B Machta

Yale University