Coarse-graining with information theory and the relative entropy
COFFEE_KLATCH · Invited
Abstract
There remain many fundamental and practical questions about how coarse-grained models should be developed. Are there theoretically intuitive and numerically robust strategies for turning small-scale all-atom simulations into coarse models suitable for large-scale modeling? How can we identify which atomic details are unnecessary and can be discarded? Are there systematic ways to detect emergent physics? Here we discuss a fundamentally new approach to this problem. We propose that a natural way to view the coarse-graining problem is through information theory. A quantity called the relative entropy measures the information lost upon coarse-graining, and hence the (inverse) fitness of a particular coarse-grained model. Minimizing the relative entropy thus provides a broadly applicable variational principle for coarse-graining, and a way to "automatically" discover and generate coarse models of many systems. We show that this approach enables very simple but surprisingly accurate models of water, hydrophobic interactions, self-assembling peptides, and proteins, models that offer new physical insights and make large-scale simulations possible. We discuss both theoretical and numerical aspects of the approach, highlighting in particular a new coarse-graining algorithm that efficiently optimizes coarse-grained models with thousands of free parameters. We also discuss how the relative entropy framework suggests novel strategies for predicting the errors of coarse models, for identifying the relevant degrees of freedom to retain, and for understanding the relationships among other coarse-graining methodologies.
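For concreteness, the central quantity can be sketched in the standard notation of the relative-entropy coarse-graining literature (this is a sketch, not the talk's own equations): with x an atomistic configuration, M the coarse-graining map, and p_T and p_M the configurational distributions of the target all-atom system and the coarse-grained model,

\[
S_{\mathrm{rel}} \;=\; \int \mathrm{d}x \, p_T(x)\,\ln\!\frac{p_T(x)}{p_M\!\left(M(x)\right)} \;+\; \left\langle S_{\mathrm{map}} \right\rangle_T \;\ge\; 0,
\qquad
S_{\mathrm{map}}(X) \;=\; \ln\!\int \mathrm{d}x \,\delta\!\left(M(x)-X\right),
\]

where the mapping entropy $S_{\mathrm{map}}$ counts the atomistic configurations that map to a given coarse-grained one. In the canonical ensemble this reduces to $S_{\mathrm{rel}} = \beta\langle U_M - U_T\rangle_T - \beta(A_M - A_T) + \langle S_{\mathrm{map}}\rangle_T$, which depends on the model parameters only through averages in the (fixed) target ensemble and the model free energy $A_M$; this is what makes variational minimization practical.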
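A useful consequence of this form is that the parameter gradient of the relative entropy reduces to a difference of ensemble averages, $\partial S_{\mathrm{rel}}/\partial\lambda = \beta\langle \partial U_M/\partial\lambda\rangle_T - \beta\langle \partial U_M/\partial\lambda\rangle_M$, so a model can be optimized by alternating short model simulations with gradient updates. Below is a minimal, self-contained Python sketch of this idea on a toy one-dimensional system; it is an illustration under stated assumptions, not the optimizer described in the talk, and all names in it are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Mapped atomistic" target samples: a double-well-like distribution,
    # stood in for here by a two-component Gaussian mixture.
    x_target = np.concatenate([rng.normal(-1.0, 0.3, 5000),
                               rng.normal(+1.0, 0.3, 5000)])

    def dU_dlam(x):
        # Per-sample gradient of the model energy
        # U_M(x; lam) = lam[0]*x^2 + lam[1]*x^4 with respect to its
        # two parameters (beta is absorbed into lam).
        return np.stack([x**2, x**4], axis=-1)

    def sample_model(lam, n_steps=20000, step=0.5):
        # Crude Metropolis sampling of p_M ~ exp(-U_M(x; lam)).
        U = lambda y: lam[0]*y**2 + lam[1]*y**4
        x, samples = 0.0, []
        for i in range(n_steps):
            x_new = x + rng.normal(0.0, step)
            if rng.random() < np.exp(min(0.0, U(x) - U(x_new))):
                x = x_new
            if i >= n_steps // 4:      # discard burn-in
                samples.append(x)
        return np.array(samples)

    # Gradient descent on S_rel, using
    #   dS_rel/dlam = <dU_M/dlam>_target - <dU_M/dlam>_model
    lam = np.array([1.0, 0.1])         # initial guess
    for it in range(40):
        x_model = sample_model(lam)
        grad = dU_dlam(x_target).mean(axis=0) - dU_dlam(x_model).mean(axis=0)
        lam -= 0.1 * grad
        lam[1] = max(lam[1], 0.02)     # keep p_M normalizable
        if it % 10 == 0:
            print(f"iter {it:2d}  lam = [{lam[0]:+.3f}, {lam[1]:+.3f}]"
                  f"  |grad| = {np.linalg.norm(grad):.4f}")

A fixed learning rate is used here purely for simplicity; practical implementations typically exploit curvature (fluctuation) information for much faster convergence, which matters when the model has thousands of parameters.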
Authors
M. Scott Shell
University of California, Santa Barbara