Selected Configuration Interaction using Reinforcement Learning
ORAL
Abstract
Configuration interaction (CI) is a widely used method for solving quantum many-body problems. The challenge in CI is to solve a large sparse eigenvalue problem whose dimension grows rapidly as the number of particles and the size of the Slater determinant basis increase. For many problems, the ground-state and low-lying eigenfunctions exhibit localization, i.e., a small set of the most important basis functions captures most of the information about the system. One approach, often referred to as the selected CI method, selects these important functions to construct an accurate finite-dimensional approximation of the many-body Hamiltonian. Typical selected CI algorithms use physical intuition to choose a few functions as a starting point and then use perturbation theory to select more, but the resulting selections are not globally optimal. In this work, we develop a reinforcement learning (RL) approach. The RL algorithm starts from a set of many-body basis functions. Each action removes some functions from the set and adds new ones, and receives a designated reward. Through episodes of such iterations, we obtain an optimized selection. We calculate error bounds and test the performance of our algorithm against several other selected CI algorithms.
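The abstract describes the selection loop only at a high level, so the following is a minimal Python sketch of what such an RL-driven basis selection might look like. Everything here is an assumption for illustration: the names (select_ci_rl, swap_size, the epsilon-greedy value table) are hypothetical, the bandit-style credit assignment stands in for the authors' actual RL agent, the random symmetric matrix stands in for a real many-body Hamiltonian in a Slater-determinant basis, and a dense eigensolver replaces the sparse solvers used in practice.

```python
# Hedged sketch: epsilon-greedy RL selection of a CI basis subset.
# Reward = lowering of the variational ground-state energy after a swap.
import numpy as np

rng = np.random.default_rng(0)

def ground_energy(H, idx):
    """Lowest eigenvalue of H projected onto the basis subset idx.

    By Cauchy interlacing, this is an upper bound on the full
    ground-state energy, so lowering it is a sensible reward signal.
    """
    sub = H[np.ix_(idx, idx)]
    return np.linalg.eigvalsh(sub)[0]

def select_ci_rl(H, subset_size, swap_size=2, episodes=200, eps=0.2):
    n = H.shape[0]
    value = np.zeros(n)  # running value estimate per basis function
    idx = rng.choice(n, size=subset_size, replace=False)
    e = ground_energy(H, idx)
    best_idx, best_e = idx.copy(), e
    for _ in range(episodes):
        # Action: drop the lowest-value members of the current set ...
        out = idx[np.argsort(value[idx])[:swap_size]]
        pool = np.setdiff1d(np.arange(n), idx)
        # ... and add candidates, exploring at random with prob. eps.
        if rng.random() < eps:
            incoming = rng.choice(pool, size=swap_size, replace=False)
        else:
            incoming = pool[np.argsort(value[pool])[-swap_size:]]
        new_idx = np.concatenate([np.setdiff1d(idx, out), incoming])
        e_new = ground_energy(H, new_idx)
        reward = e - e_new  # positive if the swap lowered the energy
        value[incoming] += reward  # credit the added functions
        value[out] -= reward       # debit the removed ones
        idx, e = new_idx, e_new
        if e < best_e:
            best_idx, best_e = idx.copy(), e
    return np.sort(best_idx), best_e

# Toy "Hamiltonian": symmetric, diagonally dominant, mimicking localization.
n = 60
A = rng.normal(scale=0.1, size=(n, n))
H = (A + A.T) / 2 + np.diag(np.linspace(-1.0, 1.0, n))
idx, e = select_ci_rl(H, subset_size=15)
print("selected basis:", idx)
print("variational energy:", e, "exact:", np.linalg.eigvalsh(H)[0])
```

In this toy setting the selected 15-function subset recovers the ground-state energy to good accuracy because the stand-in Hamiltonian is strongly localized; the actual method, reward shaping, and error bounds are those of the paper, not of this sketch.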
Authors
- Lihao Yan, University of Notre Dame
- Li Zhou, Fudan University
- Mark A. Caprio, University of Notre Dame
- Weiguo Gao, Fudan University
- Chao Yang, Lawrence Berkeley National Laboratory