Generative Learning of Continuous Data by Tensor Networks
ORAL
Abstract
Beyond their origin in modeling many-body quantum systems, tensor networks have emerged as a promising class of models for solving machine learning problems, notably in unsupervised generative learning. While possessing many desirable features arising from their quantum-inspired nature, tensor network generative models have previously been restricted to binary or categorical data, limiting their usefulness in real-world modeling tasks. We overcome this by introducing a new family of tensor network generative models for continuous data, capable of learning distributions containing continuous random variables. We develop our method in the setting of matrix product states, first deriving a universal expressivity theorem proving the ability of this model family to approximate any reasonably smooth probability density function with arbitrary precision. We then benchmark the performance of this model on synthetic and real-world datasets, finding that the model learns and generalizes well on distributions of continuous and discrete variables. We develop methods for modeling different data domains, and introduce a trainable compression layer which we show increases model performance for a given amount of memory and computational resources.
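To make the construction in the abstract concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation) of a matrix product state density model over continuous variables in [0, 1]. It uses an orthonormal cosine feature map, so the normalization integral reduces to an exact tensor-network contraction; all names here (feature_map, ContinuousMPS, the chosen basis and shapes) are illustrative assumptions.

```python
import numpy as np

def feature_map(x, d):
    """Orthonormal basis on [0, 1]: phi_0(x) = 1, phi_k(x) = sqrt(2) cos(k pi x)."""
    k = np.arange(d)
    phi = np.sqrt(2.0) * np.cos(k * np.pi * x)
    phi[0] = 1.0
    return phi

class ContinuousMPS:
    """Illustrative MPS Born machine: p(x) proportional to |psi(x)|^2."""

    def __init__(self, n_sites, phys_dim, bond_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # One rank-3 core per variable: (left bond, feature index, right bond).
        dims = [1] + [bond_dim] * (n_sites - 1) + [1]
        self.cores = [rng.normal(size=(dims[i], phys_dim, dims[i + 1]))
                      for i in range(n_sites)]
        self.phys_dim = phys_dim

    def amplitude(self, xs):
        """Contract the MPS cores with the feature vectors phi(x_i)."""
        v = np.ones((1,))
        for core, x in zip(self.cores, xs):
            phi = feature_map(x, self.phys_dim)
            v = v @ np.einsum('lpr,p->lr', core, phi)
        return v.item()

    def norm_sq(self):
        """Integral of |psi|^2 over [0,1]^n. Since the basis is orthonormal,
        the integral of phi(x) phi(x)^T over [0,1] is the identity, so the
        normalizer is a standard transfer-matrix contraction."""
        E = np.ones((1, 1))
        for core in self.cores:
            E = np.einsum('ab,apc,bpd->cd', E, core, core)
        return E.item()

    def density(self, xs):
        return self.amplitude(xs) ** 2 / self.norm_sq()

if __name__ == "__main__":
    mps = ContinuousMPS(n_sites=4, phys_dim=8, bond_dim=10)
    print(mps.density([0.1, 0.5, 0.3, 0.9]))
```

In a setup like this, one would train the cores by gradient descent on the negative log-likelihood; the orthonormal basis keeps the normalizer exactly computable throughout training.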
Presenters
- Alexander H Meiburg, University of California, Santa Barbara
Authors
- Alexander H Meiburg, University of California, Santa Barbara
- Jacob E Miller, Zapata Computing Inc.
- Jing Chen, Simons Foundation
- Alejandro Perdomo-Ortiz, Zapata Computing Inc.