Asymptotic Dynamics of Alternating Minimization for Bilinear Regression

POSTER

Abstract

Alternating minimization is a widely used algorithm for multivariable optimization: the objective function is optimized with respect to a subset of the variables while the rest are kept fixed, and the process is repeated iteratively while alternating the subset under optimization. With its ability to exploit certain grouping properties of the optimization variables, alternating minimization has served as a workhorse for machine learning algorithms such as expectation-maximization and matrix estimation. However, the properties of the algorithm, such as its speed of convergence, are not well understood, especially since the objective function is non-convex in most non-trivial cases.
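
A minimal sketch of the algorithm may help fix ideas. The particular bilinear model y_i ≈ u^T A_i v, the Gaussian covariates, and the dimensions below are illustrative assumptions rather than the exact setup of the paper; the point is only the alternating structure, in which each subproblem reduces to ordinary least squares.

```python
import numpy as np

# Illustrative alternating minimization for a bilinear regression model
# y_i ≈ u^T A_i v. The model, dimensions, and noise level are assumptions
# chosen for this sketch, not the paper's precise setting.

rng = np.random.default_rng(0)
N, M, P = 20, 15, 400                              # sizes of u, v and sample count

A = rng.normal(size=(P, N, M)) / np.sqrt(N * M)    # Gaussian covariates
u_true = rng.normal(size=N)
v_true = rng.normal(size=M)
y = np.einsum('pnm,n,m->p', A, u_true, v_true) + 0.01 * rng.normal(size=P)

u = rng.normal(size=N)                             # random initialization
v = rng.normal(size=M)

for t in range(50):
    # Fix v and minimize over u: y_i ≈ (A_i v)^T u is an ordinary least-squares problem
    Phi_u = np.einsum('pnm,m->pn', A, v)
    u, *_ = np.linalg.lstsq(Phi_u, y, rcond=None)

    # Fix u and minimize over v: y_i ≈ (A_i^T u)^T v
    Phi_v = np.einsum('pnm,n->pm', A, u)
    v, *_ = np.linalg.lstsq(Phi_v, y, rcond=None)

loss = np.mean((np.einsum('pnm,n,m->p', A, u, v) - y) ** 2)
print(f"final mean squared residual: {loss:.3e}")
```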

Given this background, this study investigates the asymptotic dynamics of alternating minimization applied to a non-convex bilinear objective with normally distributed covariates. This is achieved by applying the replica method to a multi-temperature glassy system that unfolds the algorithm's time evolution. Our results show that the dynamics can be described effectively by a two-dimensional discrete stochastic process in which each step depends on all previous time steps, revealing the structure of the memory dependence in the evolution of alternating minimization. The theoretical framework developed in this work can be applied to the analysis of various iterative algorithms, extending beyond the scope of alternating minimization.
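
The sketch below only illustrates the general shape of such an effective description: a two-dimensional discrete process whose update at step t involves the entire past trajectory through a memory term plus noise. The memory kernel, noise scale, and nonlinearity are hypothetical placeholders, not the effective equations derived in the paper.

```python
import numpy as np

# Purely illustrative: a 2-D discrete stochastic process with full memory,
# mimicking only the *structure* of the effective dynamics described above.
# kernel(), the noise scale, and tanh() are placeholder assumptions.

rng = np.random.default_rng(1)
T = 100
x = np.zeros((T + 1, 2))            # two-dimensional state at each time step
x[0] = rng.normal(size=2)

def kernel(t, s):
    # hypothetical memory kernel coupling step t to an earlier step s
    return 0.5 ** (t - s) * np.eye(2)

for t in range(T):
    memory = sum(kernel(t, s) @ x[s] for s in range(t + 1))   # depends on all previous steps
    noise = 0.1 * rng.normal(size=2)                          # effective Gaussian noise
    x[t + 1] = np.tanh(memory) + noise

print(x[-1])
```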

Publication: K. Okajima and T. Takahashi, Asymptotic Dynamics of Alternating Minimization for Bilinear Regression, arXiv:2402.04751 (2024).

Presenters

  • Koki Okajima

    Department of Physics, The University of Tokyo

Authors

  • Koki Okajima

    Department of Physics, The University of Tokyo

  • Takashi Takahashi

    Institute for Physics of Intelligence, The University of Tokyo