Adaptively Switching Gradient Descent for Reliable PINN Training with Guarantees
ORAL
Abstract
Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for integrating physics-based constraints with data to address forward and inverse problems in machine learning. Despite their potential, the implementation of PINNs is hampered by several challenges, including convergence, stability, and the design of neural networks and loss functions. In this paper, we introduce a novel training scheme that addresses these challenges by framing the training process as a constrained optimization problem. Using a quadratic program (QP)-based gradient descent law, our approach simplifies loss-function design and guarantees stability of the training dynamics as they converge to optimal neural network parameters. This methodology enables adaptive shifting, over the course of training, between the different loss terms: data-based losses and partial differential equation (PDE) residual losses. We demonstrate these methods on problems in beam physics and on other physics problems involving PDEs in several variables.
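The abstract above does not spell out the QP-based update law, so the following is only an illustrative sketch of the general idea of blending a data-loss gradient with a PDE-residual gradient via a small quadratic program. It uses the standard two-objective minimum-norm QP (as in multiple-gradient descent methods), which admits a closed-form solution; the function name `combined_descent_direction` and the toy gradients are hypothetical and are not taken from the paper.

```python
import numpy as np

def combined_descent_direction(g_data, g_pde):
    """Solve the two-loss quadratic program
        min_{a in [0, 1]}  || a*g_data + (1 - a)*g_pde ||^2
    in closed form. The negative of the minimizer is a common
    descent direction for both losses whenever one exists."""
    diff = g_data - g_pde
    denom = float(diff @ diff)
    if denom < 1e-12:  # gradients (nearly) identical: any blend works
        a = 0.5
    else:
        # Stationary point of the quadratic in a, clipped to [0, 1].
        a = float(np.clip((g_pde @ (g_pde - g_data)) / denom, 0.0, 1.0))
    return a * g_data + (1.0 - a) * g_pde, a

# Toy demo: a data-loss gradient and a PDE-residual gradient.
g_data = np.array([1.0, 0.0])
g_pde = np.array([0.5, 1.0])
d, a = combined_descent_direction(g_data, g_pde)
# Stepping along -d decreases both losses to first order,
# since d @ g_data > 0 and d @ g_pde > 0 here.
```

In a full PINN training loop, `g_data` and `g_pde` would be the parameter gradients of the data loss and the PDE residual loss at the current iterate, and the weight `a` would shift adaptively between the two terms as training progresses.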
Presenters
- Alan Williams (Los Alamos National Laboratory)

Authors
- Alan Williams (Los Alamos National Laboratory)
- Mahindra Rautela (Los Alamos National Laboratory)
- Christopher Leon (Los Alamos National Laboratory)
- Alexander Scheinker (Los Alamos National Laboratory)