
Improving Neural Operators with Physics Informed Token Transformers

ORAL

Abstract

Solving partial differential equations (PDEs) is at the core of many fields of science and engineering. While classical approaches are often prohibitively slow, machine learning models typically fail to incorporate complete system information. Over the past few years, neural operators have shown significant promise in PDE surrogate modeling. However, neural operators generally do not incorporate physically relevant information, namely the governing equations, into the learning process. To address this issue, we introduce the Physics Informed Token Transformer (PITT), which places a transformer-based architecture on top of existing, popular neural operators. PITT incorporates physics knowledge by embedding the governing PDEs into the learning process: it uses an equation tokenization method to learn an analytically-driven numerical update operator. By tokenizing the equations and embedding partial derivatives, the transformer becomes aware of the physical processes underlying the data. Combining neural operators with transformers in this way offers a powerful and flexible method to incorporate physics knowledge into neural operator learning. We demonstrate this by testing PITT on challenging 1D and 2D PDE neural operator prediction tasks, where it outperforms popular neural operator models and extracts physically relevant information from the governing equations.
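To give a flavor of the equation tokenization idea described above, here is a minimal Python sketch. The vocabulary, token set, and `tokenize_equation` helper are hypothetical illustrations, not the PITT implementation; the paper's actual tokenizer and embedding scheme are described in the arXiv preprint linked below.

```python
# Minimal sketch of PDE tokenization, assuming a simple symbol-level
# vocabulary. The token set and helper below are hypothetical and only
# illustrate the idea; they are not taken from the PITT codebase.

# Hypothetical vocabulary: operators, derivative symbols, variables, digits.
VOCAB = ["<pad>", "<eos>", "+", "-", "*", "/", "=", "(", ")",
         "u", "u_t", "u_x", "u_xx", "nu", "0", "1"]
TOKEN_TO_ID = {tok: i for i, tok in enumerate(VOCAB)}

def tokenize_equation(equation: str, max_len: int = 16) -> list[int]:
    """Map a whitespace-separated PDE string to a fixed-length ID sequence."""
    ids = [TOKEN_TO_ID[tok] for tok in equation.split()]
    ids.append(TOKEN_TO_ID["<eos>"])
    # Pad to a fixed length so equations can be batched for the transformer.
    ids += [TOKEN_TO_ID["<pad>"]] * (max_len - len(ids))
    return ids

# Example: viscous Burgers' equation, u_t + u * u_x - nu * u_xx = 0.
print(tokenize_equation("u_t + u * u_x - nu * u_xx = 0"))
```

Once an equation is mapped to integer IDs like this, the sequence can be passed through a learned embedding layer and attended over alongside the neural operator's inputs, which is how the governing equation enters the learning process.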

Publication: "Physics Informed Token Transformer," available on arXiv at https://arxiv.org/abs/2305.08757 and under review at APL Machine Learning

Presenters

  • Cooper Lorsung

    Carnegie Mellon University

Authors

  • Cooper Lorsung

    Carnegie Mellon University

  • Zijie Li

    Carnegie Mellon University

  • Amir Barati Farimani

    Carnegie Mellon University