
Towards a Foundation Model for Fusion: Multimodal Representation Learning of Plasma State and Control

POSTER

Abstract

Understanding and controlling plasma behavior in fusion devices requires integrating information across multiple diagnostics and actuator systems, each capturing a different aspect of the complex plasma state. Traditional approaches often analyze these modalities in isolation, missing important cross-modal dependencies that can provide deeper insight into plasma dynamics and control. We propose a self-supervised learning approach, based on a large-scale masked-autoencoder architecture, designed to learn a unified representation of plasma state and control inputs across multiple diagnostic modalities.
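
As a rough illustration of this idea (a minimal sketch, not the authors' implementation), the PyTorch code below tokenizes each diagnostic or actuator stream with a per-modality projection, replaces a random fraction of the joint token sequence with a learned mask embedding, encodes the result with a shared transformer, and reconstructs each modality with its own head. All names (e.g. MultimodalMAE), dimensions, and hyperparameters are assumptions; positional and modality embeddings are omitted for brevity.

```python
# Minimal sketch of a multimodal masked autoencoder (BERT-style masking).
# All names, dimensions, and hyperparameters are illustrative assumptions,
# not the configuration used in this work.
import torch
import torch.nn as nn


class MultimodalMAE(nn.Module):
    def __init__(self, modality_dims: dict[str, int], d_model: int = 256):
        super().__init__()
        # Per-modality tokenizers project raw channels into a shared space.
        self.tokenizers = nn.ModuleDict(
            {name: nn.Linear(dim, d_model) for name, dim in modality_dims.items()}
        )
        # Learned embedding that stands in for masked-out tokens.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # Per-modality heads reconstruct the original channels.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(d_model, dim) for name, dim in modality_dims.items()}
        )

    def forward(self, batch: dict[str, torch.Tensor], mask_ratio: float = 0.5):
        tokens, slices, start = [], {}, 0
        for name, x in batch.items():          # x: (batch, time, channels)
            t = self.tokenizers[name](x)       # (batch, time, d_model)
            slices[name] = (start, start + t.shape[1])
            start += t.shape[1]
            tokens.append(t)
        z = torch.cat(tokens, dim=1)           # joint token sequence
        # Randomly replace a fraction of tokens with the shared mask embedding.
        mask = torch.rand(z.shape[:2], device=z.device) < mask_ratio
        z = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(z), z)
        h = self.encoder(z)                    # unified latent representation
        recon = {n: self.heads[n](h[:, a:b]) for n, (a, b) in slices.items()}
        return recon, h, mask
```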

The learned representation can be used to identify subtle patterns across modalities that may be invisible to analysis of any single diagnostic, potentially revealing new physics insights about plasma behavior under different control strategies. The model’s ability to reconstruct masked diagnostic data also demonstrates its potential for robust operation during diagnostic failures and for achieving enhanced resolution beyond the limits of single-diagnostic models.[1] Furthermore, the intermediate feature space provides a new, holistic representation of the plasma state that can be leveraged for transfer learning on downstream tasks such as mode identification, instability prediction, scenario design, and optimal control synthesis.
[1] A. Jalalvand et al., 2024, “Multimodal Super-Resolution: Discovering hidden physics and its application to fusion plasmas.”
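
Continuing the sketch above, a hypothetical training step shows how masked reconstruction yields both the self-supervised objective and the intermediate features mentioned in the abstract; the modality names and channel counts below are invented for illustration.

```python
# Hypothetical usage of the sketch above; modality names and channel counts
# are invented, and do not correspond to any particular fusion device.
model = MultimodalMAE({"thomson": 64, "magnetics": 32, "actuators": 8})
batch = {
    "thomson": torch.randn(4, 100, 64),
    "magnetics": torch.randn(4, 100, 32),
    "actuators": torch.randn(4, 100, 8),
}
recon, features, mask = model(batch, mask_ratio=0.75)
# Self-supervised objective: reconstruction error (computed over all tokens
# here for brevity; an MAE-style loss would restrict it to masked positions).
loss = sum(((recon[k] - batch[k]) ** 2).mean() for k in batch)
loss.backward()
# `features` is the intermediate representation that downstream tasks
# (mode identification, instability prediction, control) could fine-tune on.
```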

Presenters

  • Kouroche Bouchiat

    Princeton University

Authors

  • Kouroche Bouchiat

    Princeton University

  • Nathaniel Chen

    Princeton University

  • Peter Steiner

    Princeton University

  • Azarakhsh Jalalvand

    Princeton University

  • Egemen Kolemen

    Princeton University