Self-Supervised Identification of Coherent Modes in Tokamaks

POSTER

Abstract

Fusion diagnostics provide abundant information on the state of a tokamak, which can be used for physics studies or for control. Critical information is carried by resonant structures such as magnetic modes. At present, these modes and their consequences are identified by experts, or by exploiting geometric properties of the tokamak to generate candidate mode structures. Recent advances in computer vision and acoustic modelling have shown that such representations can be self-learned. We apply these principles directly to individual diagnostics and obtain preliminary results in automatic mode detection and learning.

Two methods will be explored using Mirnov coil and CO2 interferometer diagnostics. First, the eigenvalues of similarity matrices built from deep self-supervised features will be analyzed to separate coherent modes [1]. Second, a foundation-model autoencoder will reconstruct the signals through an internally represented codebook of modes [2]. These methods aim to encode semantic modal information, to create an automatic mode-detection scheme for shot analysis, and to advance knowledge of new types of modes and of interactions between known modes.
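As a rough illustration of the first approach, the sketch below builds a similarity matrix from a synthetic multichannel Mirnov-like signal and inspects its eigenvalue spectrum; a single coherent rotating mode concentrates the spectrum in a dominant sine/cosine eigenvalue pair. The coil count, mode number, frequency, and similarity measure are illustrative assumptions, not details of the poster's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1e-3, 2000)          # 1 ms window (assumed)
n_coils = 16                            # toroidal Mirnov array (assumed)

# Synthetic n=2 magnetic mode rotating past the coil array, plus noise.
mode_n = 2
phases = 2 * np.pi * mode_n * np.arange(n_coils) / n_coils
signals = np.cos(2 * np.pi * 50e3 * t[None, :] + phases[:, None])
signals += 0.1 * rng.standard_normal(signals.shape)

# Pairwise channel similarity (here: normalized correlation).
centered = signals - signals.mean(axis=1, keepdims=True)
unit = centered / np.linalg.norm(centered, axis=1, keepdims=True)
similarity = unit @ unit.T

# Eigendecomposition: one coherent rotating mode yields two dominant
# eigenvalues (a sine/cosine pair spanning the rotation).
eigvals = np.linalg.eigvalsh(similarity)[::-1]
dominant_fraction = eigvals[:2].sum() / eigvals.sum()
print(f"top-2 eigenvalue fraction: {dominant_fraction:.2f}")
```

In the self-supervised setting, the hand-crafted correlation above would be replaced by similarities between learned features, but the spectral readout is analogous.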

[1] K. E. J. Olofsson et al., Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling, Plasma Phys. Control. Fusion 56, 095012 (2014).

[2] A. Jalalvand, M. Curie, S. Kim, P. Steiner, J. Seo, Q. Hu, A. O. Nelson, and E. Kolemen, Diag2Diag: Multimodal Super-Resolution Diagnostics for Physics Discovery with Application to Fusion, arXiv:2405.05908.

Presenters

  • Nathaniel Chen

    Princeton University

Authors

  • Nathaniel Chen

    Princeton University

  • Peter Steiner

    Princeton University

  • Azarakhsh Jalalvand

    Princeton University

  • Egemen Kolemen

    Princeton University

  • Kouroche Bouchiat

    Princeton University