Burn Control in ITER Using a Reinforcement Learning Approach

POSTER

Abstract

The highly nonlinear and coupled dynamics of burning plasmas in ITER will demand active regulation of the plasma temperature and density (burn control) [1]. Recently, reinforcement learning (RL) has been explored as an alternative to traditional model-based control synthesis for plasma-control problems in tokamaks. Although guaranteeing stability and convergence can be challenging for RL-based controllers, RL-based control synthesis may handle greater complexity in the data-generating plasma-response model used for training. In this work, an RL-based burn controller is designed to track user-specified references for the plasma states, which may change over time. The reference-tracking controller is trained within a synthetic burning-plasma environment using a model-free RL algorithm. This environment is modeled by the nonlinear, zero-dimensional Control-Oriented Burning plAsma simuLaTor (COBALT), which captures the energy and density evolution in ITER. The burn controller selects optimal values for the external deuterium fueling, tritium fueling, ion heating, and electron heating delivered by the actuators such that the discrepancy between the observed plasma state and the user-specified reference is minimized. The effectiveness of the proposed controller is demonstrated through nonlinear simulations based on ITER scenarios.

[1] V. Graber and E. Schuster, Nucl. Fusion 64 (2024) 086007.
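The control setup described in the abstract (a model-free RL agent choosing four actuator values so that the plasma state tracks a user-specified reference) can be sketched as a minimal environment interface. This is only an illustrative assumption, not the authors' implementation: COBALT is not reproduced here, the linear-relaxation dynamics below are a toy placeholder for the nonlinear zero-dimensional model, and all names (`ToyBurnEnv`, the state and action orderings) are hypothetical.

```python
import numpy as np

class ToyBurnEnv:
    """Hypothetical stand-in for a COBALT-like training environment.

    State (2-vector): a normalized plasma energy and density channel.
    Action (4-vector, each in [0, 1]): deuterium fueling, tritium
    fueling, ion heating, electron heating -- the four actuators
    named in the abstract.

    Dynamics here are a simple linear relaxation toward an
    actuator-driven equilibrium; a real environment would use the
    nonlinear burning-plasma model instead.
    """

    def __init__(self, ref, dt=0.1):
        self.ref = np.asarray(ref, dtype=float)  # user-specified reference
        self.dt = dt
        self.state = np.zeros_like(self.ref)

    def reset(self):
        self.state = np.zeros_like(self.ref)
        return self.state.copy()

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=float), 0.0, 1.0)
        # Toy coupling: fueling actuators drive channel 0 (density),
        # heating actuators drive channel 1 (energy).
        drive = np.array([action[0] + action[1], action[2] + action[3]])
        self.state += self.dt * (drive - self.state)
        # Reward: negative squared tracking error, so maximizing reward
        # minimizes the state-reference discrepancy.
        reward = -float(np.sum((self.state - self.ref) ** 2))
        return self.state.copy(), reward

# A fixed action whose equilibrium matches the reference drives the
# tracking error (and hence the reward) toward zero.
env = ToyBurnEnv(ref=[1.0, 0.8])
s = env.reset()
for _ in range(200):
    s, r = env.step([0.5, 0.5, 0.4, 0.4])
```

In a full training loop, a model-free RL algorithm would replace the fixed action with a learned policy mapping the observed state (and reference) to the four actuator commands.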

Presenters

  • Ian Ward

    Lehigh University

Authors

  • Ian Ward

    Lehigh University

  • Vincent R. Graber

    Lehigh University

  • Nicholas J. Rist

    Lehigh University

  • Sai T. Paruchuri

    Lehigh University

  • Eugenio Schuster

    Lehigh University