Reinforcement Learning for Ramp-down Scenario Design and Active Disruption Avoidance
ORAL
Abstract
Plasma disruptions pose a major threat to burning plasma devices. It is therefore desirable to develop emergency shutdown scenarios that minimize ramp-down time while avoiding disruptions and adapting to the real-time conditions of the plasma. Prior works performed trajectory optimization with the transport solver RAPTOR to minimize ramp-down time while avoiding disruptive limits, but that approach is not immediately amenable to letting the plasma control system (PCS) adapt to new real-time conditions. In this work, we adopt a reinforcement learning approach and train, on a POPCON-like (Plasma OPerational CONtours) model, a control policy that outputs current ramp rate, auxiliary heating, and fueling commands. Given plasma state variables, the control policy seeks to minimize ramp-down time while avoiding disruptive limits. This approach may be useful both for active disruption avoidance demonstrations on existing machines and for assisting offline scenario design for burning plasma experiments. We demonstrate the offline scenario design use case by feeding control trajectories generated by the policy into a RAPTOR simulation of the SPARC primary reference discharge (PRD), arriving at a candidate fast ramp-down scenario that avoids disruptive limits.
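To make the problem setup concrete, the sketch below shows one way the ramp-down task described above could be posed as a reinforcement learning environment: the state holds plasma quantities, the action is (current ramp rate, auxiliary heating, fueling), and the reward penalizes elapsed time plus any crossing of a disruptive limit. This is an illustrative sketch only; the toy 0-D dynamics, limit proxies, numerical values, and reward weights are assumptions for illustration and do not represent the POPCON-like model or the SPARC parameters used in the actual work.

```python
# Illustrative gym-style environment sketch for a ramp-down RL problem.
# All dynamics, limits, and constants below are placeholder assumptions.
import numpy as np


class RampDownEnv:
    DT = 0.1          # control interval [s] (assumed)
    IP_TARGET = 0.5   # plasma current [MA] at which the ramp-down ends (assumed)

    def reset(self):
        # Flat-top-like initial condition: current [MA],
        # line-averaged density [1e20 m^-3], stored energy [MJ]. Placeholders.
        self.state = np.array([8.7, 3.0, 20.0])
        self.t = 0.0
        return self.state.copy()

    def step(self, action):
        """action = (dIp/dt [MA/s], P_aux [MW], fueling rate [1e20 m^-3 / s])"""
        dip_dt, p_aux, fueling = action
        ip, ne, w = self.state

        # Toy dynamics: current follows the commanded ramp rate; density and
        # stored energy relax with fixed, assumed time constants.
        ip = max(ip + dip_dt * self.DT, 0.0)
        ne = ne + (fueling - 0.2 * ne) * self.DT
        w = w + (p_aux - w / 2.0) * self.DT
        self.state = np.array([ip, ne, w])
        self.t += self.DT

        # Simple disruptive-limit proxies (illustrative only): Greenwald
        # density fraction and a normalized-pressure-like quantity.
        a_minor = 0.57                       # minor radius [m] (assumed)
        n_gw = ip / (np.pi * a_minor ** 2)   # Greenwald density [1e20 m^-3]
        greenwald_frac = ne / max(n_gw, 1e-6)
        beta_proxy = w / max(ip, 1e-6)

        violated = greenwald_frac > 1.0 or beta_proxy > 5.0
        done = violated or ip <= self.IP_TARGET

        # Reward: constant time penalty (minimize ramp-down duration) plus a
        # large penalty for crossing a disruptive limit (weights assumed).
        reward = -self.DT - (100.0 if violated else 0.0)
        return self.state.copy(), reward, done, {"t": self.t, "violated": violated}
```

A policy trained against an environment of this form (for example with a standard policy-gradient method) would map plasma state variables to ramp-rate, heating, and fueling commands; as described in the abstract, the resulting control trajectories can then be replayed through a higher-fidelity transport simulation such as RAPTOR to check the candidate scenario against disruptive limits.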
Presenters
- Allen Wang, Massachusetts Institute of Technology
Authors
- Allen Wang, Massachusetts Institute of Technology
- Darren T Garnier, Massachusetts Institute of Technology, MIT Plasma Science and Fusion Center