Disruption prediction: from shallow to deep learning and interpretability techniques

ORAL

Abstract

Disruption prediction has the same appeal as the autonomous vehicle challenge: its solution could be a drastic leap forward. So far, machine learning-based disruption predictors have shown different, device-dependent performance [Rea PPCF 2018]. Nevertheless, realizing a universal predictor, portable across existing and future tokamaks and working without extensive empirical tuning, is extremely important. Such a predictor needs to warn of an impending disruption hundreds of milliseconds in advance, informing the plasma control system (PCS) of the offending feature(s) so that the plasma can be steered away from the disruptive operational space. We have recently embedded a Random Forest model in the real-time PCS of a fusion device, obtaining a baseline disruption predictor that, running continuously over four months of operations, is showing encouraging results. We will discuss this shallow learning approach as well as deeper architectures. To better understand disruption dynamics and optimize strategies for disruption avoidance, explainable predictions need to be provided: we will present several interpretability strategies and their implications for disruption prediction.
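The shallow-learning baseline described in the abstract can be sketched as follows. This is a purely illustrative Python example: the input signals, labels, and alarm threshold are synthetic assumptions, not the actual PCS implementation, and it only shows the general pattern of a Random Forest emitting a "disruptivity" probability plus a simple feature-importance readout as one interpretability handle.

```python
# Illustrative sketch (NOT the actual device code): a Random Forest trained
# on synthetic plasma-signal data to output a per-time-slice "disruptivity".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
# Three hypothetical real-time diagnostic signals per time slice.
X = rng.normal(size=(n, 3))
# Synthetic labeling rule, purely so the toy problem is learnable:
# a slice is "disruptive" when a weighted sum of signals exceeds 1.
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], y[:1500])

# "Disruptivity" = probability of the disruptive class; a PCS would act
# (avoid or mitigate) when this crosses an alarm threshold (0.5 here,
# an arbitrary choice for illustration).
disruptivity = clf.predict_proba(X[1500:])[:, 1]
alarm = disruptivity > 0.5
acc = clf.score(X[1500:], y[1500:])

# One simple interpretability strategy: impurity-based feature importances
# indicate which signals drive the prediction.
importances = clf.feature_importances_
```

In this toy setup the first two features carry all the signal, so their importances dominate; on a real device such a readout would point the PCS toward the offending feature(s) mentioned in the abstract.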

Presenters

  • Cristina Rea

    MIT Plasma Science and Fusion Center, Massachusetts Institute of Technology

Authors

  • Cristina Rea

    MIT Plasma Science and Fusion Center, Massachusetts Institute of Technology

  • Robert S Granetz

    MIT Plasma Science and Fusion Center, Massachusetts Institute of Technology

  • Kevin J Montes

    MIT Plasma Science and Fusion Center, Massachusetts Institute of Technology

  • Roy Alexander Tinguely

    MIT Plasma Science and Fusion Center, Massachusetts Institute of Technology