Interpretability Inspires: How explainable AI helps improve Top Tagging

ORAL

Abstract

Using state-of-the-art methods of explainable AI (XAI), we explore the interpretability of deep neural network (DNN) models designed to identify jets coming from top quark decays in high-energy proton-proton collisions at the Large Hadron Collider (LHC). We study the relative importance of low-level and high-level input features, how feature importance varies across different XAI metrics, and how latent space representations encode information and learn to correlate with physical quantities. We visualize the activity of hidden layers to understand how DNNs relay information across layers, and how this understanding can help make such models significantly simpler by enabling effective model reoptimization and hyperparameter tuning.
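The abstract does not specify which XAI metrics are used, but one common way to quantify the relative importance of input features is permutation importance: shuffle one feature column at a time and measure the resulting drop in classifier accuracy. The sketch below is purely illustrative and is not the authors' method; the toy "tagger" and the three-feature dataset are invented stand-ins for a trained top-tagging DNN and its jet-level inputs.

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = baseline accuracy minus the mean
    accuracy after randomly shuffling column j of X."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the model's access to feature j
            scores.append(np.mean(model(Xp) == y))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy classifier whose decision depends only on feature 0
# (a stand-in for, e.g., a discriminating high-level jet variable).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)

imp = permutation_importance(model, X, y)
# Feature 0 carries all the signal, so its importance dominates.
```

In practice, different metrics (gradients, occlusion, SHAP-style attributions) can rank the same features differently, which is exactly the variation across XAI metrics the abstract refers to.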

Presenters

  • Avik Roy

    University of Illinois at Urbana-Champaign

Authors

  • Avik Roy

    University of Illinois at Urbana-Champaign

  • Mark S Neubauer

    University of Illinois at Urbana-Champaign

  • Ayush Khot

    University of Illinois at Urbana-Champaign