Unveiling the Physics Behind Deep Learning Models: A Path Towards Interpretable AI
ORAL
Abstract
Understanding the underlying physics, including the partial differential equations (PDEs) that govern spatiotemporal data, is crucial for accurate prediction, for comprehending the physics driving the data, and for the potential control of physical systems. However, uncovering these governing equations is challenging because of their inherent complexity. At the same time, deep learning has demonstrated a remarkable ability to serve as a surrogate for intricate datasets, delivering highly accurate predictions. This raises a compelling question: could the mechanisms within deep learning models provide insights into the governing equations behind the data they represent so effectively?
Our ongoing study seeks to address this question by visualizing benchmark PDE models trained at various levels of complexity. Initial findings have revealed intricate patterns in the model weights and biases, suggesting a complex relationship with the underlying PDEs. Pattern-recognition techniques are applied to extract these structures and to quantify the importance of different model components. To explore this relationship further, we are training an inverse symbolic model that aims to establish correlations between deep learning model structures and the PDEs they represent. We will present these early insights and discuss the challenges and future directions of this promising pathway towards interpretable AI.
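To make the weight-visualization idea concrete, the following is a minimal sketch, not the study's actual pipeline: a small fully connected surrogate is fitted to samples of an analytical 1D heat-equation solution, and its layer weight matrices are rendered as heatmaps for inspection. The choice of PDE, the network architecture, and all hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the study's actual pipeline):
# fit a small MLP surrogate u(x, t) to an analytical 1D heat-equation solution,
# then visualize the learned weight matrices as heatmaps.
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Analytical solution of u_t = alpha * u_xx with u(x, 0) = sin(pi x), u(0, t) = u(1, t) = 0
alpha = 0.1
def heat_solution(x, t):
    return np.exp(-alpha * np.pi**2 * t) * np.sin(np.pi * x)

# Sample training data on the space-time domain [0, 1] x [0, 1]
x = np.random.rand(5000, 1)
t = np.random.rand(5000, 1)
u = heat_solution(x, t)
inputs = torch.tensor(np.hstack([x, t]), dtype=torch.float32)
targets = torch.tensor(u, dtype=torch.float32)

# Small fully connected surrogate (hypothetical architecture)
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Standard regression training loop
for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# Render each layer's weight matrix as a heatmap to look for structure
weight_layers = [m for m in model if isinstance(m, nn.Linear)]
fig, axes = plt.subplots(1, len(weight_layers), figsize=(12, 3))
for ax, layer in zip(axes, weight_layers):
    ax.imshow(layer.weight.detach().numpy(), aspect="auto", cmap="viridis")
    ax.set_title(f"{layer.in_features} -> {layer.out_features} weights")
plt.tight_layout()
plt.show()
```

In a study like the one described, such heatmaps would be the starting point for the pattern-recognition and inverse symbolic modeling steps mentioned above.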
Presenters
Ruo-Qian Wang
Rutgers University - New Brunswick, Rutgers, the State University of New Jersey
Authors
Ruo-Qian Wang
Rutgers University - New Brunswick, Rutgers, the State University of New Jersey