Trustworthy Machine Learning and Artificial Intelligence Frameworks for Scientists
ORAL · Invited
Abstract
Some scientists hesitate to use artificial intelligence (AI) and machine learning (ML) methods because these models can lack reproducibility, explainability, and transparency; these qualities are collectively known as "trustworthiness". Trustworthy AI frameworks can help overcome this hesitancy by evaluating AI models beyond their performance on a test dataset. Trustworthy AI frameworks for fields such as computer vision, natural language processing, and health care may include social responsibility aspects; a trustworthy AI framework for scientific ML models should also assess the model's agreement with the physical laws of nature. However, a disconnect can arise between the aspects a trustworthy AI framework emphasizes and the resources available to an AI/ML practitioner who wants to verify trustworthiness. This talk presents an overview of several available trustworthy AI frameworks in a scientific context, supports it with a demonstration of some simple, general approaches for quantifying model trustworthiness, and describes work towards a unifying trustworthy AI framework and toolkit for physical scientists.
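As one illustration of the kind of simple, general trustworthiness check mentioned above, the sketch below quantifies a trained regression model's agreement with a known physical law. It is a hypothetical example, not the authors' TRAITS toolkit: the Hooke's-law data, the random-forest model, and the RMS symmetry-violation score are all illustrative assumptions.

```python
# A minimal sketch (not the authors' TRAITS toolkit) of one simple, general
# trustworthiness check: quantifying how well a trained ML model agrees with
# a known physical law. Here the "law" is the odd symmetry of Hooke's law,
# F(-x) = -F(x); the data, model, and score are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: noisy spring-force measurements, F = -k x.
k = 2.0
x_train = rng.uniform(-1.0, 1.0, size=(500, 1))
f_train = -k * x_train.ravel() + rng.normal(0.0, 0.05, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x_train, f_train)

# Physics-consistency score: evaluate the model on mirrored inputs and
# measure the violation of F(-x) = -F(x) on a held-out grid. The sum
# model(x) + model(-x) is exactly zero when the symmetry holds.
x_test = np.linspace(-1.0, 1.0, 201).reshape(-1, 1)
violation = model.predict(x_test) + model.predict(-x_test)
score = np.sqrt(np.mean(violation**2))  # RMS symmetry violation

print(f"RMS violation of F(-x) = -F(x): {score:.4f}")
# A score that is small relative to the force scale suggests the model has
# learned the symmetry; a large score flags physically untrustworthy behavior.
```

A test of this form needs no access to model internals, which makes it usable even when the practitioner's resources are limited to a trained black-box model and an evaluation grid.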
Publications:
- Evaluating the Limits of the Physics Learned by a Machine Learning Model, by Dale, Li, DeCost, Hattrick-Simpers
- Loss Landscape Analysis of Model Accuracy, by Dale, Li, DeCost, Hattrick-Simpers
- Trusted AI Toolkit for Scientists (TRAITS), by Dale, Fehlis, Hattrick-Simpers
Presenters
- Ashley Dale (University of Toronto)

Authors
- Ashley Dale (University of Toronto)
- Yao Fehlis (Artificial, Inc.)
- Jason Hattrick-Simpers (University of Toronto)