Keeping up with LLMs
ORAL · Invited
Abstract
LLMs and their reasoning-optimized counterparts (LRMs) have been improving at a headlong pace. For many knowledge distillation, code generation, and code translation tasks, they are already very capable scientific assistants. This capability extends to solving applied mathematics problems at a reasonable level by exploiting symbolic engines and computational tools. Additionally, from compact prompts, an LRM can generate and marshal a complex computational workflow built from smaller, mutable domain-specific models and tools -- which may be HPC codes in their own right -- that are picked up and modified as needed for the overall task. Contributing to the design of these AI-enhanced workflows, and to methods for establishing the robustness of the results they produce, is an exciting new direction for computational physicists to explore.
Presenters
- Salman Habib, Argonne National Laboratory
Authors
- Salman Habib, Argonne National Laboratory