Lessons from scale in large language models and quantitative reasoning
ORAL · Invited
Abstract
Large language models trained on diverse data have shown impressive results on many tasks involving natural language, in many cases matching or exceeding human performance. Some measures of progress exhibit remarkably robust power-law improvement over many orders of magnitude in dataset, model, and compute scale, while other capabilities remain difficult to extrapolate. One domain that has traditionally been challenging for such models is multi-step quantitative reasoning in mathematics and science. I will discuss recent progress in understanding and extrapolating model capabilities with scale, as well as Minerva, a large language model designed for multi-step STEM problem solving.
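As a minimal sketch of what "power-law improvement with scale" means in practice, the snippet below fits a power law of the form L(C) = a * C^(-b) to a handful of (compute, loss) points by linear regression in log-log space. The data values, variable names, and fitted constants here are purely hypothetical placeholders for illustration; they are not results from the talk.

```python
import numpy as np

# Hypothetical (compute, loss) measurements; placeholders only.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # training compute (FLOPs)
loss = np.array([3.2, 2.8, 2.45, 2.15, 1.9])        # evaluation loss

# A power law L(C) = a * C**(-b) is linear in log-log coordinates:
#   log L = log a - b * log C
slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
a, b = np.exp(intercept), -slope

print(f"Fitted power law: L(C) ≈ {a:.3g} * C^(-{b:.3g})")
```

Fitting in log-log space like this is the standard way scaling-law exponents are estimated; capabilities that follow such a fit can be extrapolated to larger scales, while those that deviate from it are the ones the abstract describes as difficult to extrapolate.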
Presenters
- Ethan Dyer (Google Research)
Authors
- Ethan Dyer (Google Research)