Lines of Thought in Large Language Models

ORAL

Abstract

How does a Large Language Model think? In other words, how does it process the prompt "Once upon a time" to assemble, word by word, a respectable fairy tale? What we know, by design, is that a piece of text is mapped to a set of high-dimensional vectors, which are then transported across their embedding space by successive transformer layers. The resulting high-dimensional trajectories realize successive contextualization, or 'thinking,' steps, and fully determine the output probability distribution. We aim to characterize the statistical properties of ensembles of these 'lines of thought.' We observe that independent trajectories cluster along a low-dimensional, non-Euclidean manifold, and that their paths can be well approximated by a Langevin equation with a few parameters extracted from the data. We find it remarkable that the vast complexity of such large models can be reduced to a much simpler form, and we reflect on the implications.
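
To make the modeling claim concrete, here is a minimal sketch of how a generic Langevin equation dx = mu(x, t) dt + sigma(t) dW can be integrated across pseudo-layers with an Euler-Maruyama scheme to produce an ensemble of trajectories. The drift and diffusion terms, the embedding dimension, and all parameter values below are illustrative placeholders, not the ones fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_line_of_thought(x0, drift, diffusion, n_layers, dt=1.0):
    """Euler-Maruyama integration of dx = drift(x, t) dt + diffusion(t) dW.

    Each step stands in for one transformer layer transporting the
    embedding; returns the full path of shape (n_layers + 1, d).
    """
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for t in range(n_layers):
        noise = rng.standard_normal(x.shape)          # dW increment
        x = x + drift(x, t) * dt + diffusion(t) * np.sqrt(dt) * noise
        path.append(x.copy())
    return np.stack(path)

# Toy placeholder terms (NOT the fitted model): mean-reverting drift
# toward a fixed direction, with a constant noise amplitude.
d = 16                                   # toy embedding dimension
target = np.ones(d) / np.sqrt(d)         # placeholder drift direction
drift = lambda x, t: 0.1 * (target - x)  # placeholder drift term
diffusion = lambda t: 0.05               # placeholder diffusion term

# An ensemble of 100 independent trajectories through 24 pseudo-layers.
paths = [simulate_line_of_thought(rng.standard_normal(d),
                                  drift, diffusion, n_layers=24)
         for _ in range(100)]
```

With drift and diffusion terms estimated from actual layer-to-layer embedding displacements, an ensemble of this kind is what one would compare against the observed 'lines of thought.'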

Publication: https://arxiv.org/abs/2410.01545

Presenters

  • Raphael Sarfati

    Cornell University

Authors

  • Raphael Sarfati

    Cornell University

  • Toni Jianbang Liu

    Cornell University

  • Nicolas Boullé

    Imperial College London

  • Christopher Earls

    Cornell University