
A symbolic system that synthesises an internal model of an algebraic theory of the data and prior knowledge

ORAL

Abstract

Symbolic approaches to AI excel at mathematical transparency and reasoning, but without learning from data they have limited contact with the real world. Here we propose an approach inspired by Model Theory that combines the mathematical transparency of symbolic systems with the ability to learn internal models without the use of optimization. In a first step, we embed the properties of our data and prior formal knowledge into an algebraic theory consisting of first-order sentences, using symbols that refer to objects, parts of objects or abstract concepts. In a second step, the system learns by synthesizing internal symbols, or atoms, that do not refer directly to items in the world but instead form a model of the algebraic theory. Specifically, we are interested in the freest atomized model: among all possible models of the algebraic theory, it is the one that satisfies the most negative sentences. We prove that this model is guaranteed to find a rule in the data if one exists and enough data are available. The subset of atoms of the freest model that is most stable during training is shown to be a generalizing model. For small datasets it can also obtain an approximation to, or even exactly, the underlying rule that the freest model finds in the large-data limit. We believe that these rule-seeking models open many new possibilities at the mathematical, cognitive and practical levels.
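The notion of a model "satisfying the most negative sentences" can be illustrated with a toy sketch. This is not the authors' algorithm, only a minimal assumption-laden example: each constant is assigned a set of internal atoms, a positive sentence "a ≤ b" holds when the atoms of a are a subset of the atoms of b, and the negative sentences a model satisfies are the ordered pairs for which that inclusion fails. The constants and atom names below are hypothetical.

```python
# Toy sketch (not the authors' algorithm): an "atomized model" assigns
# each constant a set of internal atoms. A positive sentence "a <= b"
# holds when atoms(a) is a subset of atoms(b); every ordered pair for
# which the inclusion fails is a negative sentence the model satisfies.
# The freest model is the one satisfying every positive sentence of the
# theory while satisfying as many negative sentences as possible.

def satisfies(model, a, b):
    """True if the atomized model satisfies the sentence a <= b."""
    return model[a] <= model[b]          # set inclusion

def count_negatives(model):
    """Count ordered pairs (a, b), a != b, for which a <= b does NOT hold."""
    return sum(1 for a in model for b in model
               if a != b and not satisfies(model, a, b))

# Hypothetical theory over three constants with one positive sentence,
# "cat <= animal"; all other order relations are left negative.
model = {
    "cat":    {"atom1"},
    "dog":    {"atom2"},
    "animal": {"atom1", "atom2"},
}

assert satisfies(model, "cat", "animal")   # the positive sentence holds
print(count_negatives(model))              # → 4
```

Of the six ordered pairs of distinct constants, only "cat ≤ animal" and "dog ≤ animal" hold, so this atomization satisfies four negative sentences; a model that gave every constant the same atom set would satisfy all positive sentences trivially but zero negative ones, which is why maximizing negatives keeps the model as unconstrained as the theory allows.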

Publication: None

Presenters

  • Gonzalo de Polavieja

    Centro Champalimaud

Authors

  • Gonzalo de Polavieja

    Centro Champalimaud