Intrinsic Motivation in Dynamical Control Systems

ORAL · Invited

Abstract

Biological systems often choose actions without an explicit reward signal, a phenomenon known as intrinsic motivation. For example, a human baby learns to control its limbs, right itself, and walk long before getting somewhere starts leading to rewards. The computational principles underlying this behavior remain poorly understood. Here I will discuss our recent work on an information-theoretic approach to intrinsic motivation, based on maximizing an agent's empowerment (the mutual information between its past actions and future states). I will show that this approach generalizes previous attempts to formalize intrinsic motivation, introduce a computationally efficient algorithm for its implementation, and connect it to familiar and novel quantities describing a dynamical system. I will illustrate the utility of the method on a few benchmark control problems. I will discuss how connecting this method with modern machine learning could open the door to designing practical, intrinsically motivated artificial agents that resemble animals in their behaviors. This work was conducted in collaboration with Stas Tiomkin, Daniel Polani, and the late Naftali Tishby, to whose memory this talk is dedicated.
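The talk's own algorithm is not reproduced here. As a minimal illustration of the quantity being maximized: for a discrete channel from actions to future states, empowerment is the channel capacity max over p(a) of I(A; S'), which can be computed with the standard Blahut-Arimoto iteration. The function names below (`empowerment`, `_kl_per_action`) are illustrative, not from the talk.

```python
import numpy as np

def _kl_per_action(p_s_given_a, p_s):
    """D_KL( p(s'|a) || p(s') ) for each action a, in nats."""
    ratio = np.ones_like(p_s_given_a)
    # Divide only where p(s'|a) > 0; zero entries contribute nothing to the KL
    np.divide(p_s_given_a, p_s, out=ratio, where=p_s_given_a > 0)
    return np.sum(p_s_given_a * np.log(ratio), axis=1)

def empowerment(p_s_given_a, iters=500, tol=1e-12):
    """Empowerment of a discrete action -> future-state channel:
    the capacity max_{p(a)} I(A; S'), via Blahut-Arimoto.
    `p_s_given_a[a, s]` is the probability of reaching state s after action a."""
    n_a = p_s_given_a.shape[0]
    p_a = np.full(n_a, 1.0 / n_a)  # start from a uniform action distribution
    for _ in range(iters):
        d = _kl_per_action(p_s_given_a, p_a @ p_s_given_a)
        new_p_a = p_a * np.exp(d)      # Blahut-Arimoto update
        new_p_a /= new_p_a.sum()
        if np.max(np.abs(new_p_a - p_a)) < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    # Mutual information at the (near-)optimal action distribution
    return float(p_a @ _kl_per_action(p_s_given_a, p_a @ p_s_given_a))
```

For example, a noiseless channel with four actions leading to four distinct states has empowerment log 4 (two bits), while a channel in which every action leads to the same state has empowerment zero, matching the intuition that an empowered agent is one whose actions reliably distinguish future states.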

Presenters

  • Ilya M Nemenman

    Emory University

Authors

  • Ilya M Nemenman

    Emory University