When it is more powerful to infer than to see
ORAL
Abstract
Understanding how people process and learn information remains an elusive goal in the study of human statistical learning. In particular, many recent studies have probed how humans process information that is organized as a network. Prior work has shown that humans learn such networks by building internal mental models of the network structure; however, these mental models are often inaccurate due to limitations in human information processing. These limitations raise a clear question: given a target network that one wishes a human to learn, how should the network presented to that human be designed so as to correct for errors in human learning? To answer this question, we study the optimization of learnability in modular and lattice graphs. We find that the learnability of both networks can be enhanced by reinforcing connections within modules or small clusters. We then extend our analyses to networks created from generative models, and finally to real-world networks. Overall, our findings suggest that the accuracy of human network learning can be significantly enhanced through purposeful misrepresentation of the presented network structure.
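The abstract does not specify how the presented network is modified, but the idea of "reinforcing connections within modules" can be illustrated with a minimal sketch. The example below is hypothetical, not the authors' method: it builds a small three-module graph, upweights within-module edges by an assumed factor, and converts the result into a random-walk transition matrix that could drive the sequence of stimuli shown to a learner. The module sizes, the weight values, and the random-walk presentation scheme are all illustrative assumptions.

```python
import networkx as nx
import numpy as np

# Build a simple modular graph: 3 modules of 5 nodes each,
# fully connected within modules, sparsely connected between them.
modules = [list(range(i * 5, (i + 1) * 5)) for i in range(3)]
G = nx.Graph()
for module in modules:
    G.add_edges_from((u, v) for i, u in enumerate(module) for v in module[i + 1:])
# Connect the modules in a ring with single "boundary" edges.
G.add_edges_from([(4, 5), (9, 10), (14, 0)])

# Hypothetical "reinforcement": upweight within-module edges in the
# presented (weighted) version of the network, leaving the target
# topology itself unchanged.
within_weight = 1.5   # assumed boost for within-module edges
between_weight = 1.0
node_to_module = {n: m for m, module in enumerate(modules) for n in module}
for u, v in G.edges():
    same_module = node_to_module[u] == node_to_module[v]
    G[u][v]["weight"] = within_weight if same_module else between_weight

# Normalize the weighted adjacency matrix into a random-walk transition
# matrix, which could be used to generate the stimulus sequence shown
# to a learner.
A = nx.to_numpy_array(G, weight="weight")
P = A / A.sum(axis=1, keepdims=True)
print(np.round(P[4], 3))  # transition probabilities from a boundary node
```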
Presenters
- William Qian, University of Pennsylvania

Authors
- William Qian, University of Pennsylvania
- Christopher Lynn, University of Pennsylvania; City University of New York
- Andrei A. Klishin, University of Michigan; Bioengineering, University of Pennsylvania
- Danielle Bassett, Department of Bioengineering, University of Pennsylvania; Department of Physics, University of Pennsylvania