Principles underlying parallel channels in visual motion estimation
ORAL · Invited
Abstract
Animals have evolved visual circuits that detect motion, a task crucial for survival. Canonical models compute motion signals from spatiotemporal correlations in visual stimuli. However, in Drosophila and other animals, visual systems split motion signals into ON/OFF channels with opposing direction selectivity, an intermediate representation that canonical models do not explain. Here, we apply the information bottleneck (IB) framework to investigate parallel channels encoding visual motion. First, we used the IB method to transform visual inputs into optimized encodings under various tradeoff conditions and studied the structure of the resulting solutions. Second, because canonical models fell short of the IB bound, we explored how the optimal solutions differ from canonical models. Third, we applied the variational information bottleneck (VIB), a deep learning approximation to IB, to find interpretable, continuous motion channels. Motion channels resembling Drosophila's arise naturally in a highly compressed regime, where obtaining additional velocity information requires disproportionately more information about the stimulus. This suggests that motion channels in Drosophila favor compression over retaining more velocity information. In summary, by applying the IB framework and its compression-retention tradeoff, this work reveals potential explanations for the structure underlying parallel channels encoding visual motion.
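For reference, a sketch of the standard IB objective presumably underlying the tradeoff described above (the abstract does not state the exact formulation): a compressed encoding $T$ of the stimulus $X$ is chosen to retain information about the velocity $Y$ by minimizing the Lagrangian
$$\mathcal{L}_{\mathrm{IB}} = I(X;T) \;-\; \beta\, I(T;Y),$$
where $\beta$ sets the compression-retention tradeoff and small $\beta$ corresponds to the highly compressed regime. VIB replaces the intractable mutual-information terms with variational bounds optimized by a neural encoder.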
Presenters
- Damon Clark, Yale University
Authors
- Damon Clark, Yale University