Computing with scale-invariant neural representations

ORAL

Abstract

The Weber-Fechner law is perhaps the oldest quantitative relationship in psychology. Consider the problem of the brain representing a function $f(x)$. Different neurons have receptive fields that support different parts of the range, such that the $i$th neuron has a receptive field centered at $x_i$. Weber-Fechner scaling refers to the finding that the width of the receptive field scales with $x_i$, as does the spacing between the centers of adjacent receptive fields. Weber-Fechner scaling is exponentially resource-conserving: because the centers are spaced geometrically, a range spanning many orders of magnitude can be covered by a number of neurons that grows only logarithmically with the extent of the range. Neurophysiological evidence suggests that neural representations obey Weber-Fechner scaling in the visual system, and perhaps in other systems as well. We describe an optimality constraint that is solved by Weber-Fechner scaling, providing an information-theoretic rationale for this principle of neural coding. Weber-Fechner scaling can be generated within a mathematical framework built on the Laplace transform. Within this framework, simple computations such as translation, correlation, and cross-correlation can be accomplished. This framework can in principle be extended to provide a general computational language for brain-inspired cognitive computation on scale-invariant representations.
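As a concrete illustration of the two ideas above, here is a minimal Python sketch. All parameter values, grids, and function names are our own choices for illustration, not the authors' implementation. The first half builds a Weber-Fechner receptive-field bank with geometrically spaced centers and proportional widths; the second encodes a function via a discretized Laplace transform over geometrically spaced decay rates $s$ and translates it in the Laplace domain using the identity $\mathcal{L}\{f(x-\delta)\}(s) = e^{-s\delta}F(s)$ for $f$ supported on $x \ge 0$.

```python
import numpy as np

# --- Weber-Fechner receptive-field bank (illustrative parameters) --------
# Centers x_i are spaced geometrically (x_{i+1} = (1 + c) x_i) and widths
# grow in proportion to the center, so every tuning curve has the same
# shape when plotted on a logarithmic axis.
x_min, x_max, c = 0.1, 100.0, 0.25
n_units = int(np.ceil(np.log(x_max / x_min) / np.log(1 + c))) + 1
centers = x_min * (1 + c) ** np.arange(n_units)
widths = 0.5 * c * centers  # width proportional to center (Weber-Fechner)

def tuning(x, i):
    """Gaussian receptive field of unit i; width scales with its center."""
    return np.exp(-0.5 * ((x - centers[i]) / widths[i]) ** 2)

# Logarithmic resource use: units needed grow like log(x_max / x_min).
print(f"{n_units} units tile [{x_min}, {x_max}]")

# --- Laplace-domain encoding and translation ------------------------------
# Encode f(x) as F(s) = int_0^inf f(x) e^{-s x} dx, one coefficient per
# geometrically spaced decay rate s. Translating f by delta > 0 is then a
# pointwise multiplication of the coefficients: F(s) -> e^{-s delta} F(s).
x = np.linspace(0.0, 50.0, 5001)
dx = x[1] - x[0]
s = np.geomspace(0.05, 5.0, 32)

def laplace_encode(f_vals):
    """Discretized one-sided Laplace transform on the grid x."""
    return (f_vals * np.exp(-np.outer(s, x))).sum(axis=1) * dx

f = np.exp(-0.5 * ((x - 10.0) / 1.0) ** 2)         # bump at x = 10
F_shifted = laplace_encode(f) * np.exp(-s * 5.0)    # translate by delta = 5
F_direct = laplace_encode(np.exp(-0.5 * ((x - 15.0) / 1.0) ** 2))
print(np.max(np.abs(F_shifted - F_direct)))         # ~0 up to grid error
```

The point of the second half is that translation of the represented function reduces to a pointwise operation on the encoding coefficients; pointwise primitives of this kind are the sort of building blocks from which the correlation and cross-correlation computations mentioned in the abstract can plausibly be composed.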

Authors

  • Marc Howard

    Boston University

  • Karthik Shankar

    Boston University