
Automatic Neuron Correspondence Prediction in C. elegans with Deep Learning

ORAL

Abstract

The neurons in C. elegans are well characterized, named, and have stereotyped locations. The ability to find corresponding neurons across animals is needed to compare neural signals, study variability, or collect statistics. Variability in neuronal position across animals makes this correspondence hard to find. We present a deep learning method based on the transformer architecture for finding neural correspondence and apply it to the brain of C. elegans. The model learns to extract features that capture the relative spatial organization of pairs of neurons within and across individuals. The model is trained exclusively on empirically derived synthetic data and is used to predict correspondence between real animals via transfer learning. When compared against held-out human-annotated ground-truth NeuroPAL data, the model finds the correct correspondence for 65.8% of labeled neurons. With added genetically encoded color labeling, the model finds correspondence for 78.1% of labeled neurons. Unlike previous methods, this approach requires no human annotation, straightening, or preprocessing of the animal's pose. The model is parallelizable and much faster than previous methods.
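To illustrate the general idea of a transformer-based matcher that attends over neuron positions from two animals and scores pairings, here is a minimal sketch. All names, dimensions, and the final assignment step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: embed 3-D neuron positions from two worms, let a
# transformer encoder relate neurons within and across the two animals,
# and score all possible pairings with a similarity matrix.
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment


class NeuronMatcher(nn.Module):
    def __init__(self, d_model=128, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(3, d_model)  # (x, y, z) position -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, pos_a, pos_b):
        # pos_a: (1, N, 3) reference worm, pos_b: (1, M, 3) test worm.
        # Concatenating the tokens lets self-attention capture relative
        # spatial organization both within and across the two individuals.
        tokens = torch.cat([self.embed(pos_a), self.embed(pos_b)], dim=1)
        feats = self.encoder(tokens)
        n = pos_a.shape[1]
        feats_a, feats_b = feats[:, :n], feats[:, n:]
        # Entry (i, j) scores matching neuron i in worm A to neuron j in worm B.
        return torch.einsum("bnd,bmd->bnm", feats_a, feats_b)


# Usage: turn the score matrix into a one-to-one correspondence.
model = NeuronMatcher()
pos_a, pos_b = torch.randn(1, 180, 3), torch.randn(1, 180, 3)
with torch.no_grad():
    scores = model(pos_a, pos_b)[0]
rows, cols = linear_sum_assignment(-scores.numpy())  # maximize total similarity
```

Because every candidate pairing is scored in a single forward pass, a model of this shape is naturally parallelizable on a GPU, consistent with the speed claim in the abstract.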

Presenters

  • Xinwei Yu

    Physics, Princeton University

Authors

  • Xinwei Yu

    Physics, Princeton University

  • Matthew S Creamer

    Princeton University

  • Andrew M Leifer

    Physics and Princeton Neuroscience Institute, Princeton University