Multi-animal pose tracking using deep neural networks
ORAL
Abstract
Dissecting behavior in freely moving animals at fast timescales requires rich representations of their motor dynamics. Recently, we developed a method to automate the estimation of animal pose from videos using deep neural networks (Pereira et al., 2019). This method, termed LEAP, detects body part positions in videos of single animals. Extending these techniques to a multi-animal context presents technical challenges, such as assigning detected body parts to the correct animal. Here we present a new framework, termed SLEAP (Social LEAP Estimates Animal Poses), that explicitly models the relationships between body parts, enabling accurate multi-animal pose estimation. The framework implements multiple neural network meta-architectures, which we empirically evaluate on tracking sub-tasks. We demonstrate the generalizability of this framework by applying it to videos of a variety of animals, including a high-resolution dataset of freely interacting fruit flies, which we use to construct a map of postural dynamics during courtship.
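The abstract does not detail how body parts are assigned to animals, but the part-to-animal assignment problem it names is often posed as bipartite matching between detections. The sketch below is a minimal illustration of that general idea only, not SLEAP's actual method; the function names and the distance-based affinity score are assumptions for illustration.

    # Hypothetical sketch: grouping body-part detections into per-animal
    # instances via bipartite matching. NOT SLEAP's implementation; the
    # affinity function and all names here are illustrative assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def group_parts(anchors, parts, affinity):
        """Assign each candidate part to at most one anchor (animal).

        anchors:  (n_animals, 2) array, e.g. thorax detections
        parts:    (n_candidates, 2) array of one body part type
        affinity: callable (anchor_xy, part_xy) -> score; higher means the
                  two detections more likely belong to the same animal
        Returns a list mapping each anchor index to a part index (or None).
        """
        # Build a score matrix and solve it as a bipartite matching problem.
        scores = np.array([[affinity(a, p) for p in parts] for a in anchors])
        rows, cols = linear_sum_assignment(-scores)  # maximize total affinity
        assignment = [None] * len(anchors)
        for r, c in zip(rows, cols):
            assignment[r] = int(c)
        return assignment

    # Toy example: two flies, two detected "head" candidates.
    thoraxes = np.array([[10.0, 10.0], [50.0, 52.0]])
    heads = np.array([[48.0, 50.0], [12.0, 11.0]])
    inv_dist = lambda a, p: -np.linalg.norm(a - p)  # closer = higher affinity
    print(group_parts(thoraxes, heads, inv_dist))   # -> [1, 0]

In practice, a learned affinity (rather than raw distance) is what lets such a matching step resolve closely interacting animals.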
Presenters
- Talmo Pereira, Princeton Neuroscience Institute, Princeton University
Authors
- Talmo Pereira, Princeton Neuroscience Institute, Princeton University
- Shruthi Ravindranath, Princeton Neuroscience Institute, Princeton University
- Nathaniel Tabris, Princeton Neuroscience Institute, Princeton University
- Junyu Li, Princeton Neuroscience Institute, Princeton University
- Mala Murthy, Princeton Neuroscience Institute, Princeton University
- Joshua Shaevitz, Physics and the Lewis-Sigler Institute, Princeton University