DeepLabCut
Type: Software
Keywords: Deep learning, Computer vision, Keypoints, Pose estimation, Motion capture, Behavior, Behavioral analysis
DeepLabCut: An open source toolbox for robust animal pose estimation
Quantifying behavior is crucial for many applications in neuroscience, genetics, and biology. Videography provides an easy way to observe and record animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time-consuming.
DeepLabCut offers an efficient method for markerless pose estimation based on transfer learning with deep neural networks. It matches human labeling accuracy with minimal training data (typically 50-200 frames). Its versatility has been demonstrated by tracking various body parts in multiple species across a broad collection of behaviors. The package is open source, fast, robust, can be used to compute 3D pose estimates, and is actively maintained.
*Allows pose estimation of user-defined keypoints with little training data (Mathis et al., Nature Neuroscience 2018).
*Supports multi-animal pose tracking, including individual identification (Lauer et al., Nature Methods 2022).
*Supports real-time markerless motion capture for animals (Kane et al., eLife 2020).
*Reaching kinematics, facial keypoint tracking, and pupillometry (e.g., Privitera et al., Nature Protocols 2020).
*Kinematic gait analysis of cheetahs (Joska et al., ICRA 2021) and mice (Cregg et al., Nature Neuroscience 2020).
*Measurement of social behaviors in mice and marmosets, as well as fish schools (Lauer et al., Nature Methods 2022).
SPECIES
*Flies
*Fish
*Mice
*Rats
*Horses
*Cheetahs
*Primates (humans, marmosets, macaques)
*Deep-learning method for detecting and tracking keypoints (motion capture).
*Fully GUI-based workflow, with a Python API for scripting.
*Works broadly across species.
*Actively developed (see GitHub and publications).
*Widely used, with an active user community.
*Python experience is helpful for using all features.
*Works on all platforms (Windows, macOS, Linux).
*Best performance with a GPU (but a GPU is not required).
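The Python API mirrors the GUI-based workflow: create a project, label frames, train, and analyze. A minimal sketch of a typical single-animal project, assuming DeepLabCut is installed; the project name, experimenter, and video path below are hypothetical placeholders:

```python
import deeplabcut

# Create a project from one or more videos; returns the path to its config.yaml.
# "reach-task", "alice", and the video path are placeholder values.
config = deeplabcut.create_new_project(
    "reach-task", "alice", ["videos/reach1.mp4"], copy_videos=True
)

# Select frames to label, then annotate keypoints in the labeling GUI.
deeplabcut.extract_frames(config, mode="automatic")
deeplabcut.label_frames(config)

# Bundle the labels into a training set, train, and evaluate the network.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)

# Run the trained network on videos; predictions are saved next to them.
deeplabcut.analyze_videos(config, ["videos/reach1.mp4"])
```

With a labeled training set of roughly 50-200 frames (as noted above), this loop is typically enough to reach usable tracking; frames where the network struggles can be extracted, relabeled, and used to refine the model.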
PUBLICATION(S)
*Mathis et al., 2018, DeepLabCut: markerless pose estimation of user-defined body parts with deep learning, Nature Neuroscience, https://doi.org/10.1038/s41593-018-0209-y
*Nath et al., 2019, Using DeepLabCut for 3D markerless pose estimation across species and behaviors, Nature Protocols, https://rdcu.be/bHpHN
*Kane et al., 2020, Real-time, low-latency closed-loop feedback using markerless posture tracking, eLife, https://elifesciences.org/articles/61909
*Mathis et al., 2020, A Primer on Motion Capture with Deep Learning: Principles, Pitfalls, and Perspectives, Neuron, https://www.sciencedirect.com/science/article/pii/S0896627320307170
*Joska et al., 2021, AcinoSet: A 3D pose estimation dataset and baseline models for cheetahs in the wild, IEEE International Conference on Robotics and Automation (ICRA), https://ieeexplore.ieee.org/document/9561338
*Mathis et al., 2021, Pretraining boosts out-of-domain robustness for pose estimation, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), https://openaccess.thecvf.com/content/WACV2021/html/Mathis_Pretraining_Boosts_Out-of-Domain_Robustness_for_Pose_Estimation_WACV_2021_paper.html
*Lauer et al., 2022, Multi-animal pose estimation, identification and tracking with DeepLabCut, Nature Methods, https://www.nature.com/articles/s41592-022-01443-0
PRINCIPAL INVESTIGATOR(S)
Alexander Mathis, Assistant Professor
Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
TEAM / COLLABORATOR(S)
Catherine Dulac, Professor, Harvard University
Guoping Feng, Professor, MIT
George Lauder, Professor, Harvard University
Mackenzie Mathis, Assistant Professor, Swiss Federal Institute of Technology Lausanne (EPFL)
Venkatesh N. Murthy, Professor, Harvard University
FUNDING SOURCE(S)
NIH 2R01HD082131
NIH 1R01NS116593-01
Chan Zuckerberg Initiative DAF
Rowland Institute at Harvard University
Swiss National Science Foundation
EPFL