Lightning Pose is an end-to-end toolkit for robust multi-view and single-view animal pose estimation, built on transformer architectures. For multi-view datasets, it uses Multi-View Transformers with patch-masking training to learn the geometric relationships between views, yielding strong performance under occlusion (Aharon, Lee et al. 2025). For single-view datasets, it leverages temporal context and learned plausibility constraints for strong performance in challenging scenarios (Biderman, Whiteway et al. 2024, Nature Methods). A rich GUI supports the end-to-end workflow: labeling, model management, and evaluation.
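To give a rough sense of the patch-masking idea, the sketch below zeroes out a random subset of ViT-style patch tokens during training, so the model must infer the masked regions from the surviving patches (and, in the multi-view case, from other views). This is a minimal conceptual illustration, not Lightning Pose internals; the function, tensor shapes, and masking ratio are all assumptions.

```python
import torch

def mask_random_patches(patch_tokens: torch.Tensor, mask_ratio: float = 0.5) -> torch.Tensor:
    """Zero out a random subset of patch tokens (conceptual sketch only).

    patch_tokens: (batch, num_patches, embed_dim) patch embeddings.
    mask_ratio: approximate fraction of patches to mask per image.
    """
    batch, num_patches, _ = patch_tokens.shape
    # sample a per-image boolean mask: True = keep, False = mask out
    keep = torch.rand(batch, num_patches, device=patch_tokens.device) > mask_ratio
    # broadcast the mask over the embedding dimension
    return patch_tokens * keep.unsqueeze(-1)
```

Applied to each view's tokens before cross-view attention, masking of this sort encourages the network to fill in patches occluded in one view using information from the others.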
Lightning Pose requires a Linux or WSL environment with an NVIDIA GPU.
For users without access to a local NVIDIA GPU, we highly recommend the Lightning AI cloud environment, which provides persistent, browser-based "Studios" with on-demand access to powerful GPUs and pre-configured CUDA environments.
Install dependencies:
```bash
sudo apt install ffmpeg
# Verify nvidia-driver with CUDA 12+
nvidia-smi
```

In a clean Python virtual environment (conda or another virtual environment manager), run:
```bash
pip install lightning-pose lightning-pose-app
```

That's it! To run the app:
```bash
litpose run_app
```

Please see the installation guide for more detailed instructions, and feel free to reach out to us on Discord if you run into any hiccups.
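Lightning Pose runs on PyTorch, so one quick sanity check after installation is to confirm the GPU is visible from Python (assuming `torch` was pulled in as a dependency):

```python
import torch

# should print True and your GPU's name on a working CUDA setup
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU found")
```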
To get started with Lightning Pose, follow the guides on our documentation:
- Create your first project using the app
- or follow the CLI User Guides (Singleview, Multiview).
The Lightning Pose team also actively develops the Ensemble Kalman Smoother (EKS), a simple and performant post-processor that works with any pose estimation package, including Lightning Pose, DeepLabCut, and SLEAP.
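At a high level, an ensemble Kalman smoother treats the predictions from several networks as noisy observations of a latent trajectory and smooths them over time. The sketch below illustrates that general idea for a single keypoint coordinate; it is a minimal conceptual example, not the EKS implementation or its API. The random-walk state model, the use of ensemble variance as observation noise, and the `process_var` parameter are assumptions made for illustration.

```python
import numpy as np

def smooth_keypoint(ensemble_preds: np.ndarray, process_var: float = 1.0) -> np.ndarray:
    """Kalman-smooth one keypoint coordinate over time (conceptual sketch).

    ensemble_preds: (num_models, num_frames) predictions for a single
    x or y coordinate from an ensemble of pose models.
    """
    obs = ensemble_preds.mean(axis=0)             # per-frame observation
    obs_var = ensemble_preds.var(axis=0) + 1e-6   # ensemble disagreement as obs noise
    n = obs.shape[0]

    # forward Kalman filter with a random-walk (constant-position) state model
    m = np.empty(n); p = np.empty(n)              # filtered means / variances
    m_pred = np.empty(n); p_pred = np.empty(n)    # one-step-ahead predictions
    m[0], p[0] = obs[0], obs_var[0]
    m_pred[0], p_pred[0] = m[0], p[0]
    for t in range(1, n):
        m_pred[t] = m[t - 1]
        p_pred[t] = p[t - 1] + process_var
        k = p_pred[t] / (p_pred[t] + obs_var[t])  # Kalman gain
        m[t] = m_pred[t] + k * (obs[t] - m_pred[t])
        p[t] = (1.0 - k) * p_pred[t]

    # backward Rauch-Tung-Striebel smoothing pass
    ms = m.copy()
    for t in range(n - 2, -1, -1):
        g = p[t] / (p[t] + process_var)
        ms[t] = m[t] + g * (ms[t + 1] - m_pred[t + 1])
    return ms
```

For example, stacking x-coordinate predictions from three models as a `(3, num_frames)` array and passing them to `smooth_keypoint` yields a single smoothed trajectory; frames where the ensemble disagrees are pulled more strongly toward the temporal prior.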
Lightning Pose is primarily maintained by Karan Sikka (Columbia University), Matt Whiteway (Columbia University), and Dan Biderman (Stanford University).
Lightning Pose is under active development and we welcome community contributions. Whether you want to implement some of your own ideas or help out with our development roadmap, please get in touch with us on Discord (see contributing guidelines here).
