farhanaugustine/IntegraPose

Behavior & Pose Analytics — in one desktop application

Computational ethology has matured into a rich ecosystem — DeepLabCut and SLEAP for pose, B-SOiD and VAME for unsupervised discovery, BORIS for manual coding, commercial suites for regulated end-to-end work. Each is excellent at what it does; the friction usually lives in the seams between them. IntegraPose addresses that gap with one desktop application that handles pose estimation, multi-animal tracking, ROI- and bout-level analytics, and optional sub-behavior discovery — backed by a curated plugin ecosystem for the cases the core workflow doesn't cover. The aim is to give labs without dedicated engineering support a unified, reproducible path from raw video to defensible analytics, with a single time-locked stream of pose and behavior data underneath.

📚 Documentation

The full manual lives under docs/ and is built with MkDocs Material; the hosted version is deployed to GitHub Pages from the same source.

  • 📖 Comprehensive User Guide — covers the GUI, every tab, the plugin ecosystem, advanced YOLO model customization, and example backbones for users who want more control over the model.
  • 🚀 Quick Start — get from a fresh install to a first run in minutes.
  • 🛠️ Installation — environment setup, optional plugin stack, Albumentations install path.

To build the docs locally:

pip install mkdocs-material
mkdocs serve

Then open http://127.0.0.1:8000/ in your browser.

✨ What IntegraPose Covers

| Area | What you do | Typical output |
| --- | --- | --- |
| Data preparation | Extract frames, crop videos (optional), organize inputs | Clean training- or inference-ready media |
| Project setup | Define keypoints, behaviors, skeleton, dataset paths | Reusable project scaffold and dataset.yaml |
| Pose training | Train YOLO pose checkpoints from the GUI | Weights, metrics, exportable model artifacts |
| Custom architectures | Edit the model .yaml and train custom backbones / necks / heads via the CLI | Tailored YOLO architectures for your assay |
| Inference | Run pose or detection inference on videos or folders | YOLO labels, optional media, motion summaries |
| Bout analytics | Compute bouts, ROI metrics, object interactions, review-ready exports | CSV, Excel, reviewed analytics outputs |
| Batch processing | Run inference + analytics across many videos with shared settings | Per-video output folders and manifests |
| Sub-behavior discovery | Split known YOLO classes into the sub-behaviors actually present in your data, score them, name them, optionally export classifier-ready clip folders | Per-frame sub-cluster labels, bouts CSV, candidate scores, named clip folders |
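Conceptually, the bout computation in the table above amounts to collapsing per-frame behavior labels into runs that exceed a minimum duration. A minimal sketch of that idea (not IntegraPose's actual implementation; the function name and threshold are illustrative):

```python
from itertools import groupby

def find_bouts(labels, fps=30.0, min_frames=3):
    """Collapse per-frame behavior labels into bouts.

    A bout is a run of >= min_frames consecutive frames with the same
    label. Returns (label, start_frame, end_frame, duration_seconds).
    """
    bouts = []
    frame = 0
    for label, run in groupby(labels):
        n = len(list(run))  # length of this run of identical labels
        if n >= min_frames:
            bouts.append((label, frame, frame + n - 1, n / fps))
        frame += n
    return bouts

# 5 frames of walking, a 2-frame rear (too short to count), 4 more of walking
labels = ["walk"] * 5 + ["rear"] * 2 + ["walk"] * 4
print(find_bouts(labels, fps=10.0, min_frames=3))
# -> [('walk', 0, 4, 0.5), ('walk', 7, 10, 0.4)]
```

Real pipelines typically add smoothing over label flickers before run-length encoding; this sketch only shows the core grouping step.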

🧩 Plugin Ecosystem

IntegraPose ships a set of curated plugins that extend the core 7-tab workflow without bloating it. Plugins are opt-in (Plugins → Manage Plugins…) and launch in their own windows, covering use cases the core workflow does not.

Plugin status — research in progress. The plugin ecosystem evolves with active research. Some plugins are stable, others are works in progress, and the set may change as research priorities shift. Pin to a commit hash if you depend on a specific plugin for an in-flight project. See the Plugin Catalog for current per-plugin guides.

| Category | Plugins |
| --- | --- |
| Dataset creation | Assisted Pose Curation · AutoLabel Forge (GroundingDINO + SAM) · Dataset Augmentor Lab |
| Behavior & sequence modeling | BehaviorScope Toolkit · Tandem YOLO Toolkit |
| Domain-specific analytics | Gait & Kinematic Dashboard · Zone Counter |
| Exploration & review | EDA Tool |

Full catalog with per-plugin guides: Plugin Catalog.

🎯 What You Can Do With IntegraPose

  • Gait & kinematic analysis — analyze animal gaits to extract stride length, speed, limb angles, and other locomotion signatures. A simple plugin is available for this purpose; a more advanced standalone gait-analysis project built on a YOLO-pose model is also available: Gait_Analysis_YOLO.
  • Real-time behavior application — run closed-loop experiments, biofeedback, and live monitoring systems. Build your own plugins and integrate them as you wish.
  • Rodent assay workflows — analyze rodent behavior using bouts, ROI occupancy, and inter-animal interactions, offline or in real time with a webcam.
  • Sports & movement analytics — analyze athletic performance, technique, and rehabilitation using pose estimation and movement analysis.
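The gait and kinematic metrics mentioned above reduce to simple geometry on keypoint coordinates. A hedged sketch of two such measures (function names and the pixel-coordinate convention are assumptions for illustration, not IntegraPose's API):

```python
import math

def speed_px_per_s(xs, ys, fps):
    """Mean centroid speed over a trajectory, in pixels per second."""
    pts = list(zip(xs, ys))
    dist = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return dist * fps / (len(pts) - 1)

def limb_angle_deg(hip, knee, ankle):
    """Angle at the knee (degrees) formed by hip-knee-ankle keypoints."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # clamp to [-1, 1] to guard against floating-point drift
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# a right-angle knee configuration
print(round(limb_angle_deg((0, 0), (0, 1), (1, 1)), 1))  # -> 90.0
```

Converting pixels to physical units requires a calibration factor (e.g., arena width in pixels), which is omitted here.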

πŸ› οΈ Install (Conda env is recommended)

  1. Install Python 3.9–3.11 (3.11 recommended).
  2. Install the PyTorch build that matches your hardware (pytorch.org).
  3. From the repository root:
pip install .

For the optional plugin stack:

pip install ".[plugins]"

For a contributor environment with dev tools and the plugin stack (i.e., install everything):

pip install ".[dev]"

For Albumentations support (kept separate so it doesn't replace the GUI's pinned OpenCV):

python tools/install_albumentations_gui.py

Recommended order in a fresh conda environment:

  1. Create a new conda environment.
  2. Install PyTorch first, choosing the build that matches your hardware, e.g. CPU-only: pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
  3. pip install ".[plugins]"
  4. python tools/install_albumentations_gui.py

🚀 Launch

python -m integra_pose
# or
integrapose

πŸ—ΊοΈ Workflow At A Glance

| Step | Place in the app |
| --- | --- |
| 1 | Data Preprocessing |
| 2 | Setup & Annotation |
| 3 | Model Training |
| 4 | Inference |
| 5 | Webcam Inference |
| 6 | Bout Analytics |
| 7 | Behavior Clustering |
| Supporting tools | Log Console, Batch Processing Wizard, optional plugins |
Raw videos
  → Data Preprocessing
  → Setup & Annotation
  → Model Training (or imported model, or custom architecture)
  → Inference or Batch Processing Wizard
  → Bout Analytics
  → Behavior Clustering (optional)
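ROI metrics such as occupancy, produced in the Bout Analytics step of the flow above, reduce to a per-frame inside-the-region test on animal positions. A minimal sketch assuming a rectangular ROI (IntegraPose's actual ROI shapes and function names may differ; this is illustrative only):

```python
def roi_occupancy(centroids, roi, fps=30.0):
    """Occupancy of a rectangular ROI given as (x_min, y_min, x_max, y_max).

    centroids is a sequence of (x, y) animal positions, one per frame.
    Returns (fraction of frames inside, total seconds inside).
    """
    x0, y0, x1, y1 = roi
    inside = [x0 <= x <= x1 and y0 <= y <= y1 for x, y in centroids]
    n_in = sum(inside)
    return n_in / len(centroids), n_in / fps

# 4 frames at 2 fps; frames 0 and 2 fall inside the 4x4 ROI
print(roi_occupancy([(1, 1), (5, 5), (2, 2), (9, 9)], (0, 0, 4, 4), fps=2.0))
# -> (0.5, 1.0)
```

Polygonal ROIs would swap the rectangle test for a point-in-polygon test; the per-frame accumulation is otherwise identical.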

🧭 Pick Your Starting Path

| Goal | Best guide |
| --- | --- |
| Already have a detection model and want ROI/bout analytics | Detection-Only Workflow |
| Want a full pose workflow inside IntegraPose | Pose Model Workflow |
| Process many videos at once | Batch Processing Wizard |
| Design a custom YOLO architecture for your assay | Customizing the YOLO Model |
| Browse optional plugins | Plugin Catalog |

πŸ–ΌοΈ Showcase

Examples of IntegraPose in action — simultaneous keypoint tracking and behavior classification.

| OpenField Video | Source | Behaviors |
| --- | --- | --- |
| Github_BehaviorDepot_1 | BehaviorDEPOT | Walking, Wall-Rearing / Supported Rearing |
| Github_BehaviorDepot_2 | BehaviorDEPOT | Walking, Wall-Rearing / Supported Rearing |
| Github_BehaviorDepot_3 | BehaviorDEPOT | Walking, Grooming |
| Github_DeerMice_4 | Temporal_Behavior_Analysis | Exploring/Walking, Wall-Rearing / Supported Rearing |
| Github_DeerMice_5 | Temporal_Behavior_Analysis | Wall-Rearing / Supported Rearing, Jump |
| Github_BehaviorDepot_6 | BehaviorDEPOT | Ambulatory/Walking, Object Exploration, Object Mounting |
| Github_BehaviorDepot_7 | BehaviorDEPOT | Ambulatory/Walking, Object Exploration |
| Github_C57B_8 | Self | Ambulatory/Walking, Nose-Poking, Wall-Rearing / Supported Rear |
| Github_CHKO_9 | Self | Ambulatory/Walking, Wall-Rearing / Supported Rear |

📌 Status, Stability & Roadmap

IntegraPose is active research software. The core workflow (Tabs 1–7) is stable enough for ongoing lab use; individual features and plugins evolve as research needs change. Feel free to fork and modify for your specific needs, but be aware that updates may introduce breaking changes. We encourage you to submit a pull request to share your improvements with the community!

More specifically:

  • The set of bundled plugins reflects the current shipped state. Plugins may be added, modified, deprecated, or removed at any time without notice.
  • Public interfaces (CLI commands, file formats, project-bundle layouts, plugin APIs) are subject to change while the project is iterating. If you depend on a specific plugin or output format for an in-flight project, pin to a commit hash so a future change does not surprise your pipeline.
  • Documentation, tutorials, and example outputs are kept in sync with the current state of main. Older guides may reference removed features; the User Guide under docs/ is the authoritative source.
  • No warranties, express or implied, are provided. See the AGPL-3.0 license for the full liability disclaimer.

πŸ“ Citation

If IntegraPose contributes to your analysis pipeline, please cite:

Augustine, F., O'Sullivan, S., Murray, V., Ogura, T., Lin, W., & Singer, H. S. (2025). IntegraPose: A unified framework for simultaneous pose estimation and behavior classification. Neuroscience, 590, 1–22. https://doi.org/10.1016/j.neuroscience.2025.10.020
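For reference managers, the article above can be expressed as a BibTeX entry (fields transcribed from the citation; the entry key is arbitrary):

```bibtex
@article{augustine2025integrapose,
  author  = {Augustine, F. and O'Sullivan, S. and Murray, V. and Ogura, T. and Lin, W. and Singer, H. S.},
  title   = {IntegraPose: A unified framework for simultaneous pose estimation and behavior classification},
  journal = {Neuroscience},
  volume  = {590},
  pages   = {1--22},
  year    = {2025},
  doi     = {10.1016/j.neuroscience.2025.10.020}
}
```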

DOI for the software release: 10.5281/zenodo.15565090.

📄 License

IntegraPose is provided under the GNU Affero General Public License v3.0 (AGPL-3.0).

What this means for you

  • Using the app (research, analysis, publications): Totally fine. You can run IntegraPose internally and publish results however you like; the AGPL doesn't limit what you learn or publish.
  • Modifying or redistributing IntegraPose: If you share the altered program or host it for others (e.g., as a web service), you must make your modified source code available under the AGPL as well.
  • Integrating Ultralytics: The AGPL choice keeps IntegraPose aligned with Ultralytics' AGPL license. If your group has a commercial exception from Ultralytics, you can apply it to IntegraPose as well.

Need more detail? See GNU's AGPL overview.

πŸ™ Acknowledgments

IntegraPose builds on the open-source ecosystem. We extend our gratitude to:

  • The Ultralytics team for the YOLO training and inference backbone.
  • The Roboflow Supervision team for visualization and overlay utilities.
  • PyTorch, OpenCV, NumPy, SciPy, Pandas, Matplotlib, Pillow, HDBSCAN, UMAP, and the broader scientific Python community.
  • Public datasets that make benchmarking possible — including BehaviorDEPOT, Temporal_Behavior_Analysis (used in the showcase above), and several MARS Caltech multi-mouse and mouse-strain datasets available through Harvard Dataverse (including the Kumar Lab Mouse Strain Survey Dataset, EZM video logs, EPM video logs, and Y-Maze video logs).

About

IntegraPose is an open-source toolkit for training single, end-to-end models that perform simultaneous behavioral classification and keypoint estimation. It leverages YOLO-Pose architectures and transfer learning, enabling researchers to create custom models via a streamlined annotation workflow.
