
SpinePose Inference Library

Lightweight CLI and Python API for spine-aware human pose estimation in the wild.

SpinePose is an inference library for spine-aware 2D human pose estimation in the wild. It provides a simple CLI and Python API for running inference on images and videos using pretrained models presented in our papers "Towards Unconstrained 2D Pose Estimation of the Human Spine" (CVPR Workshops 2025) and "SIMSPINE: A Biomechanics-Aware Simulation Framework for 3D Spine Motion Annotation and Benchmarking" (CVPR 2026). Our models predict the SpineTrack skeleton hierarchy comprising 37 keypoints, including 9 directly along the spine chain in addition to the standard body joints.

Getting Started

Recommended Python Version: 3.9–3.12

For quick spinal keypoint estimation, we release optimized ONNX models via the spinepose package on PyPI:

pip install spinepose

On Linux/Windows with CUDA available, install the GPU version:

pip install spinepose[gpu]

Using the CLI

usage: spinepose [-h] (--version | --input_path INPUT_PATH) [--vis-path VIS_PATH] [--save-path SAVE_PATH] [--mode {xlarge,large,medium,small}] [--nosmooth] [--spine-only] [--model-version MODEL_VERSION]

SpinePose Inference

options:
  -h, --help            show this help message and exit
  --version, -V         Print the version and exit.
  --input_path INPUT_PATH, -i INPUT_PATH
                        Path to the input image or video
  --vis-path VIS_PATH, -o VIS_PATH
                        Path to save the output image or video
  --save-path SAVE_PATH, -s SAVE_PATH
                        Save predictions in OpenPose format (.json for image or folder for video).
  --mode {xlarge,large,medium,small}, -m {xlarge,large,medium,small}
                        Model size. Choose from: xlarge, large, medium, small (default: medium)
  --nosmooth            Disable keypoint smoothing for video inference (default: enabled)
  --spine-only          Only use 9 spine keypoints (default: use all 37 keypoints)
  --model-version MODEL_VERSION
                        Model version to use. One of: 'latest', 'v2', 'v1' (default: latest)

For example, to run inference on a video and save only spine keypoints in OpenPose format:

spinepose --input_path path/to/video.mp4 --save-path output_path.json --spine-only

This automatically downloads the model weights (if not already present) and outputs the annotated image or video. Use spinepose -h to view all available options, including GPU usage and confidence thresholds.
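Predictions saved with --save-path use the OpenPose JSON layout, in which each detected person carries a flat [x, y, confidence, ...] list under pose_keypoints_2d. As a stdlib-only sketch for reading these files back (the helper name is ours, not part of the package, and it assumes the standard OpenPose schema):

```python
import json

def load_openpose_keypoints(path):
    """Parse an OpenPose-style JSON file into per-person (x, y, confidence) triples."""
    with open(path) as f:
        data = json.load(f)
    people = []
    for person in data.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        # Flat list [x1, y1, c1, x2, y2, c2, ...] -> list of 3-tuples
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```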

Using the Python API

import cv2
from spinepose import SpinePoseEstimator

# Initialize estimator (downloads ONNX model if not found locally)
estimator = SpinePoseEstimator(device='cuda')

# Perform inference on a single image
image = cv2.imread('path/to/image.jpg')
keypoints, scores = estimator(image)
visualized = estimator.visualize(image, keypoints, scores)
cv2.imwrite('output.jpg', visualized)

Or, for a simplified interface:

from spinepose.inference import infer_image, infer_video

# Single image inference
results = infer_image('path/to/image.jpg', vis_path='output.jpg')

# Video inference with optional temporal smoothing
results = infer_video('path/to/video.mp4', vis_path='output_video.mp4', use_smoothing=True)
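The package does not document its smoothing filter here, so purely as an illustration of what use_smoothing does conceptually, temporal smoothing can be approximated by an exponential moving average over per-frame keypoints. This sketch is not the library's actual implementation:

```python
class EMASmoother:
    """Exponential moving average over per-frame keypoint coordinates.

    alpha close to 1.0 trusts the newest frame; alpha close to 0.0
    favors the smoothed history, trading latency for stability.
    """

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = None  # smoothed (x, y) per keypoint

    def update(self, keypoints):
        """Blend one frame's (x, y) keypoints into the running average."""
        if self.state is None:
            self.state = [tuple(p) for p in keypoints]
        else:
            self.state = [
                (self.alpha * x + (1 - self.alpha) * sx,
                 self.alpha * y + (1 - self.alpha) * sy)
                for (x, y), (sx, sy) in zip(keypoints, self.state)
            ]
        return self.state
```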

Release Notes

v2.0.2

  • Added detector selection in CLI/API: use --detector rfdetr|yolox (CLI) or detector='rfdetr'|'yolox' (Python).
  • Integrated RF-DETR as an alternative detector with YOLOX-compatible inference interfaces.

v2.0.1

  • Added model family selection in CLI/API.
  • CLI: use --model-version v1|v2|latest (for example, --model-version v1).
  • Python API: use model_version='v1'|'v2'|'latest' (for example, SpinePoseEstimator(model_version='v1')).
  • v1 loads SpineTrack-trained models; v2 and latest load SIMSPINE-trained V2 models (latest is default).

Model Zoo

SpinePose V2

| Method | Training Data | APB | ARB | APS | ARS | AUC | Usage |
|---|---|---|---|---|---|---|---|
| spinepose_v2_small | SpineTrack + SIMSPINE | 0.788 | 0.815 | 0.920 | 0.929 | 0.790 | --mode small --model-version v2 |
| spinepose_v2_medium | SpineTrack + SIMSPINE | 0.821 | 0.846 | 0.928 | 0.937 | 0.798 | --mode medium --model-version v2 |
| spinepose_v2_large | SpineTrack + SIMSPINE | 0.840 | 0.862 | 0.917 | 0.927 | 0.803 | --mode large --model-version v2 |

APB/ARB are evaluated on SpineTrack; APS/ARS/AUC on SIMSPINE.

SpinePose V1

| Method | Training Data | APB | ARB | APS | ARS | AUC | Usage |
|---|---|---|---|---|---|---|---|
| spinepose_v1_small | SpineTrack | 0.792 | 0.821 | 0.896 | 0.908 | 0.611 | --mode small --model-version v1 |
| spinepose_v1_medium | SpineTrack | 0.840 | 0.864 | 0.914 | 0.926 | 0.633 | --mode medium --model-version v1 |
| spinepose_v1_large | SpineTrack | 0.854 | 0.877 | 0.910 | 0.922 | 0.633 | --mode large --model-version v1 |
| spinepose_v1_xlarge | SpineTrack | 0.759 | 0.801 | 0.893 | 0.910 | – | --mode xlarge --model-version v1 |

APB/ARB are evaluated on SpineTrack; APS/ARS/AUC on SIMSPINE.

Related Publications and Citations

If you use this work in your research, please cite the following related publications:

Towards Unconstrained 2D Pose Estimation of the Human Spine (CVSports @ CVPR 2025)

Home Dataset Conference arXiv

Abstract

We present SpineTrack, the first comprehensive dataset for 2D spine pose estimation in unconstrained settings, addressing a crucial need in sports analytics, healthcare, and realistic animation. Existing pose datasets often simplify the spine to a single rigid segment, overlooking the nuanced articulation required for accurate motion analysis. In contrast, SpineTrack annotates nine detailed spinal keypoints across two complementary subsets: a synthetic set comprising 25k annotations created using Unreal Engine with biomechanical alignment through OpenSim, and a real-world set comprising over 33k annotations curated via an active learning pipeline that iteratively refines automated annotations with human feedback. This integrated approach ensures anatomically consistent labels at scale, even for challenging, in-the-wild images. We further introduce SpinePose, extending state-of-the-art body pose estimators using knowledge distillation and an anatomical regularization strategy to jointly predict body and spine keypoints. Our experiments in both general and sports-specific contexts validate the effectiveness of SpineTrack for precise spine pose estimation, establishing a robust foundation for future research in advanced biomechanical analysis and 3D spine reconstruction in the wild.


SpineTrack Dataset

SpineTrack is available on HuggingFace. The dataset comprises:

  • SpineTrack-Real A collection of real-world images annotated with nine spinal keypoints in addition to standard body joints. An active learning pipeline, combining pretrained neural annotators and human corrections, refines keypoints across diverse poses.

  • SpineTrack-Unreal A synthetic subset rendered using Unreal Engine, paired with precise ground-truth from a biomechanically aligned OpenSim model. These synthetic images facilitate pretraining and complement real-world data.

To download:

git lfs install
git clone https://huggingface.co/datasets/saifkhichi96/spinetrack

Alternatively, use wget to download the dataset directly:

wget https://huggingface.co/datasets/saifkhichi96/spinetrack/resolve/main/annotations.zip
wget https://huggingface.co/datasets/saifkhichi96/spinetrack/resolve/main/images.zip

In both cases, you obtain two zip archives, annotations.zip (24.8 MB) and images.zip (19.4 GB), which can be unzipped to produce the following structure:

spinetrack
├── annotations/
│   ├── person_keypoints_train-real-coco.json
│   ├── person_keypoints_train-real-yoga.json
│   ├── person_keypoints_train-unreal.json
│   └── person_keypoints_val2017.json
└── images/
    ├── train-real-coco/
    ├── train-real-yoga/
    ├── train-unreal/
    └── val2017/

All annotations are in COCO format and can be used with standard pose estimation libraries.
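As a stdlib-only sketch of working with these files (the helper below is ours, not part of the dataset tooling), COCO-format keypoint annotations store each person's keypoints as a flat [x, y, visibility, ...] list, which can be grouped by image and reshaped into triples:

```python
import json

def keypoints_by_image(annotation_path):
    """Group COCO-format keypoint annotations by image id.

    Each annotation's 'keypoints' field is a flat [x, y, visibility, ...]
    list; we reshape it into (x, y, v) 3-tuples, one list per person.
    """
    with open(annotation_path) as f:
        coco = json.load(f)
    grouped = {}
    for ann in coco.get("annotations", []):
        flat = ann.get("keypoints", [])
        triples = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        grouped.setdefault(ann["image_id"], []).append(triples)
    return grouped
```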

Evaluation

We benchmark SpinePose V1 models against state-of-the-art lightweight pose estimation methods on COCO, Halpe, and our SpineTrack dataset. The results are summarized below, with SpinePose models listed alongside the baselines. Only 26 body keypoints are used for Halpe evaluations.

| Method | Train Data | Kpts | COCO (AP/AR) | Halpe26 (AP/AR) | Body (AP/AR) | Feet (AP/AR) | Spine (AP/AR) | Overall (AP/AR) | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|---|---|---|---|
| SimCC-MBV2 | COCO | 17 | 62.0/67.8 | 33.2/43.9 | 72.1/75.6 | 0.0/0.0 | 0.0/0.0 | 0.1/0.1 | 2.29 | 0.31 |
| RTMPose-t | Body8 | 26 | 65.9/71.3 | 68.0/73.2 | 76.9/80.0 | 74.1/79.7 | 0.0/0.0 | 15.8/17.9 | 3.51 | 0.37 |
| RTMPose-s | Body8 | 26 | 69.7/74.7 | 72.0/76.7 | 80.9/83.6 | 78.9/83.5 | 0.0/0.0 | 17.2/19.4 | 5.70 | 0.70 |
| SpinePose-s | SpineTrack | 37 | 68.2/73.1 | 70.6/75.2 | 79.1/82.1 | 77.5/82.9 | 89.6/90.7 | 84.2/86.2 | 5.98 | 0.72 |
| SimCC-ViPNAS | COCO | 17 | 69.5/75.5 | 36.9/49.7 | 79.6/83.0 | 0.0/0.0 | 0.0/0.0 | 0.2/0.2 | 8.65 | 0.80 |
| RTMPose-m | Body8 | 26 | 75.1/80.0 | 76.7/81.3 | 85.5/87.9 | 84.1/88.2 | 0.0/0.0 | 19.4/21.4 | 13.93 | 1.95 |
| SpinePose-m | SpineTrack | 37 | 73.0/77.5 | 75.0/79.2 | 84.0/86.4 | 83.5/87.4 | 91.4/92.5 | 88.0/89.5 | 14.34 | 1.98 |
| RTMPose-l | Body8 | 26 | 76.9/81.5 | 78.4/82.9 | 86.8/89.2 | 86.9/90.0 | 0.0/0.0 | 20.0/22.0 | 28.11 | 4.19 |
| RTMW-m | Cocktail14 | 133 | 73.8/78.7 | 63.8/68.5 | 84.3/86.7 | 83.0/87.2 | 0.0/0.0 | 6.2/7.6 | 32.26 | 4.31 |
| SimCC-ResNet50 | COCO | 17 | 72.1/78.2 | 38.7/51.6 | 81.8/85.2 | 0.0/0.0 | 0.0/0.0 | 0.2/0.2 | 36.75 | 5.50 |
| SpinePose-l | SpineTrack | 37 | 75.2/79.5 | 77.0/81.1 | 85.4/87.7 | 85.5/89.2 | 91.0/92.2 | 88.4/90.0 | 28.66 | 4.22 |
| SimCC-ResNet50* | COCO | 17 | 73.4/79.0 | 39.8/52.4 | 83.2/86.2 | 0.0/0.0 | 0.0/0.0 | 0.3/0.3 | 43.29 | 12.42 |
| RTMPose-x* | Body8 | 26 | 78.8/83.4 | 80.0/84.4 | 88.6/90.6 | 88.4/91.4 | 0.0/0.0 | 21.0/22.9 | 50.00 | 17.29 |
| RTMW-l* | Cocktail14 | 133 | 75.6/80.4 | 65.4/70.1 | 86.0/88.3 | 85.6/89.2 | 0.0/0.0 | 8.1/8.1 | 57.20 | 7.91 |
| RTMW-l* | Cocktail14 | 133 | 77.2/82.3 | 66.6/71.8 | 87.3/89.9 | 88.3/91.3 | 0.0/0.0 | 8.6/8.6 | 57.35 | 17.69 |
| SpinePose-x* | SpineTrack | 37 | 75.9/80.1 | 77.6/81.8 | 86.3/88.5 | 86.3/89.7 | 89.3/91.0 | 88.9/89.9 | 50.69 | 17.37 |

For evaluation instructions and to reproduce the results reported in our paper, please refer to the evaluation branch of this repository:

git clone https://github.com/dfki-av/spinepose.git
cd spinepose
git checkout evaluation

The README in the evaluation branch provides detailed steps for setting up the evaluation environment and running the evaluation scripts on the SpineTrack dataset.

Citation

@InProceedings{Khan_2025_CVPRW,
    author    = {Khan, Muhammad Saif Ullah and Krau{\ss}, Stephan and Stricker, Didier},
    title     = {Towards Unconstrained 2D Pose Estimation of the Human Spine},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {6171-6180}
}
SIMSPINE: A Biomechanics-Aware Simulation Framework for 3D Spine Motion Annotation and Benchmarking (CVPR 2026)

SIMSPINE Homepage SIMSPINE Dataset Conference arXiv

Abstract

Modeling spinal motion is fundamental to understanding human biomechanics, yet remains underexplored in computer vision due to the spine's complex multi-joint kinematics and the lack of large-scale 3D annotations. We present a biomechanics-aware keypoint simulation framework that augments existing human pose datasets with anatomically consistent 3D spinal keypoints derived from musculoskeletal modeling. Using this framework, we create the first open dataset, named SIMSPINE, which provides sparse vertebra-level 3D spinal annotations for natural full-body motions in indoor multi-camera capture without external restraints. With 2.14 million frames, this enables data-driven learning of vertebral kinematics from subtle posture variations and bridges the gap between musculoskeletal simulation and computer vision. In addition, we release pretrained baselines covering fine-tuned 2D detectors, monocular 3D pose lifting models, and multi-view reconstruction pipelines, establishing a unified benchmark for biomechanically valid spine motion estimation. Specifically, our 2D spine baselines improve the state-of-the-art from 0.63 to 0.80 AUC in controlled environments, and from 0.91 to 0.93 AP for in-the-wild spine tracking. Together, the simulation framework and SIMSPINE dataset advance research in vision-based biomechanics, motion analysis, and digital human modeling by enabling reproducible, anatomically grounded 3D spine estimation under natural conditions.

Citation

@InProceedings{Khan_2026_CVPR,
    author    = {Khan, Muhammad Saif Ullah and Stricker, Didier},
    title     = {SIMSPINE: A Biomechanics-Aware Simulation Framework for 3D Spine Motion Annotation and Benchmarking},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2026},
    pages     = {}
}

License

This project is released under the CC-BY-NC-4.0 License. Commercial use is prohibited, and appropriate attribution is required for research or educational applications.

Download files

Download the file for your platform.

Source Distribution

spinepose-2.0.2.tar.gz (43.5 kB)

Uploaded Source

Built Distribution


spinepose-2.0.2-py3-none-any.whl (43.5 kB)

Uploaded Python 3

File details

Details for the file spinepose-2.0.2.tar.gz.

File metadata

  • Download URL: spinepose-2.0.2.tar.gz
  • Size: 43.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for spinepose-2.0.2.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | c47f52e34177fc0da74aaf20a9341d24ebfe553158dac054b4b4f4a97e5c9972 |
| MD5 | 89bf22cf590240765305fa408a2202a6 |
| BLAKE2b-256 | d3a549d8031ee70307d509368dc019b358bdc78ff8aff66b16be33f6e7650a65 |

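To check a downloaded archive against the digests above, a generic chunked SHA-256 verifier with Python's hashlib suffices. This helper is illustrative and not part of the spinepose package:

```python
import hashlib

def verify_sha256(path, expected_hex, chunk_size=1 << 20):
    """Return True if the file's SHA-256 digest matches expected_hex.

    Reads the file in chunks so large archives are never fully in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()
```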

Provenance

The following attestation bundles were made for spinepose-2.0.2.tar.gz:

Publisher: python-publish.yml on dfki-av/spinepose

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file spinepose-2.0.2-py3-none-any.whl.

File metadata

  • Download URL: spinepose-2.0.2-py3-none-any.whl
  • Size: 43.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for spinepose-2.0.2-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4633aa166bc540b0b2e1ae1b0e10822cd779647b29b76b0e9202839019af029a |
| MD5 | e6c888dafa3f4d8a4ff28a50aa0d34bf |
| BLAKE2b-256 | 7f188dcb8cd66cca86545415a32193bde9ad02dcf4e2bb6218fd71a824715d9a |


Provenance

The following attestation bundles were made for spinepose-2.0.2-py3-none-any.whl:

Publisher: python-publish.yml on dfki-av/spinepose

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
