
myogait

Markerless video-based gait analysis toolkit.

Python 3.10 | 3.11 | 3.12 · License: MIT

Author: Frederic Fer, Institut de Myologie (f.fer@institut-myologie.org)


Features

  • Multi-model pose extraction — MediaPipe, YOLO, Sapiens (3 sizes), ViTPose, RTMW, HRNet, RTMPose
  • Sapiens depth estimation — monocular relative depth per landmark
  • Sapiens body segmentation — 28-class body-part labels per landmark
  • Butterworth, Savitzky-Golay, and Kalman filtering
  • Joint angle computation (sagittal vertical axis, sagittal classic)
  • 4 event detection methods (Zeni, crossing, velocity, O'Connor)
  • Gait cycle segmentation and normalization
  • Spatio-temporal analysis (cadence, stride time, stance%)
  • Symmetry and variability indices
  • Step length and walking speed estimation
  • Harmonic ratio and regularity index
  • Advanced pathology detection (Trendelenburg, spastic, steppage, crouch)
  • Biomechanical validation against physiological ranges
  • Publication-quality matplotlib plots
  • Multi-page PDF clinical report
  • Export to CSV, OpenSim (.mot/.trc), Excel
  • YAML/JSON pipeline configuration
  • CLI with extract, run, analyze, batch, download, and info commands

Installation

pip install myogait

Install with a specific pose estimation backend:

pip install myogait[mediapipe]   # MediaPipe (lightweight, CPU)
pip install myogait[yolo]        # YOLO via Ultralytics
pip install myogait[sapiens]     # Sapiens (Meta AI) + Intel Arc GPU support
pip install myogait[vitpose]     # ViTPose via HuggingFace Transformers
pip install myogait[rtmw]        # RTMW 133-keypoint whole-body
pip install myogait[mmpose]      # HRNet / RTMPose via MMPose
pip install myogait[all]         # All backends

Supported Pose Estimation Backends

| Backend | Install | Notes |
|---|---|---|
| MediaPipe | pip install myogait[mediapipe] | Fast, good for real-time. 33 landmarks |
| YOLOv8-Pose | pip install myogait[yolo] | Fast, robust. 17 COCO keypoints |
| ViTPose | pip install myogait[vitpose] | State-of-the-art accuracy |
| Sapiens | pip install myogait[sapiens] | Meta model, depth estimation |
| RTMPose/RTMLib | pip install myogait[rtmw] | Real-time, ONNX optimized |
| MMPose | pip install myogait[mmpose] | Academic reference, many models |

GPU Support

All deep-learning backends support NVIDIA CUDA GPUs; MediaPipe runs on CPU. Sapiens and ViTPose also support Intel Arc / Xe GPUs (via intel-extension-for-pytorch).

| Model | CUDA | Intel Arc (XPU) | CPU |
|---|---|---|---|
| MediaPipe | | | yes |
| YOLO | yes | | yes |
| Sapiens (pose, depth, seg) | yes | yes | yes |
| ViTPose | yes | yes | yes |
| RTMW | yes (onnxruntime) | | yes |
| HRNet / RTMPose | yes | | yes |
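In practice, backend selection reduces to probing the available accelerators in order. A minimal sketch of that cascade (assuming PyTorch, with torch.xpu provided by intel-extension-for-pytorch; myogait's own device logic may differ):

```python
def pick_device() -> str:
    """Best-effort accelerator choice: CUDA, then Intel XPU, then CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch at all: CPU-only backends such as MediaPipe
    if torch.cuda.is_available():
        return "cuda"
    # intel-extension-for-pytorch registers Intel Arc / Xe GPUs as the 'xpu' backend
    if getattr(torch, "xpu", None) is not None and torch.xpu.is_available():
        return "xpu"
    return "cpu"

print(pick_device())
```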

Supported Pose Models

| Name | Keypoints | Format | Backend | Install |
|---|---|---|---|---|
| mediapipe | 33 | MediaPipe | Google MediaPipe Tasks (heavy) | pip install myogait[mediapipe] |
| yolo | 17 | COCO | Ultralytics YOLOv8-Pose | pip install myogait[yolo] |
| sapiens-quick | 17 COCO + 308 Goliath | COCO | Meta Sapiens 0.3B (336M params) | pip install myogait[sapiens] |
| sapiens-mid | 17 COCO + 308 Goliath | COCO | Meta Sapiens 0.6B (664M params) | pip install myogait[sapiens] |
| sapiens-top | 17 COCO + 308 Goliath | COCO | Meta Sapiens 1B (1.1B params) | pip install myogait[sapiens] |
| vitpose | 17 | COCO | ViTPose-base (HuggingFace) | pip install myogait[vitpose] |
| vitpose-large | 17 | COCO | ViTPose+-large (HuggingFace) | pip install myogait[vitpose] |
| vitpose-huge | 17 | COCO | ViTPose+-huge (HuggingFace) | pip install myogait[vitpose] |
| rtmw | 17 COCO + 133 whole-body | COCO | RTMW via rtmlib | pip install myogait[rtmw] |
| hrnet | 17 | COCO | HRNet-W48 via MMPose | pip install myogait[mmpose] |
| mmpose | 17 | COCO | RTMPose-m via MMPose | pip install myogait[mmpose] |

Sapiens Auxiliary Models

In addition to pose, Sapiens provides depth estimation and body-part segmentation. These run alongside any Sapiens pose model to enrich per-landmark data.

Depth Estimation

Monocular relative depth. Per-landmark depth values (closer = higher) are stored in each frame as landmark_depths.

| Size | HuggingFace repo |
|---|---|
| 0.3b | facebook/sapiens-depth-0.3b-torchscript |
| 0.6b | facebook/sapiens-depth-0.6b-torchscript |
| 1b | facebook/sapiens-depth-1b-torchscript |
| 2b | facebook/sapiens-depth-2b-torchscript |

Body-Part Segmentation

28-class segmentation (face, torso, arms, legs, hands, feet, clothing...). Per-landmark body-part labels are stored in each frame as landmark_body_parts.

| Size | mIoU | HuggingFace repo |
|---|---|---|
| 0.3b | 76.7 | facebook/sapiens-seg-0.3b-torchscript |
| 0.6b | 77.8 | facebook/sapiens-seg-0.6b-torchscript |
| 1b | 79.9 | facebook/sapiens-seg-1b-torchscript |

Usage

# Pose + depth + segmentation in one pass
myogait extract video.mp4 -m sapiens-quick --with-depth --with-seg

# Or via Python
from myogait import extract
data = extract("video.mp4", model="sapiens-top", with_depth=True, with_seg=True)

Experimental AIM Benchmark Input Degradation

myogait includes an experimental pre-extraction degradation layer for robustness benchmarking in the AIM benchmark context only. By default, it is disabled and applies no modification.

Python API:

from myogait import extract

data = extract(
    "video.mp4",
    model="mediapipe",
    experimental={
        "enabled": True,
        "target_fps": 15.0,    # frame-rate degradation
        "downscale": 0.6,      # spatial degradation
        "contrast": 0.7,       # contrast degradation
        "aspect_ratio": 1.2,   # non-square stretch
        "perspective_x": 0.2,  # side-like perspective skew
        "perspective_y": 0.1,  # forward/backward tilt skew
    },
)

CLI:

myogait extract video.mp4 \
  --exp-enable \
  --exp-target-fps 15 \
  --exp-downscale 0.6 \
  --exp-contrast 0.7 \
  --exp-aspect-ratio 1.2 \
  --exp-perspective-x 0.2 \
  --exp-perspective-y 0.1
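As a rough numeric sketch of what two of these knobs could mean (illustrative only, not myogait's actual implementation): contrast scales pixel deviation from the frame mean, and downscale shrinks the frame before pose extraction.

```python
import numpy as np

def degrade_contrast(frame: np.ndarray, contrast: float) -> np.ndarray:
    """Scale pixel deviation from the frame mean: 1.0 = unchanged, <1.0 = flatter."""
    mean = frame.mean()
    out = (frame.astype(np.float64) - mean) * contrast + mean
    return np.clip(out, 0, 255).astype(np.uint8)

def degrade_downscale(frame: np.ndarray, factor: float) -> np.ndarray:
    """Crude nearest-neighbour downscale by index striding."""
    step = max(1, round(1 / factor))
    return frame[::step, ::step]

frame = np.full((100, 100), 128, dtype=np.uint8)
frame[:50] = 200                       # bright top half, dark bottom half
low = degrade_contrast(frame, 0.7)     # deviations shrink by 30%
small = degrade_downscale(frame, 0.6)  # roughly half resolution
print(low.max(), small.shape)          # 189 (50, 50)
```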

Experimental VICON Alignment (Single Video)

For AIM benchmark workflows, you can align one myogait result with one VICON trial and attach ground-truth comparison metrics to the JSON. This is experimental and disabled by default in the standard pipeline.

from myogait import run_single_trial_vicon_benchmark

data = run_single_trial_vicon_benchmark(
    data,                               # myogait result dict
    trial_dir="/path/to/trial_01_1",   # contains *.mat files
    vicon_fps=200.0,
    max_lag_seconds=10.0,
)

# Results in data["experimental"]["vicon_benchmark"]
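Under the hood, alignment of this kind searches for the time lag (bounded by max_lag_seconds) that best correlates the video-derived signal with the VICON one. A toy cross-correlation sketch of that idea (hypothetical, not the library's internals):

```python
import numpy as np

def best_lag(reference, signal, fps: float, max_lag_s: float) -> float:
    """Lag (in seconds) to apply to `signal` so it best matches `reference`."""
    max_lag = int(max_lag_s * fps)

    def score(lag: int) -> float:
        # Overlap the two signals shifted by `lag` samples and correlate
        a = reference[max(0, lag): len(reference) + min(0, lag)]
        b = signal[max(0, -lag): len(signal) + min(0, -lag)]
        n = min(len(a), len(b))
        return float(np.dot(a[:n], b[:n]))

    return max(range(-max_lag, max_lag + 1), key=score) / fps

reference = np.zeros(100)
reference[30] = 1.0   # event at t = 3.0 s in the reference
signal = np.zeros(100)
signal[20] = 1.0      # same event at t = 2.0 s in the other signal
print(best_lag(reference, signal, fps=10.0, max_lag_s=5.0))  # 1.0
```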

Experimental Single-Pair Benchmark Runner (AIM Only)

You can run a full benchmark grid on one (video, vicon_trial) pair and generate:

  • one JSON per run (<output_dir>/runs/*.json)
  • one CSV summary (<output_dir>/benchmark_summary.csv)
  • one manifest (<output_dir>/benchmark_manifest.json)

from myogait import run_single_pair_benchmark

manifest = run_single_pair_benchmark(
    video_path="video.mp4",
    vicon_trial_dir="/path/to/trial_01_1",
    output_dir="./benchmark_out",
    benchmark_config={
        "models": ["mediapipe", "yolo"],         # or "all"
        "event_methods": "all",                  # or ["zeni", "gk_zeni", ...]
        "normalization_variants": [
            {"name": "none", "enabled": False, "kwargs": {}},
            {"name": "butterworth", "enabled": True, "kwargs": {"filters": ["butterworth"]}},
        ],
        "degradation_variants": [
            {"name": "none", "experimental": {"enabled": False}},
            {"name": "lowres", "experimental": {"enabled": True, "downscale": 0.7, "target_fps": 15.0}},
        ],
        "continue_on_error": True,
    },
)

print(manifest["summary_csv"])

This runner is experimental and intended only for AIM benchmark workflows.

References

ViTPose

Vision Transformer for pose estimation (NeurIPS 2022). Top-down architecture with RT-DETR person detector. Fully pip-installable via HuggingFace Transformers.

| Variant | Size | HuggingFace repo |
|---|---|---|
| vitpose (base) | 90M | usyd-community/vitpose-base-simple |
| vitpose-large | 400M | usyd-community/vitpose-plus-large |
| vitpose-huge | 900M | usyd-community/vitpose-plus-huge |

  • Paper: Xu et al., ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation, NeurIPS 2022 — arXiv:2204.12484
  • Paper: Xu et al., ViTPose++: Vision Transformer for Generic Body Pose Estimation, TPAMI 2024 — arXiv:2212.04246

RTMW (Whole-Body 133 Keypoints)

Real-Time Multi-person Whole-body estimation: body (17) + feet (6) + face (68) + hands (42) = 133 keypoints. Uses rtmlib (lightweight ONNX inference, no MMPose required).

| Mode | Pose model | Speed |
|---|---|---|
| performance | RTMW-x-l 384x288 | Slower, most accurate |
| balanced | RTMW-x-l 256x192 | Default |
| lightweight | RTMW-l-m 256x192 | Fastest |
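The 133 keypoints concatenate the four groups in a fixed order (body, feet, face, hands, following the COCO-WholeBody convention), so splitting an output array is plain index arithmetic:

```python
# COCO-WholeBody layout: 17 body + 6 feet + 68 face + 42 hand keypoints = 133
SLICES = {
    "body": slice(0, 17),
    "feet": slice(17, 23),    # 3 keypoints per foot
    "face": slice(23, 91),
    "hands": slice(91, 133),  # 21 keypoints per hand
}

def split_wholebody(keypoints):
    """Split a length-133 keypoint sequence into named groups."""
    assert len(keypoints) == 133
    return {name: keypoints[s] for name, s in SLICES.items()}

groups = split_wholebody(list(range(133)))
print({name: len(part) for name, part in groups.items()})
# {'body': 17, 'feet': 6, 'face': 68, 'hands': 42}
```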

Other Pose Models

MediaPipe

Google's MediaPipe PoseLandmarker — 33 landmarks with full-body coverage. Uses the heavy model variant (most accurate). Auto-downloaded on first use.

YOLO Pose

Ultralytics YOLOv8-Pose — 17 COCO keypoints, fast single-shot detection.

HRNet / RTMPose (MMPose)

HRNet-W48 and RTMPose-m via the OpenMMLab MMPose framework — 17 COCO keypoints.

  • MMPose: github.com/open-mmlab/mmpose
  • HRNet: Sun et al., Deep High-Resolution Representation Learning for Visual Recognition, TPAMI 2019
  • RTMPose: Jiang et al., RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose, 2023

Tutorials & Examples

Basic Pipeline

The core workflow extracts pose landmarks from video, preprocesses the signal, computes joint kinematics, detects gait events, and derives spatio-temporal parameters.

from myogait import extract, normalize, compute_angles, detect_events
from myogait import segment_cycles, analyze_gait

# 1. Extract landmarks from video
data = extract("walking_video.mp4", model="mediapipe")

# 2. Preprocess: filter noise, handle gaps
data = normalize(data, filters=["butterworth"])

# 3. Compute joint angles (sagittal + frontal if depth available)
data = compute_angles(data)

# 4. Detect gait events (heel strikes, toe offs)
data = detect_events(data, method="gk_bike")  # uses gaitkit Bayesian detector

# 5. Segment into gait cycles
cycles = segment_cycles(data)

# 6. Compute spatio-temporal parameters
stats = analyze_gait(data, cycles)
print(f"Cadence: {stats['spatiotemporal']['cadence_steps_per_min']:.1f} steps/min")
print(f"Walking speed: {stats['walking_speed']['speed_mean']:.2f} m/s")
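The spatio-temporal arithmetic behind step 6 is straightforward once events are known: cadence is steps per minute across all heel strikes, and a stride is the interval between successive same-side heel strikes. A self-contained illustration with hypothetical helpers (not part of the myogait API):

```python
def cadence_steps_per_min(heel_strike_times):
    """Cadence from sorted heel-strike times (both feet), in seconds."""
    steps = len(heel_strike_times) - 1
    duration = heel_strike_times[-1] - heel_strike_times[0]
    return 60.0 * steps / duration

def stride_times(same_side_heel_strikes):
    """Intervals between successive heel strikes of the same foot."""
    return [b - a for a, b in zip(same_side_heel_strikes, same_side_heel_strikes[1:])]

# Alternating left/right heel strikes every 0.55 s
times = [i * 0.55 for i in range(10)]
print(round(cadence_steps_per_min(times), 1))  # 109.1 steps/min
print(stride_times(times[::2])[0])             # 1.1 s per stride
```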

Clinical Scores

Compute standardized gait quality indices used in clinical gait analysis laboratories worldwide.

from myogait import gait_profile_score_2d, sagittal_deviation_index
from myogait import gait_variable_scores, movement_analysis_profile

# GPS-2D: overall gait quality score (RMS deviation from normative)
gps = gait_profile_score_2d(cycles)
print(f"GPS-2D: {gps['gps_2d_overall']:.1f}")

# SDI: Sagittal Deviation Index (0-120, 100 = normal)
# Note: This is a z-score based index, NOT the GDI (Schwartz & Rozumalski 2008).
sdi = sagittal_deviation_index(cycles)
print(f"SDI: {sdi['gdi_2d_overall']:.1f}")

# Per-joint deviation scores
gvs = gait_variable_scores(cycles)
for side in ("left", "right"):
    for joint, score in gvs[side].items():
        print(f"  {side} {joint}: {score:.1f}")
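At their core, GPS-style scores are root-mean-square deviations between the patient's time-normalized joint curves and normative means, averaged over joints. A toy numeric sketch of that idea (not the library's normative data or exact weighting):

```python
import numpy as np

def rms_deviation(patient_curve, normative_mean):
    """RMS deviation of a 101-point gait curve from the normative mean (degrees)."""
    diff = np.asarray(patient_curve) - np.asarray(normative_mean)
    return float(np.sqrt(np.mean(diff ** 2)))

t = np.linspace(0, 100, 101)                  # % of gait cycle
normative = 30 * np.sin(2 * np.pi * t / 100)  # toy normative knee flexion curve
patient = normative + 5                       # constant 5 degree offset
print(round(rms_deviation(patient, normative), 6))  # 5.0
```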

Data Quality Assessment

Evaluate landmark detection confidence and flag outliers before running the analysis pipeline.

from myogait import confidence_filter, detect_outliers, data_quality_score

# Filter low-confidence landmarks
data = confidence_filter(data, threshold=0.3)

# Detect and interpolate outliers
data = detect_outliers(data, z_thresh=3.0)

# Quality report
quality = data_quality_score(data)
print(f"Quality score: {quality['score']}/100")

Normative Comparison

Compare patient kinematics against published normative reference bands (Perry & Burnfield).

from myogait import plot_normative_comparison, get_normative_band

# Plot patient vs normative bands (Perry & Burnfield)
fig = plot_normative_comparison(data, cycles, plane="both")
fig.savefig("normative_comparison.png", dpi=150)

# Access raw normative data
band = get_normative_band("hip", stratum="adult")
mean, lower, upper = band["mean"], band["lower"], band["upper"]

Event Detection with gaitkit

myogait integrates multiple gait event detection algorithms, including Bayesian and ensemble methods from the gaitkit library.

from myogait import detect_events, event_consensus, list_event_methods

# List all available methods
print(list_event_methods())
# ['zeni', 'velocity', 'crossing', 'oconnor',
#  'gk_bike', 'gk_zeni', 'gk_ensemble', ...]

# Single method (gaitkit Bayesian BIS -- best F1 score)
data = detect_events(data, method="gk_bike")

# Consensus: vote across multiple detectors
data = event_consensus(data, methods=["gk_bike", "gk_zeni", "gk_oconnor"])
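Conceptually, a consensus detector clusters nearby event times from the individual methods and keeps clusters that collect enough votes. A hypothetical majority-vote sketch (not myogait's actual algorithm):

```python
def consensus_votes(method_events, tolerance=0.05, min_votes=2):
    """Cluster event times (seconds) across detectors; keep well-supported clusters."""
    all_times = sorted(t for events in method_events for t in events)
    clusters, current = [], [all_times[0]]
    for t in all_times[1:]:
        if t - current[-1] <= tolerance:
            current.append(t)        # same event seen by another detector
        else:
            clusters.append(current)
            current = [t]
    clusters.append(current)
    return [round(sum(c) / len(c), 3) for c in clusters if len(c) >= min_votes]

detections = [[1.00, 2.01], [1.02, 2.00], [1.01, 2.02, 3.50]]
print(consensus_votes(detections))  # [1.01, 2.01] -- the lone 3.50 event is dropped
```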

Export to OpenSim / Pose2Sim

Export landmarks and kinematics in formats compatible with OpenSim musculoskeletal modeling and Pose2Sim multi-camera triangulation.

from myogait import export_trc, export_mot, export_openpose_json
from myogait import export_opensim_scale_setup, export_ik_setup

# OpenSim .trc markers file (with height-based unit conversion)
export_trc(data, "markers.trc", opensim_model="gait2392")

# OpenSim .mot kinematics
export_mot(data, "kinematics.mot")

# OpenSim Scale Tool setup XML
export_opensim_scale_setup(data, "scale_setup.xml", model_file="gait2392.osim")

# OpenSim IK setup XML
export_ik_setup("markers.trc", "ik_setup.xml", model_file="scaled_model.osim")

# Pose2Sim: export OpenPose-format JSON for triangulation
export_openpose_json(data, "./openpose_output/", model="BODY_25")

Video Overlay & Visualization

Generate annotated videos, anonymized stick-figure animations, and publication-ready dashboards.

from myogait import render_skeleton_video, render_stickfigure_animation
from myogait import plot_summary, plot_gvs_profile

# Skeleton overlay on original video
render_skeleton_video("video.mp4", data, "overlay.mp4", show_angles=True)

# Anonymized stick figure GIF
render_stickfigure_animation(data, "stickfigure.gif")

# Summary dashboard
fig = plot_summary(data, cycles, stats)
fig.savefig("dashboard.png", dpi=150)

# MAP barplot (Movement Analysis Profile)
fig = plot_gvs_profile(gvs)
fig.savefig("gvs_profile.png", dpi=150)

PDF Report

Generate a multi-page clinical report with kinematic plots, spatio-temporal tables, and normative comparisons.

from myogait import generate_report

# Generate clinical PDF report (French or English)
generate_report(data, cycles, stats, "rapport_marche.pdf", language="fr")

Multiple Export Formats

Export analysis results to a variety of tabular and structured formats for downstream processing and archival.

from myogait import export_csv, export_excel, to_dataframe, export_summary_json

# CSV files (one per data type)
export_csv(data, "./csv_output/", cycles, stats)

# Excel workbook
export_excel(data, "gait_analysis.xlsx", cycles, stats)

# Pandas DataFrame for custom analysis
df = to_dataframe(data, what="angles")
print(df.head())

# Compact summary JSON
export_summary_json(data, cycles, stats, "summary.json")

CLI Usage

Run the full pipeline on a video:

myogait run video.mp4                                    # MediaPipe (default)
myogait run video.mp4 -m sapiens-quick                   # Sapiens 0.3B
myogait run video.mp4 -m sapiens-top --with-depth        # Sapiens 1B + depth
myogait run video.mp4 -m vitpose                         # ViTPose
myogait run video.mp4 -m rtmw                            # RTMW 133 keypoints

Extract landmarks only:

myogait extract video.mp4 -m sapiens-top --with-depth --with-seg

Analyze previously extracted results:

myogait analyze result.json --csv --pdf

Batch process multiple videos:

myogait batch *.mp4 -o results/

Download models:

myogait download --list                 # list all available models
myogait download sapiens-0.3b           # Sapiens pose 0.3B
myogait download sapiens-depth-1b       # Sapiens depth 1B
myogait download sapiens-seg-0.6b       # Sapiens seg 0.6B

Inspect a result file:

myogait info result.json

API Reference

All functions operate on a single data dict that flows through the pipeline.

| Function | Description |
|---|---|
| **Core pipeline** | |
| extract(video, model, experimental=None) | Extract pose landmarks from video |
| normalize(data, filters) | Filter and normalize landmark trajectories |
| compute_angles(data) | Compute sagittal joint angles |
| compute_frontal_angles(data) | Compute frontal plane angles (requires depth) |
| detect_events(data, method) | Detect gait events (15 methods incl. gaitkit) |
| event_consensus(data, methods) | Multi-method voting for robust event detection |
| segment_cycles(data) | Segment into individual gait cycles |
| analyze_gait(data, cycles) | Compute spatio-temporal parameters |
| run_single_trial_vicon_benchmark(data, trial_dir) | Experimental VICON alignment + metrics (single trial) |
| run_single_pair_benchmark(video, trial_dir, output_dir, benchmark_config=None) | Experimental benchmark grid runner (single video + single VICON trial) |
| **Clinical scores** | |
| gait_profile_score_2d(cycles) | GPS-2D: overall gait deviation (sagittal + frontal) |
| sagittal_deviation_index(cycles) | SDI (Sagittal Deviation Index): z-score based 0-120 index (100 = normal). Not the GDI. |
| gait_variable_scores(cycles) | Per-joint deviation vs normative |
| movement_analysis_profile(cycles) | MAP barplot data |
| **Quality** | |
| confidence_filter(data) | Remove low-confidence landmarks |
| detect_outliers(data) | Detect and interpolate spikes |
| data_quality_score(data) | Composite quality score 0-100 |
| fill_gaps(data) | Interpolate missing landmark gaps |
| **Analysis** | |
| walking_speed(data, cycles) | Estimated walking speed |
| step_length(data, cycles) | Step/stride length estimation |
| stride_variability(data, cycles) | CV of spatio-temporal parameters |
| arm_swing_analysis(data, cycles) | Arm swing amplitude and asymmetry |
| segment_lengths(data) | Anthropometric segment lengths |
| detect_pathologies(data, cycles) | Pattern detection (equinus, antalgic, etc.) |
| **Visualization** | |
| plot_summary(data, cycles, stats) | Summary dashboard |
| plot_normative_comparison(data, cycles) | Patient vs normative bands |
| plot_gvs_profile(gvs) | Movement Analysis Profile barplot |
| render_skeleton_video(video, data, out) | Skeleton overlay on video |
| render_stickfigure_animation(data, out) | Anonymized stick figure GIF |
| **Export** | |
| export_csv(data, dir, cycles, stats) | CSV files |
| export_mot(data, path) | OpenSim .mot kinematics |
| export_trc(data, path) | OpenSim .trc markers |
| export_openpose_json(data, dir) | OpenPose JSON for Pose2Sim |
| export_opensim_scale_setup(data, path) | OpenSim Scale Tool XML |
| export_ik_setup(trc, path) | OpenSim IK setup XML |
| to_dataframe(data, what) | Pandas DataFrame |
| generate_report(data, cycles, stats, path) | Multi-page clinical PDF report |
| validate_biomechanical(data, cycles) | Validate against physiological ranges |

Configuration

myogait supports YAML and JSON pipeline configuration files:

from myogait import load_config, save_config

config = load_config("pipeline.yaml")
config["filter"]["method"] = "butterworth"
config["filter"]["cutoff"] = 6.0
save_config(config, "pipeline_updated.yaml")
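The same round trip works for JSON configs with only the standard library; the schema below is assumed from the snippets in this section:

```python
import json
import os
import tempfile

config = {
    "extract": {"model": "mediapipe", "experimental": {"enabled": False}},
    "filter": {"method": "butterworth", "cutoff": 6.0},
}

# Write, reload, tweak, and persist the pipeline configuration
path = os.path.join(tempfile.mkdtemp(), "pipeline.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

with open(path) as f:
    loaded = json.load(f)
loaded["filter"]["cutoff"] = 8.0

with open(path, "w") as f:
    json.dump(loaded, f, indent=2)
print(loaded["filter"])  # {'method': 'butterworth', 'cutoff': 8.0}
```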

Experimental degradation can also be set in config (disabled by default):

extract:
  model: mediapipe
  experimental:
    enabled: false
    target_fps: null
    downscale: 1.0
    contrast: 1.0
    aspect_ratio: 1.0
    perspective_x: 0.0
    perspective_y: 0.0

JSON Output Format

When using Sapiens with depth and segmentation:

{
  "extraction": {
    "model": "sapiens-top",
    "depth_model": "sapiens-depth-1b",
    "seg_model": "sapiens-seg-1b",
    "auxiliary_format": "goliath308",
    "seg_classes": ["Background", "Apparel", "Face_Neck", "..."]
  },
  "frames": [
    {
      "frame_idx": 0,
      "landmarks": { "NOSE": {"x": 0.52, "y": 0.31, "visibility": 0.95} },
      "goliath308": [[0.52, 0.31, 0.95], "..."],
      "landmark_depths": { "NOSE": 0.73, "LEFT_HIP": 0.45, "..." : 0.0 },
      "landmark_body_parts": { "NOSE": "Face_Neck", "LEFT_HIP": "Left_Upper_Leg" }
    }
  ]
}
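Reading the structure back is plain dict traversal; field names below are taken from the layout above (values truncated to one landmark for brevity):

```python
import json

result = json.loads("""
{
  "frames": [
    {"frame_idx": 0,
     "landmarks": {"NOSE": {"x": 0.52, "y": 0.31, "visibility": 0.95}},
     "landmark_depths": {"NOSE": 0.73},
     "landmark_body_parts": {"NOSE": "Face_Neck"}}
  ]
}
""")

for frame in result["frames"]:
    for name, point in frame["landmarks"].items():
        # Depth and body-part maps are only present when --with-depth / --with-seg were used
        depth = frame.get("landmark_depths", {}).get(name)
        part = frame.get("landmark_body_parts", {}).get(name)
        print(name, point["x"], depth, part)  # NOSE 0.52 0.73 Face_Neck
```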

Acknowledgments

myogait is developed at the Institut de Myologie (Paris, France) by the PhysioEvalLab / IDMDataHub team.

This work is supported by:

Institut de Myologie   AFM-Téléthon   Fondation Myologie   Téléthon

Citation

If you use myogait in your research, please cite:

@software{myogait,
  author = {Fer, Frederic},
  title = {myogait: Markerless video-based gait analysis toolkit},
  year = {2025},
  institution = {Institut de Myologie, Paris, France},
  publisher = {GitHub},
  url = {https://github.com/IDMDataHub/myogait}
}

A peer-reviewed publication is in preparation.

License

This project is licensed under the MIT License. See LICENSE for details.
