myogait
Markerless video-based gait analysis toolkit.
Author: Frederic Fer, Institut de Myologie (f.fer@institut-myologie.org)
Features
- Multi-model pose extraction — MediaPipe, YOLO, Sapiens (3 sizes), Sapiens 2 (4 sizes), ViTPose, RTMW, HRNet, RTMPose, OpenPose, AlphaPose, Detectron2
- Sapiens / Sapiens 2 depth estimation — monocular relative depth per landmark
- Sapiens / Sapiens 2 body segmentation — 28/29-class body-part labels per landmark
- Frame coherence scoring — per-frame biomechanical quality metric (z-score based)
- Butterworth, Savitzky-Golay, and Kalman filtering
- Joint angle computation (sagittal vertical axis, sagittal classic)
- 4 event detection methods (Zeni, crossing, velocity, O'Connor)
- Gait cycle segmentation and normalization
- Spatio-temporal analysis (cadence, stride time, stance%)
- Symmetry and variability indices
- Step length and walking speed estimation
- Harmonic ratio and regularity index
- Advanced pathology detection (Trendelenburg, spastic, steppage, crouch)
- Biomechanical validation against physiological ranges
- Publication-quality matplotlib plots
- Multi-page PDF clinical report
- Export to CSV, OpenSim (.mot/.trc), Excel
- YAML/JSON pipeline configuration
- CLI with `extract`, `run`, `analyze`, `batch`, `download`, and `info` commands
Installation
pip install myogait
Install with a specific pose estimation backend:
pip install myogait[mediapipe] # MediaPipe (lightweight, CPU)
pip install myogait[yolo] # YOLO via Ultralytics
pip install myogait[sapiens] # Sapiens v1 (Meta AI) + Intel Arc GPU support
pip install myogait[sapiens2] # Sapiens 2 (Meta AI, ICLR 2026) — see "Sapiens 2" section below for one-shot setup
pip install myogait[vitpose] # ViTPose via HuggingFace Transformers
pip install myogait[rtmw] # RTMW 133-keypoint whole-body
pip install myogait[mmpose] # HRNet / RTMPose via MMPose
pip install myogait[alphapose] # AlphaPose FastPose
pip install myogait[detectron2] # Detectron2 Keypoint R-CNN
pip install myogait[all] # All backends
Supported Pose Estimation Backends
| Backend | Install | Notes |
|---|---|---|
| MediaPipe | `pip install myogait[mediapipe]` | Fast, good for real-time. 33 landmarks |
| YOLOv8-Pose | `pip install myogait[yolo]` | Fast, robust. 17 COCO keypoints |
| ViTPose | `pip install myogait[vitpose]` | State-of-the-art accuracy |
| Sapiens | `pip install myogait[sapiens]` | Meta model, depth estimation |
| Sapiens 2 | `pip install myogait[sapiens2]` | Meta ICLR 2026, +4 mAP, 4K support |
| RTMPose/RTMLib | `pip install myogait[rtmw]` | Real-time, ONNX optimized |
| MMPose | `pip install myogait[mmpose]` | Academic reference, many models |
| OpenPose | Built-in (OpenCV DNN) | Historical baseline, bottom-up. 17 COCO keypoints |
| AlphaPose | `pip install myogait[alphapose]` | Top-down baseline, biomechanics standard |
| Detectron2 | `pip install myogait[detectron2]` | Meta academic baseline, Keypoint R-CNN |
GPU Support
All deep-learning backends support NVIDIA CUDA GPUs; MediaPipe runs on CPU only. Sapiens, Sapiens 2, ViTPose, and AlphaPose additionally support
Intel Arc / Xe GPUs (via intel-extension-for-pytorch).
| Model | CUDA | Intel Arc (XPU) | CPU |
|---|---|---|---|
| MediaPipe | — | — | yes |
| YOLO | yes | — | yes |
| Sapiens (pose, depth, seg) | yes | yes | yes |
| Sapiens 2 (pose, depth, seg) | yes | yes | yes |
| ViTPose | yes | yes | yes |
| RTMW | yes (onnxruntime) | — | yes |
| HRNet / RTMPose | yes | — | yes |
| OpenPose | yes (OpenCV DNN) | — | yes |
| AlphaPose | yes | yes | yes |
| Detectron2 | yes | — | yes |
Supported Pose Models
| Name | Keypoints | Format | Backend | Install |
|---|---|---|---|---|
| `mediapipe` | 33 | MediaPipe | Google MediaPipe Tasks (heavy) | `pip install myogait[mediapipe]` |
| `yolo` | 17 | COCO | Ultralytics YOLOv8-Pose | `pip install myogait[yolo]` |
| `sapiens-quick` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 0.3B (336M params) | `pip install myogait[sapiens]` |
| `sapiens-mid` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 0.6B (664M params) | `pip install myogait[sapiens]` |
| `sapiens-top` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 1B (1.1B params) | `pip install myogait[sapiens]` |
| `sapiens2-quick` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 2 0.4B (ICLR 2026) | `pip install myogait[sapiens2]` |
| `sapiens2-mid` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 2 0.8B | `pip install myogait[sapiens2]` |
| `sapiens2-top` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 2 1B | `pip install myogait[sapiens2]` |
| `sapiens2-ultra` | 17 COCO + 308 Goliath | COCO | Meta Sapiens 2 5B | `pip install myogait[sapiens2]` |
| `vitpose` | 17 | COCO | ViTPose-base (HuggingFace) | `pip install myogait[vitpose]` |
| `vitpose-large` | 17 | COCO | ViTPose+-large (HuggingFace) | `pip install myogait[vitpose]` |
| `vitpose-huge` | 17 | COCO | ViTPose+-huge (HuggingFace) | `pip install myogait[vitpose]` |
| `rtmw` | 17 COCO + 133 whole-body | COCO | RTMW via rtmlib | `pip install myogait[rtmw]` |
| `hrnet` | 17 | COCO | HRNet-W48 via MMPose | `pip install myogait[mmpose]` |
| `mmpose` | 17 | COCO | RTMPose-m via MMPose | `pip install myogait[mmpose]` |
| `openpose` | 17 | COCO | CMU OpenPose via OpenCV DNN | Built-in |
| `alphapose` | 17 | COCO | AlphaPose FastPose (ResNet-50) | `pip install myogait[alphapose]` |
| `detectron2` | 17 | COCO | Keypoint R-CNN (R50-FPN, 3x) | `pip install myogait[detectron2]` |
Sapiens Auxiliary Models
In addition to pose, Sapiens provides depth estimation and body-part segmentation. These run alongside any Sapiens pose model to enrich per-landmark data.
Depth Estimation
Monocular relative depth. Per-landmark depth values (closer = higher)
are stored in each frame as landmark_depths.
| Size | HuggingFace repo |
|---|---|
| 0.3b | facebook/sapiens-depth-0.3b-torchscript |
| 0.6b | facebook/sapiens-depth-0.6b-torchscript |
| 1b | facebook/sapiens-depth-1b-torchscript |
| 2b | facebook/sapiens-depth-2b-torchscript |
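As a quick way to inspect these values, here is a minimal sketch; the frame layout follows the "JSON Output Format" section below, and the video path is a placeholder:
from myogait import extract
# Run any Sapiens pose model with depth enabled, then read the
# per-landmark relative depths stored on each frame.
data = extract("video.mp4", model="sapiens-quick", with_depth=True)
for name, depth in data["frames"][0]["landmark_depths"].items():
    print(f"{name}: {depth:.2f}")  # closer = higher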
Body-Part Segmentation
28-class segmentation (face, torso, arms, legs, hands, feet, clothing...).
Per-landmark body-part labels are stored in each frame as landmark_body_parts.
| Size | mIoU | HuggingFace repo |
|---|---|---|
| 0.3b | 76.7 | facebook/sapiens-seg-0.3b-torchscript |
| 0.6b | 77.8 | facebook/sapiens-seg-0.6b-torchscript |
| 1b | 79.9 | facebook/sapiens-seg-1b-torchscript |
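Likewise, the per-landmark labels can be tallied straight from the frames (a minimal sketch under the same assumptions as the depth example above):
from collections import Counter
from myogait import extract
# Count how many landmarks fall on each body-part class in the first frame.
data = extract("video.mp4", model="sapiens-quick", with_seg=True)
labels = Counter(data["frames"][0]["landmark_body_parts"].values())
print(labels.most_common())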
Sapiens 2 (ICLR 2026)
Sapiens 2 improves over v1 with +4 mAP (pose), +24.3 mIoU (seg), and 45.6%
lower angular error (normals). Same Goliath 308 keypoint layout as v1.
Requires torch>=2.7 and safetensors.
| Size | HuggingFace pose repo | HuggingFace depth repo | HuggingFace seg repo |
|---|---|---|---|
| 0.4b | `facebook/sapiens2-pose-0.4b` | `facebook/sapiens2-depth-0.4b` | `facebook/sapiens2-seg-0.4b` |
| 0.8b | `facebook/sapiens2-pose-0.8b` | `facebook/sapiens2-depth-0.8b` | `facebook/sapiens2-seg-0.8b` |
| 1b | `facebook/sapiens2-pose-1b` | `facebook/sapiens2-depth-1b` | `facebook/sapiens2-seg-1b` |
| 5b | `facebook/sapiens2-pose-5b` | `facebook/sapiens2-depth-5b` | `facebook/sapiens2-seg-5b` |
Sapiens 2 segmentation uses 29 classes (adds Eyeglass at index 2).
Plug-and-play setup
Meta ships the Sapiens 2 weights as .safetensors, while the Python package that
rebuilds the model architecture lives only on GitHub (not PyPI).
myogait ships a one-shot helper that bridges the two:
# 1) myogait + Sapiens 2 runtime deps
pip install myogait[sapiens2]
# 2) one-shot setup: installs Meta `sapiens` from GitHub, downloads the
# weights, traces a TorchScript .pt2 cached in ~/.myogait/models/.
# From now on inference no longer needs the Meta package.
myogait setup-sapiens2 --size 0.4b --cleanup-safetensors
Once that command finishes, mg.extract(..., model="sapiens2-quick")
loads the cached .pt2 directly — no Meta package required, no internet
on subsequent runs. Repeat for --size 0.8b, 1b, 5b if you want
the larger variants. The .pt2 is device-specific (constants get
baked at trace time), so the helper traces on the device myogait will
use at inference (CUDA > XPU > CPU, in that order).
If you prefer to keep the Meta sapiens package installed (e.g. for
research code that imports it directly), omit the --uninstall-sapiens
flag: the package stays installed but is no longer on the inference path.
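For reference, the CUDA > XPU > CPU preference can be expressed in a few lines of plain PyTorch; this is a sketch of the stated order, not myogait's actual helper code:
import torch

def preferred_device() -> str:
    # Same priority described above for tracing the .pt2:
    # CUDA first, then Intel XPU, then CPU.
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"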
Usage
# Sapiens v1 — pose + depth + segmentation
myogait extract video.mp4 -m sapiens-quick --with-depth --with-seg
# Sapiens 2 — same API, better accuracy
myogait extract video.mp4 -m sapiens2-top --with-depth --with-seg
# Python
from myogait import extract
data = extract("video.mp4", model="sapiens2-top", with_depth=True, with_seg=True)
End-to-end demo (extract + overlay + side-by-side + angle plots)
python examples/sapiens2_compare_pipeline.py path/to/video.mp4 --out-dir out
Produces in out/:
- `<stem>_<model>.json` — pivot data per model (mediapipe, sapiens-quick, sapiens2-quick), reused on subsequent runs
- `<stem>_<model>.mp4` — skeleton overlay per model
- `compare_3way_slow.mp4` — three panels side-by-side at half speed, with legends and a darkened background so the skeleton stands out
- `compare_angles.png` — per-cycle hip / knee / ankle curves for the left and right sides; every cycle drawn thin, each model's mean drawn thick
The skeleton-on-video alignment honours data["frames"][i]["frame_idx"],
so the auto-cropped intro of long recordings is rendered correctly across
all models.
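In custom rendering code, the same alignment can be reproduced by seeking on frame_idx instead of assuming the pose frames are contiguous. An illustrative sketch using OpenCV, with data as returned by extract():
import cv2
cap = cv2.VideoCapture("video.mp4")
for frame_data in data["frames"]:
    # Seek to the original video frame this pose sample came from.
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_data["frame_idx"])
    ok, image = cap.read()
    if not ok:
        break
    # ... draw frame_data["landmarks"] onto image here ...
cap.release()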
Experimental AIM Benchmark Input Degradation
myogait includes an experimental pre-extraction degradation layer for
robustness benchmarking in the AIM benchmark context only.
By default, it is disabled and applies no modification.
Python API:
from myogait import extract
data = extract(
"video.mp4",
model="mediapipe",
experimental={
"enabled": True,
"target_fps": 15.0, # frame-rate degradation
"downscale": 0.6, # spatial degradation
"contrast": 0.7, # contrast degradation
"aspect_ratio": 1.2, # non-square stretch
"perspective_x": 0.2, # side-like perspective skew
"perspective_y": 0.1, # forward/backward tilt skew
},
)
CLI:
myogait extract video.mp4 \
--exp-enable \
--exp-target-fps 15 \
--exp-downscale 0.6 \
--exp-contrast 0.7 \
--exp-aspect-ratio 1.2 \
--exp-perspective-x 0.2 \
--exp-perspective-y 0.1
Experimental VICON Alignment (Single Video)
For AIM benchmark workflows, you can align one myogait result with one VICON trial and attach ground-truth comparison metrics to the JSON. This is experimental and disabled by default in the standard pipeline.
from myogait import run_single_trial_vicon_benchmark
data = run_single_trial_vicon_benchmark(
data, # myogait result dict
trial_dir="/path/to/trial_01_1", # contains *.mat files
vicon_fps=200.0,
max_lag_seconds=10.0,
)
# Results in data["experimental"]["vicon_benchmark"]
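To inspect what was attached (the exact metric fields are not fixed here, so simply dump the sub-dict):
import json
print(json.dumps(data["experimental"]["vicon_benchmark"], indent=2, default=str))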
Experimental Single-Pair Benchmark Runner (AIM Only)
You can run a full benchmark grid on one (video, vicon_trial) pair and
generate:
- one JSON per run (`<output_dir>/runs/*.json`)
- one CSV summary (`<output_dir>/benchmark_summary.csv`)
- one manifest (`<output_dir>/benchmark_manifest.json`)
from myogait import run_single_pair_benchmark
manifest = run_single_pair_benchmark(
video_path="video.mp4",
vicon_trial_dir="/path/to/trial_01_1",
output_dir="./benchmark_out",
benchmark_config={
"models": ["mediapipe", "yolo"], # or "all"
"event_methods": "all", # or ["zeni", "gk_zeni", ...]
"normalization_variants": [
{"name": "none", "enabled": False, "kwargs": {}},
{"name": "butterworth", "enabled": True, "kwargs": {"filters": ["butterworth"]}},
],
"degradation_variants": [
{"name": "none", "experimental": {"enabled": False}},
{"name": "lowres", "experimental": {"enabled": True, "downscale": 0.7, "target_fps": 15.0}},
],
"continue_on_error": True,
},
)
print(manifest["summary_csv"])
This runner is experimental and intended only for AIM benchmark workflows.
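The CSV summary is convenient to post-process with pandas. A minimal sketch, noting that the column names depend on the configured grid:
import pandas as pd
summary = pd.read_csv(manifest["summary_csv"])
print(summary.head())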
References
- Sapiens v1: Khirodkar et al., Sapiens: Foundation for Human Vision Models, ECCV 2024 — arXiv:2408.12569
- Sapiens 2: Khirodkar et al., Sapiens 2: A Human Foundation Model, ICLR 2026 — arXiv:2604.21681
ViTPose
Vision Transformer for pose estimation (NeurIPS 2022). Top-down architecture with RT-DETR person detector. Fully pip-installable via HuggingFace Transformers.
| Variant | Size | HuggingFace repo |
|---|---|---|
| `vitpose` (base) | 90M | `usyd-community/vitpose-base-simple` |
| `vitpose-large` | 400M | `usyd-community/vitpose-plus-large` |
| `vitpose-huge` | 900M | `usyd-community/vitpose-plus-huge` |
- Paper: Xu et al., ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation, NeurIPS 2022 — arXiv:2204.12484
- Paper: Xu et al., ViTPose++: Vision Transformer for Generic Body Pose Estimation, TPAMI 2024 — arXiv:2212.04246
RTMW (Whole-Body 133 Keypoints)
Real-Time Multi-person Whole-body estimation: body (17) + feet (6) + face (68) + hands (42) = 133 keypoints. Uses rtmlib (lightweight ONNX inference, no MMPose required).
| Mode | Pose Model | Speed |
|---|---|---|
| `performance` | RTMW-x-l 384x288 | Slower, most accurate |
| `balanced` | RTMW-x-l 256x192 | Default |
| `lightweight` | RTMW-l-m 256x192 | Fastest |
- Paper: Jiang et al., RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose, 2023
- RTMW configs: MMPose cocktail14
- rtmlib: github.com/Tau-J/rtmlib
Other Pose Models
MediaPipe
Google's MediaPipe PoseLandmarker — 33 landmarks with full-body coverage. Uses the heavy model variant (most accurate). Auto-downloaded on first use.
YOLO Pose
Ultralytics YOLOv8-Pose — 17 COCO keypoints, fast single-shot detection.
- Docs: docs.ultralytics.com
HRNet / RTMPose (MMPose)
HRNet-W48 and RTMPose-m via the OpenMMLab MMPose framework — 17 COCO keypoints.
- MMPose: github.com/open-mmlab/mmpose
- HRNet: Sun et al., Deep High-Resolution Representation Learning for Visual Recognition, TPAMI 2019
- RTMPose: Jiang et al., RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose, 2023
OpenPose
CMU OpenPose — historical bottom-up baseline with 17 COCO keypoints, run through OpenCV DNN (included in the core install, so no extra dependency is needed). The model is auto-downloaded on first use (~200 MB).
- Paper: Cao et al., Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, CVPR 2017
- Code: github.com/CMU-Perceptual-Computing-Lab/openpose
AlphaPose
AlphaPose FastPose (ResNet-50) — top-down baseline widely used in biomechanics. Uses YOLO person detection + heatmap-based pose estimation. Supports both the official AlphaPose library and a fallback PyTorch reimplementation.
- Paper: Fang et al., AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time, TPAMI 2022
- Code: github.com/MVIG-SJTU/AlphaPose
Detectron2 / Keypoint R-CNN
Meta's Detectron2 with Keypoint R-CNN (ResNet-50-FPN, 3x schedule) — academic baseline. Built-in person detection and 17 COCO keypoint estimation in a single pass.
- Paper: Wu et al., Detectron2, 2019; He et al., Mask R-CNN, ICCV 2017
- Code: github.com/facebookresearch/detectron2
Tutorials & Examples
Basic Pipeline
The core workflow extracts pose landmarks from video, preprocesses the signal, computes joint kinematics, detects gait events, and derives spatio-temporal parameters.
from myogait import extract, normalize, compute_angles, detect_events
from myogait import segment_cycles, analyze_gait
# 1. Extract landmarks from video
data = extract("walking_video.mp4", model="mediapipe")
# 2. Preprocess: filter noise, handle gaps
data = normalize(data, filters=["butterworth"])
# 3. Compute joint angles (sagittal + frontal if depth available)
data = compute_angles(data)
# 4. Detect gait events (heel strikes, toe offs)
data = detect_events(data, method="gk_bike") # uses gaitkit Bayesian detector
# 5. Segment into gait cycles
cycles = segment_cycles(data)
# 6. Compute spatio-temporal parameters
stats = analyze_gait(data, cycles)
print(f"Cadence: {stats['spatiotemporal']['cadence_steps_per_min']:.1f} steps/min")
print(f"Walking speed: {stats['walking_speed']['speed_mean']:.2f} m/s")
Clinical Scores
Compute standardized gait quality indices used in clinical gait analysis laboratories worldwide.
from myogait import gait_profile_score_2d, sagittal_deviation_index
from myogait import gait_variable_scores, movement_analysis_profile
# GPS-2D: overall gait quality score (RMS deviation from normative)
gps = gait_profile_score_2d(cycles)
print(f"GPS-2D: {gps['gps_2d_overall']:.1f}")
# SDI: Sagittal Deviation Index (0-120, 100 = normal)
# Note: This is a z-score based index, NOT the GDI (Schwartz & Rozumalski 2008).
sdi = sagittal_deviation_index(cycles)
print(f"SDI: {sdi['gdi_2d_overall']:.1f}")
# Per-joint deviation scores
gvs = gait_variable_scores(cycles)
for side in ("left", "right"):
for joint, score in gvs[side].items():
print(f" {side} {joint}: {score:.1f}")
Data Quality Assessment
Evaluate landmark detection confidence and flag outliers before running the analysis pipeline.
from myogait import confidence_filter, detect_outliers, data_quality_score
from myogait import frame_coherence_score
# Filter low-confidence landmarks
data = confidence_filter(data, threshold=0.3)
# Detect and interpolate outliers
data = detect_outliers(data, z_thresh=3.0)
# Quality report
quality = data_quality_score(data)
print(f"Quality score: {quality['score']}/100")
# Per-frame coherence scoring (z-score based, adapts to any FPS)
data = frame_coherence_score(data)
print(f"Mean coherence: {data['coherence_summary']['mean']:.3f}")
print(f"Low frames: {data['coherence_summary']['low_coherence_frames']}")
Normative Comparison
Compare patient kinematics against published normative reference bands (Perry & Burnfield).
from myogait import plot_normative_comparison, get_normative_band
# Plot patient vs normative bands (Perry & Burnfield)
fig = plot_normative_comparison(data, cycles, plane="both")
fig.savefig("normative_comparison.png", dpi=150)
# Access raw normative data
band = get_normative_band("hip", stratum="adult")
mean, lower, upper = band["mean"], band["lower"], band["upper"]
Event Detection with gaitkit
myogait integrates multiple gait event detection algorithms, including Bayesian and ensemble methods from the gaitkit library.
from myogait import detect_events, event_consensus, list_event_methods
# List all available methods
print(list_event_methods())
# ['zeni', 'velocity', 'crossing', 'oconnor',
# 'gk_bike', 'gk_zeni', 'gk_ensemble', ...]
# Single method (gaitkit Bayesian BIS -- best F1 score)
data = detect_events(data, method="gk_bike")
# Consensus: vote across multiple detectors
data = event_consensus(data, methods=["gk_bike", "gk_zeni", "gk_oconnor"])
Export to OpenSim / Pose2Sim
Export landmarks and kinematics in formats compatible with OpenSim musculoskeletal modeling and Pose2Sim multi-camera triangulation.
from myogait import export_trc, export_mot, export_openpose_json
from myogait import export_opensim_scale_setup, export_ik_setup
# OpenSim .trc markers file (with height-based unit conversion)
export_trc(data, "markers.trc", opensim_model="gait2392")
# OpenSim .mot kinematics
export_mot(data, "kinematics.mot")
# OpenSim Scale Tool setup XML
export_opensim_scale_setup(data, "scale_setup.xml", model_file="gait2392.osim")
# OpenSim IK setup XML
export_ik_setup("markers.trc", "ik_setup.xml", model_file="scaled_model.osim")
# Pose2Sim: export OpenPose-format JSON for triangulation
export_openpose_json(data, "./openpose_output/", model="BODY_25")
Video Overlay & Visualization
Generate annotated videos, anonymized stick-figure animations, and publication-ready dashboards.
from myogait import render_skeleton_video, render_stickfigure_animation
from myogait import plot_summary, plot_gvs_profile
# Skeleton overlay on original video
render_skeleton_video("video.mp4", data, "overlay.mp4", show_angles=True)
# Anonymized stick figure GIF
render_stickfigure_animation(data, "stickfigure.gif")
# Summary dashboard
fig = plot_summary(data, cycles, stats)
fig.savefig("dashboard.png", dpi=150)
# MAP barplot (Movement Analysis Profile)
fig = plot_gvs_profile(gvs)
fig.savefig("gvs_profile.png", dpi=150)
PDF Report
Generate a multi-page clinical report with kinematic plots, spatio-temporal tables, and normative comparisons.
from myogait import generate_report
# Generate clinical PDF report (French or English)
generate_report(data, cycles, stats, "rapport_marche.pdf", language="fr")
Multiple Export Formats
Export analysis results to a variety of tabular and structured formats for downstream processing and archival.
from myogait import export_csv, export_excel, to_dataframe, export_summary_json
# CSV files (one per data type)
export_csv(data, "./csv_output/", cycles, stats)
# Excel workbook
export_excel(data, "gait_analysis.xlsx", cycles, stats)
# Pandas DataFrame for custom analysis
df = to_dataframe(data, what="angles")
print(df.head())
# Compact summary JSON
export_summary_json(data, cycles, stats, "summary.json")
CLI Usage
Run the full pipeline on a video:
myogait run video.mp4 # MediaPipe (default)
myogait run video.mp4 -m sapiens-quick # Sapiens 0.3B
myogait run video.mp4 -m sapiens-top --with-depth # Sapiens 1B + depth
myogait run video.mp4 -m sapiens2-top --with-depth # Sapiens 2 1B + depth
myogait run video.mp4 -m sapiens2-ultra # Sapiens 2 5B
myogait run video.mp4 -m vitpose # ViTPose
myogait run video.mp4 -m rtmw # RTMW 133 keypoints
Extract landmarks only:
myogait extract video.mp4 -m sapiens-top --with-depth --with-seg
Analyze previously extracted results:
myogait analyze result.json --csv --pdf
Optional lateral-label correction:
from myogait import correct_lateral_labels
# Default: conservative global inversion correction
data = correct_lateral_labels(data)
# Opt-in: recover isolated ankle/knee/heel swaps during crossings
data = correct_lateral_labels(data, mode="partial")
Batch process multiple videos:
myogait batch *.mp4 -o results/
Download models:
myogait download --list # list all available models
myogait download sapiens-0.3b # Sapiens v1 pose 0.3B
myogait download sapiens-depth-1b # Sapiens v1 depth 1B
myogait download sapiens2-1b # Sapiens 2 pose 1B
myogait download sapiens2-ultra # Sapiens 2 pose 5B
Inspect a result file:
myogait info result.json
API Reference
All functions operate on a single data dict that flows through the pipeline.
| Function | Description |
|---|---|
| Core pipeline | |
| `extract(video, model, experimental=None)` | Extract pose landmarks from video |
| `normalize(data, filters)` | Filter and normalize landmark trajectories |
| `compute_angles(data)` | Compute sagittal joint angles |
| `compute_frontal_angles(data)` | Compute frontal-plane angles (requires depth) |
| `detect_events(data, method)` | Detect gait events (15 methods incl. gaitkit) |
| `event_consensus(data, methods)` | Multi-method voting for robust event detection |
| `segment_cycles(data)` | Segment into individual gait cycles |
| `analyze_gait(data, cycles)` | Compute spatio-temporal parameters |
| `run_single_trial_vicon_benchmark(data, trial_dir)` | Experimental VICON alignment + metrics (single trial) |
| `run_single_pair_benchmark(video, trial_dir, output_dir, benchmark_config=None)` | Experimental benchmark grid runner (single video + single VICON trial) |
| Clinical scores | |
| `gait_profile_score_2d(cycles)` | GPS-2D: overall gait deviation (sagittal + frontal) |
| `sagittal_deviation_index(cycles)` | SDI (Sagittal Deviation Index): z-score based 0-120 index (100 = normal). Not the GDI. |
| `gait_variable_scores(cycles)` | Per-joint deviation vs normative |
| `movement_analysis_profile(cycles)` | MAP barplot data |
| Quality | |
| `confidence_filter(data)` | Remove low-confidence landmarks |
| `detect_outliers(data)` | Detect and interpolate spikes |
| `data_quality_score(data)` | Composite quality score 0-100 |
| `frame_coherence_score(data)` | Per-frame biomechanical coherence [0,1] (z-score based) |
| `fill_gaps(data)` | Interpolate missing landmark gaps |
| Analysis | |
| `walking_speed(data, cycles)` | Estimated walking speed |
| `step_length(data, cycles)` | Step/stride length estimation |
| `stride_variability(data, cycles)` | CV of spatio-temporal parameters |
| `arm_swing_analysis(data, cycles)` | Arm swing amplitude and asymmetry |
| `segment_lengths(data)` | Anthropometric segment lengths |
| `detect_pathologies(data, cycles)` | Pattern detection (equinus, antalgic, etc.) |
| Visualization | |
| `plot_summary(data, cycles, stats)` | Summary dashboard |
| `plot_normative_comparison(data, cycles)` | Patient vs normative bands |
| `plot_gvs_profile(gvs)` | Movement Analysis Profile barplot |
| `render_skeleton_video(video, data, out)` | Skeleton overlay on video |
| `render_stickfigure_animation(data, out)` | Anonymized stick-figure GIF |
| Export | |
| `export_csv(data, dir, cycles, stats)` | CSV files |
| `export_mot(data, path)` | OpenSim .mot kinematics |
| `export_trc(data, path)` | OpenSim .trc markers |
| `load_c3d(path)` | Load VICON C3D file into myogait data dict |
| `export_openpose_json(data, dir)` | OpenPose JSON for Pose2Sim |
| `export_opensim_scale_setup(data, path)` | OpenSim Scale Tool XML |
| `export_ik_setup(trc, path)` | OpenSim IK setup XML |
| `to_dataframe(data, what)` | Pandas DataFrame |
| `generate_report(data, cycles, stats, path)` | Multi-page clinical PDF report |
| `validate_biomechanical(data, cycles)` | Validate against physiological ranges |
Configuration
myogait supports YAML and JSON pipeline configuration files:
from myogait import load_config, save_config
config = load_config("pipeline.yaml")
config["filter"]["method"] = "butterworth"
config["filter"]["cutoff"] = 6.0
save_config(config, "pipeline_updated.yaml")
Experimental degradation can also be set in config (disabled by default):
extract:
model: mediapipe
experimental:
enabled: false
target_fps: null
downscale: 1.0
contrast: 1.0
aspect_ratio: 1.0
perspective_x: 0.0
perspective_y: 0.0
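A minimal sketch of wiring such a config into the Python API, assuming the extract section maps onto extract() keyword arguments as in the YAML above:
from myogait import extract, load_config
config = load_config("pipeline.yaml")
data = extract(
    "video.mp4",
    model=config["extract"]["model"],
    experimental=config["extract"]["experimental"],
)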
JSON Output Format
When using Sapiens (v1 or v2) with depth and segmentation:
{
"extraction": {
"model": "sapiens2-top",
"depth_model": "sapiens2-depth-1b",
"seg_model": "sapiens2-seg-1b",
"auxiliary_format": "goliath308",
"seg_classes": ["Background", "Apparel", "Eyeglass", "Face_Neck", "..."]
},
"frames": [
{
"frame_idx": 0,
"landmarks": { "NOSE": {"x": 0.52, "y": 0.31, "visibility": 0.95} },
"goliath308": [[0.52, 0.31, 0.95], "..."],
"landmark_depths": { "NOSE": 0.73, "LEFT_HIP": 0.45, "..." : 0.0 },
"landmark_body_parts": { "NOSE": "Face_Neck", "LEFT_HIP": "Left_Upper_Leg" }
}
]
}
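Result files are plain JSON, so they can be loaded back without myogait. A short sketch based on the structure above:
import json
with open("result.json") as f:
    data = json.load(f)
print(data["extraction"]["model"])
print(len(data["frames"]), "frames")
nose = data["frames"][0]["landmarks"]["NOSE"]
print(nose["x"], nose["y"], nose["visibility"])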
Acknowledgments
myogait is developed at the Institut de Myologie (Paris, France) by the PhysioEvalLab / IDMDataHub team.
This work is supported by:
- AFM-Téléthon — French Muscular Dystrophy Association
- Fondation Myologie — Research foundation for muscle diseases
- Téléthon — Annual fundraising event for rare disease research
Citation
If you use myogait in your research, please cite:
@software{myogait,
author = {Fer, Frederic},
title = {myogait: Markerless video-based gait analysis toolkit},
year = {2025},
institution = {Institut de Myologie, Paris, France},
publisher = {GitHub},
url = {https://github.com/IDMDataHub/myogait}
}
A peer-reviewed publication is in preparation.
License
This project is licensed under the MIT License. See LICENSE for details.