MTB Video Sync MVP — Iterative Ticket Flow

Align MTB GoPro runs and transfer markers using computer vision.

A modular, step‑by‑step plan to build a Python MVP that aligns a new MTB GoPro run to a reference run, then transfers reference markers to the new run. Designed for use with Claude Code or the OpenAI CLI.

Quickstart

Install from PyPI:

pip install mtbsync

The package includes helper scripts and documentation:

  • Helper scripts: examples/visual_diagnostic.sh, diagnose_alignment.sh, resync_strict.sh, dashboard_cleanup.sh
  • Documentation: QUICK_VISUAL_DIAGNOSTIC.md, examples/CI_CD_INTEGRATION.md, examples/README.md

Basic usage:

# Index reference video
mtbsync index --reference ref.mp4 --fps 3 --out ref_index.npz

# Sync new video to reference
mtbsync sync --reference ref.mp4 --new new.mp4 --index ref_index.npz --out pairs.csv --fps 3

# Transfer markers
mtbsync transfer-markers --ref-markers ref_markers.csv --timewarp-json timewarp.json --out new_markers.csv

# Export to NLE formats
mtbsync export-markers new_markers.csv --preset fcpxml
mtbsync export-markers new_markers.csv --preset premiere
mtbsync export-markers new_markers.csv --preset resolve-edl

Project Brief

Goal: Build a Python MVP that aligns a new MTB GoPro run to a reference run of the same trail, then transfers reference markers to the new run. Use computer vision (CV) with optional GPS. Export markers for NLEs (EDL/CSV).

Core pipeline

  1. Ingest: MP4 + optional GPX/FIT.
  2. Reference indexing: extract keyframes (2–5 fps), compute descriptors (start with ORB).
  3. Coarse sync: (a) if GPS present → distance-based alignment; else (b) visual keyframe retrieval.
  4. Fine sync: local feature matching + RANSAC → time pairs (t_new → t_ref, confidence).
  5. Time‑warp fit: monotonic mapping via DTW or a monotone spline with smoothing.
  6. Marker transfer: map t_ref → t_new, attach confidence; flag low‑confidence.
  7. Export: CSV + CMX3600 EDL; optional FCPXML later.
  8. Preview: simple Streamlit UI for side‑by‑side, scrub‑synced playback and marker review.

Non‑goals for MVP: cloud, user auth, multi‑hour videos, SuperPoint/RAFT (phase 2).

Tech: Python 3.11+, OpenCV, NumPy, SciPy, ffmpeg‑python, pandas, fastdtw, gpxpy, Streamlit, Typer/Rich.


Repository Layout

mtb-sync/
  README.md
  pyproject.toml            # or requirements.txt
  src/
    mtbsync/__init__.py
    mtbsync/cli.py
    mtbsync/io/video.py
    mtbsync/io/gps.py
    mtbsync/features/keyframes.py
    mtbsync/features/descriptors.py
    mtbsync/match/retrieval.py
    mtbsync/match/local.py
    mtbsync/align/timewarp.py
    mtbsync/markers/schema.py
    mtbsync/markers/transfer.py
    mtbsync/export/csv_export.py
    mtbsync/export/edl_export.py
    mtbsync/ui/app.py
  tests/
    test_timewarp.py
    test_marker_transfer.py
    test_edl_export.py
  data/
    sample_reference.mp4
    sample_new.mp4
    sample_reference_markers.csv
    sample_reference.gpx

Tickets

Work through these tickets in order. Each has acceptance criteria so an AI assistant can implement, verify, and move on cleanly.

0) Repo Scaffold

Implement

  • Create the folder structure above.
  • Configure packaging so that pip install -e . enables local development.
  • Provide a pyproject.toml (or requirements.txt) with pinned deps.

Acceptance Criteria

  • Virtual env setup works.
  • mtbsync --help available after editable install.

1) CLI Surface

Implement

  • Subcommands:

    mtbsync index --reference ref.mp4 --fps 3 --out cache/ref_index.npz
    mtbsync sync  --reference ref.mp4 --new new.mp4 \
                  [--ref-gpx ref.gpx] [--new-gpx new.gpx] \
                  --index cache/ref_index.npz --out cache/pairs.csv
    mtbsync warp  --pairs cache/pairs.csv --out cache/warp.npz
    mtbsync transfer --reference-markers data/ref_markers.csv --warp cache/warp.npz \
                     --out out/new_markers.csv --review-threshold 0.6
    mtbsync export edl --markers out/new_markers.csv --out out/new_markers.edl \
                       [--reel 001] [--fps 29.97]
    mtbsync preview --reference ref.mp4 --new new.mp4 --markers out/new_markers.csv
    

Acceptance Criteria

  • Robust arg validation and concise summaries via rich.
  • No global state; each command is independently runnable.

2) Video IO & Keyframes

Implement

  • io/video.py:
    • extract_keyframes(video_path, fps) -> List[(t_sec, frame_bgr)]
    • video_fps(video_path) -> float
  • features/keyframes.py:
    • Resize frames to max dim 960 px (preserve aspect).
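
A minimal sketch of extract_keyframes, assuming OpenCV timestamp-based seeking (slower than sequential decode, but deterministic on variable-frame-rate GoPro files); the duration guard is an assumption to bound the loop:

from typing import List, Tuple
import cv2
import numpy as np

def extract_keyframes(video_path: str, fps: float) -> List[Tuple[float, np.ndarray]]:
    """Sample frames at a fixed wall-clock rate, independent of source frame rate."""
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise IOError(f"Cannot open video: {video_path}")
    # Nominal duration bounds the loop even if seeks clamp at end-of-file.
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    duration = cap.get(cv2.CAP_PROP_FRAME_COUNT) / native_fps
    frames, t, step = [], 0.0, 1.0 / fps
    while t < duration:
        # Seek by timestamp so VFR sources sample deterministically.
        cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)
        ok, frame = cap.read()
        if not ok:
            break  # past end of stream (or decode error)
        frames.append((t, frame))
        t += step
    cap.release()
    return frames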

Acceptance Criteria

  • Deterministic timestamp sampling (handles VFR).
  • Errors handled gracefully (bad file, zero‑length, etc.).

3) Descriptors

Implement

  • features/descriptors.py with ORB (grayscale + optional CLAHE).
  • Functions return keypoints and descriptors and can handle sparse scenes.
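
One plausible shape for the descriptor function; the CLAHE parameters here are assumptions, not project defaults:

import cv2

def compute_orb(frame_bgr, n_features: int = 800, use_clahe: bool = True):
    """ORB keypoints/descriptors on grayscale, optionally contrast-equalised."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if use_clahe:
        # CLAHE lifts local contrast so sparse scenes (shade, fog) still yield corners.
        gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # descriptors is a uint8 array of shape (N, 32), or None in a featureless scene
    return keypoints, descriptors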

Acceptance Criteria

  • Target ~800 keypoints per keyframe when available.
  • Save index with timestamps, (x,y,angle,scale), and descriptors (uint8) to .npz.

4) Retrieval (Coarse Visual Alignment)

Implement

  • match/retrieval.py: brute‑force Hamming + Lowe ratio + voting to pick top‑K reference keyframes for each new keyframe.
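
A hedged sketch of the retrieval vote, assuming ORB descriptors from the index above (the helper name is illustrative):

import cv2
import numpy as np

def top_k_references(desc_new, ref_descs, k: int = 3, ratio: float = 0.75):
    """Return indices of the top-K reference keyframes by ratio-test vote count."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    votes = np.zeros(len(ref_descs))
    for i, desc_ref in enumerate(ref_descs):
        if desc_new is None or desc_ref is None:
            continue
        pairs = bf.knnMatch(desc_new, desc_ref, k=2)
        # Lowe's ratio test: keep matches clearly better than the runner-up.
        votes[i] = sum(1 for p in pairs
                       if len(p) == 2 and p[0].distance < ratio * p[1].distance)
    return np.argsort(votes)[::-1][:k]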

Acceptance Criteria

  • Output pairs_raw.csv with t_new, t_ref, score, n_inliers.
  • A 5-minute video at 3 fps completes in about 2 minutes or less on a laptop.

5) Local Refinement

Implement

  • match/local.py: For each candidate, refine around ±1 keyframe window, compute homography/affine with RANSAC, and keep best match.
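
A sketch of the RANSAC verification step, where the inlier fraction doubles as the pair confidence (the match and reprojection thresholds are assumptions):

import cv2
import numpy as np

def verify_pair(kp_new, desc_new, kp_ref, desc_ref, ratio: float = 0.75) -> float:
    """Confidence for one candidate (t_new, t_ref) pair via RANSAC homography."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = bf.knnMatch(desc_new, desc_ref, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if len(good) < 8:
        return 0.0  # not enough matches to fit a model
    src = np.float32([kp_new[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    if H is None:
        return 0.0
    return float(mask.sum()) / len(good)  # inlier fraction in [0, 1]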

Acceptance Criteria

  • Output pairs.csv with (t_new, t_ref, confidence).
  • Reject outliers by reprojection error; ≥70% valid pairs on sample data.

6) GPS Alignment (Optional but Implemented)

Implement

  • io/gps.py using gpxpy:
    • Parse GPX/FIT (start with GPX).
    • Compute cumulative distance and resample to 10 Hz.
  • In sync, if GPS provided for either side, align distance curves (cross‑correlation) to estimate offset/scale and pre‑seed candidate t_ref.
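
A minimal sketch of the cross-correlation step, assuming both cumulative-distance curves are already resampled to the same 10 Hz grid (the sign convention should be verified against known offsets):

import numpy as np

def estimate_time_offset(dist_ref: np.ndarray, dist_new: np.ndarray, hz: float = 10.0) -> float:
    """Offset (seconds) that best aligns the new distance curve to the reference."""
    a = dist_ref - dist_ref.mean()
    b = dist_new - dist_new.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag = int(xcorr.argmax()) - (len(b) - 1)  # samples by which the new curve lags
    return lag / hz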

Acceptance Criteria

  • Works with one or both GPS tracks.
  • Falls back cleanly to visual retrieval.

7) Time‑Warp Fit

Implement

  • align/timewarp.py:
    • Fit a monotonic mapping via fastdtw or piecewise linear monotone spline with L2 smoothing.
    • Expose map_t_new_to_t_ref(t_new) and inverse map_t_ref_to_t_new(t_ref).
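
A sketch of the spline variant, assuming SciPy's shape-preserving PCHIP interpolator (fastdtw would replace the fit, not the interface):

import numpy as np
from scipy.interpolate import PchipInterpolator

def fit_timewarp(t_new, t_ref):
    """Fit a monotone t_new -> t_ref mapping; return forward and inverse callables."""
    order = np.argsort(t_new)
    x = np.asarray(t_new, dtype=float)[order]
    y = np.asarray(t_ref, dtype=float)[order]
    x, idx = np.unique(x, return_index=True)  # PCHIP needs strictly increasing x
    y = np.maximum.accumulate(y[idx])         # clamp to enforce monotone t_ref
    forward = PchipInterpolator(x, y)          # map_t_new_to_t_ref
    def inverse(t):                            # map_t_ref_to_t_new
        return np.interp(t, y, x)              # piecewise linear; tolerates flat spans
    return forward, inverse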

Acceptance Criteria

  • Strict monotonicity; median residual < 0.3 s on sample data.
  • Save to warp.npz with arrays + metadata.

8) Marker Schema & Transfer

Implement

  • markers/schema.py: reference CSV schema name,t_ref,colour?,comment? (t_ref in float seconds; ? marks optional columns).
  • markers/transfer.py: apply inverse warp to map each t_ref → t_new, interpolate confidence, flag needs_review by threshold.
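
A hedged sketch of the transfer step (the handling of optional columns is an assumption):

import numpy as np
import pandas as pd

def transfer_markers(markers: pd.DataFrame, inverse_warp,
                     t_new_grid, conf_grid, threshold: float = 0.6) -> pd.DataFrame:
    """Map each t_ref to t_new, attach interpolated confidence, flag for review."""
    out = markers.copy()
    out["t_new"] = inverse_warp(out["t_ref"].to_numpy())
    # Interpolate per-pair confidence onto each transferred marker time.
    out["confidence"] = np.interp(out["t_new"], t_new_grid, conf_grid)
    out["needs_review"] = out["confidence"] < threshold
    cols = ["name", "t_new", "confidence", "needs_review", "comment"]
    return out.reindex(columns=cols)  # missing optional columns become NaN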

Acceptance Criteria

  • Output new_markers.csv with name,t_new,confidence,needs_review,comment.

9) Exports

Implement

  • export/csv_export.py: already covered by transfer CSV.
  • export/edl_export.py: write CMX3600 EDL; 1‑frame events with comments.
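
A rough sketch of one marker event; CMX3600 column layout varies slightly between NLEs, so this non-drop-frame approximation should be validated by importing into Resolve/Premiere:

def edl_event(idx: int, t_sec: float, name: str, fps: float = 29.97) -> str:
    """One-frame EDL event with a marker comment (non-drop-frame timecode)."""
    nominal = round(fps)  # 29.97 -> 30; drop-frame handling deferred
    def tc(t: float) -> str:
        f = int(round(t * fps))
        s, ff = divmod(f, nominal)
        return f"{s // 3600:02d}:{(s % 3600) // 60:02d}:{s % 60:02d}:{ff:02d}"
    start, end = tc(t_sec), tc(t_sec + 1.0 / fps)  # 1-frame duration
    return (f"{idx:03d}  001      V     C        "
            f"{start} {end} {start} {end}\n* {name}")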

Acceptance Criteria

  • EDL imports in Resolve/Premiere with visible markers.

10) Preview UI

Implement

  • ui/app.py (Streamlit):
    • Inputs: reference/new MP4, markers CSV, warp.npz.
    • Side‑by‑side players with linked scrubbing.
    • Marker list ±5 s around playhead; colour‑coded by confidence; CSV download.

Acceptance Criteria

  • streamlit run src/mtbsync/ui/app.py launches; approximate sync is acceptable.

11) Tests & Sample Data

Implement

  • Unit tests for:
    • Monotonic time‑warp and inverse mapping.
    • EDL formatting round‑trip sanity.
    • Marker transfer shape and thresholds.
  • Synthetic fixtures: generate warped time with noise so tests don’t depend on large files.
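
One way to build the synthetic fixture (the warp parameters are illustrative):

import numpy as np

def synthetic_pairs(n: int = 200, seed: int = 0):
    """Known monotone warp plus jitter: a 2% speed-up, 1.5 s offset, 50 ms noise."""
    rng = np.random.default_rng(seed)
    t_new = np.sort(rng.uniform(0.0, 300.0, n))
    t_ref = 1.02 * t_new + 1.5 + rng.normal(0.0, 0.05, n)
    return t_new, t_ref

Tests can then assert that the fitted warp recovers a ≈ 1.02 and b ≈ 1.5 within tolerance, and that forward/inverse mappings round-trip to within the noise level.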

Acceptance Criteria

  • pytest passes in < 10 s locally.

12) Dev Ergonomics

Implement

  • Makefile/justfile: setup, lint, test, run-preview.
  • Pre‑commit: ruff, black, isort.
  • README.md: quick start + example commands.

Acceptance Criteria

  • One‑command setup and test run.

Prompt Templates

Claude Code (per‑ticket)

You are a senior Python engineer. Implement the next ticket in the mtb-sync repo.
Follow the acceptance criteria exactly. Keep code modular and documented.

<TICKET NAME>: <paste ticket content>

Constraints:
- Python 3.11, no GPU assumptions.
- Pure functions where possible; no global state.
- Use 'rich' for concise logging.
- Add docstrings and type hints.
- If format ambiguities arise, decide and document in README.

After changes:
- List files changed.
- Provide example CLI usage.
- Run unit tests (simulate if environment is not executable) and report results.

OpenAI CLI (chat)

openai chat.completions.create -m gpt-4.1 \
  -g "
You are a senior Python engineer. Implement the following ticket for a project called mtb-sync. Provide complete code blocks for the mentioned files. Explain only what's necessary to run it.

<TICKET NAME + CONTENT>
"

Quick Run Guide (for README)

# 1) Setup
python -m venv .venv && source .venv/bin/activate
pip install -U pip
pip install -e .

# 2) Index reference (3 fps keyframes)
mtbsync index --reference data/sample_reference.mp4 --fps 3 --out cache/ref_index.npz

# 3) Build pairs (with or without GPS)
mtbsync sync --reference data/sample_reference.mp4 --new data/sample_new.mp4 \
             --index cache/ref_index.npz --out cache/pairs.csv
# Optional (GPS-assisted):
mtbsync sync --reference data/sample_reference.mp4 --new data/sample_new.mp4 \
             --ref-gpx data/sample_reference.gpx --new-gpx data/sample_new.gpx \
             --index cache/ref_index.npz --out cache/pairs.csv

# 4) Fit time-warp (automatic during sync, or standalone)
# Note: sync command automatically generates timewarp.json during retrieval
mtbsync warp --pairs cache/pairs.csv --out cache/warp.npz

# 5) Transfer markers (using timewarp.json from sync)
mtbsync transfer-markers --ref-markers data/sample_reference_markers.csv \
                         --timewarp-json timewarp.json \
                         --out out/new_markers.csv \
                         --plot-overlay
# Outputs:
#   - new_markers.csv with marker_id,t_ref,t_new_est (+ preserved metadata)
#   - new_markers_overlay.png preview

# 6) Export EDL
mtbsync export edl --markers out/new_markers.csv --out out/new_markers.edl --fps 29.97

# 7) Preview UI
streamlit run src/mtbsync/ui/app.py

Performance

Parallel Retrieval

Use --threads to enable multi-threaded frame matching:

mtbsync sync --reference ref.mp4 --new new.mp4 --index ref.npz --out pairs.csv --threads 4

Fast Preset for Bulk Jobs

Use --fast to auto-tune parameters for large-scale processing:

mtbsync sync --reference ref.mp4 --new new.mp4 --index ref.npz --out pairs.csv --fast

The --fast preset automatically:

  • Sets threads >= 4 (parallel retrieval)
  • Tunes warp parameters for speed (relaxed RANSAC iterations/thresholds)

Timing Information

The sync command prints per-stage timings:

  • retrieval_sec - Frame matching time
  • warp_sec - Time-warp fitting/gating time
  • markers_sec - Marker auto-export time
  • total_sec - End-to-end pipeline time

Batch processing writes timings to batch_summary.csv for analysis across multiple pairs.

⚡ Performance Benchmarks

Stage            Mean Time (s)   Notes
GPS Alignment    0.28            Vectorised (np.interp) fast-path
Retrieval        1.42            4 threads (ThreadPoolExecutor)
Warp Fit         0.06            RANSAC + IRLS refinement
Marker Export    0.11            CSV → JSON + overlay
Total            1.87 ± 0.15     Typical 1080p pair (~8k frames)

💡 Use --threads 4 or --fast for large jobs. batch_summary.csv records per-stage timings for every pair.

Dashboard

Launch a local, zero-dependency dashboard to inspect artefacts:

# read-only
mtbsync dashboard --root ./batch_out --port 8000

# enable server-side export
mtbsync dashboard --root ./batch_out --port 8000 --allow-write

Features:

  • Marker selector (multiple new_markers*.csv)
  • Timing sparklines from batch_summary.csv
  • Download artefacts; optional POST /api/export-json when --allow-write is set
  • Live telemetry updates via Server-Sent Events (/api/perf/stream)

🧩 Threaded Dashboard Server

As of v0.10.4, the dashboard uses a threaded HTTP server to support long-lived Server-Sent Event (SSE) connections. This allows /api/perf/stream (the live telemetry feed) to run continuously while other endpoints (e.g. /api/files, /api/markers, /api/timewarp) remain responsive.

Key details:

  • The dashboard runs via ThreadingHTTPServer with daemon_threads=True for safe shutdown
  • Multiple browser tabs or connected clients can receive live telemetry simultaneously
  • Existing single-threaded usage is now fully backward-compatible
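
A minimal sketch of the pattern, not the project's actual handler (endpoint behaviour is simplified):

from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/perf/stream"):
            # Long-lived SSE response; it occupies only its own thread,
            # so /api/files and friends stay responsive on other threads.
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.end_headers()
            self.wfile.write(b"data: {}\n\n")
        else:
            self.send_response(200)
            self.end_headers()

server = ThreadingHTTPServer(("127.0.0.1", 8000), Handler)
server.daemon_threads = True  # worker threads exit with the main process
# server.serve_forever()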

Usage example:

mtbsync dashboard --root ./batch_out --port 8000

Then open http://localhost:8000 — the telemetry table will update live as new runs finish.

🚀 GPU & Extended Telemetry

From v0.10.5, mtbsync records extended runtime metrics in perf.json:

  • CPU% and RSS memory (MB) when psutil is available
  • GPU utilisation (%) and VRAM used (MB) when NVIDIA NVML (pynvml) is available
  • Metrics surface in the dashboard table and update live via SSE

Telemetry is best-effort and never blocks the pipeline. To disable GPU probing:

# Disable GPU telemetry for sync
mtbsync sync ... --no-gpu

# Disable GPU telemetry for batch
mtbsync batch input_dir ... --no-gpu

Note: psutil/pynvml are optional. If not installed, GPU/CPU fields are omitted or set to null.
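
A sketch of what best-effort probing typically looks like (the function name and field names are illustrative, not mtbsync's internals):

def probe_telemetry() -> dict:
    """Collect CPU/GPU metrics if the optional libraries are importable."""
    metrics = {"cpu_pct": None, "rss_mb": None, "gpu_util": None, "gpu_mem_mb": None}
    try:
        import psutil
        metrics["cpu_pct"] = psutil.cpu_percent(interval=None)
        metrics["rss_mb"] = psutil.Process().memory_info().rss / 1e6
    except Exception:
        pass  # never block the pipeline on telemetry
    try:
        import pynvml
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        metrics["gpu_util"] = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
        metrics["gpu_mem_mb"] = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1e6
    except Exception:
        pass  # no NVIDIA GPU, or pynvml not installed
    return metrics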

Telemetry Retention (perf.json)

For large batch runs, you can prune older telemetry artefacts:

CLI

# Keep newest 200 perf.json files under ./batch_out
mtbsync prune-perf --root ./batch_out --keep 200

# Dry-run first to see what would be deleted
mtbsync prune-perf --root ./batch_out --keep 200 --dry-run

Dashboard (requires --allow-write)

mtbsync dashboard --root ./batch_out --allow-write

Open the dashboard and use the Prune perf.json button to keep the newest N files. All operations are constrained to the selected root.

Streamlit Telemetry UI

Launch a rich, interactive telemetry dashboard using Streamlit:

mtbsync telemetry-ui --root ./batch_out

Features:

  • Interactive time-series charts for FPS, CPU%, GPU utilisation, VRAM, and RSS memory
  • Filtering controls (time range slider, recent N runs)
  • Smoothing toggle for cleaner trend visualisation
  • CSV export of filtered telemetry data
  • Optional live updates (experimental)
  • Clean sidebar layout with metric selection

Installation:

Requires optional dependency:

pip install "mtbsync[telemetry]"

Usage example:

# Launch on default port (8501)
mtbsync telemetry-ui --root ./batch_out

# Custom port
mtbsync telemetry-ui --root ./batch_out --port 8502

The Streamlit UI complements the built-in HTML dashboard from mtbsync dashboard, providing richer interactivity and data exploration capabilities.

Side-by-Side Video Comparison UI

Review and annotate two videos side-by-side with synced markers and delta inspection:

mtbsync compare-ui --ref ref.mp4 --new new.mp4 --ref-markers ref_markers.csv --timewarp-json timewarp.json

Features:

  • Dual video panes (reference + new) for visual comparison
  • Marker overlay with timeline visualization
  • Delta summary statistics (MAE, P90, median, count)
  • Interactive search and filter for markers
  • Annotation system: add notes and labels to markers
  • Export annotations to CSV or JSON

Installation:

Requires optional dependency:

pip install "mtbsync[compare]"

Usage examples:

# Full comparison with all inputs
mtbsync compare-ui --ref ref.mp4 --new new.mp4 \
  --ref-markers ref_markers.csv \
  --new-markers new_markers.csv \
  --timewarp-json timewarp.json

# Launch from batch output directory
mtbsync compare-ui --root ./batch_out

# Custom port
mtbsync compare-ui --ref ref.mp4 --new new.mp4 \
  --ref-markers ref_markers.csv --port 8502

Workflow:

  1. Configure file paths in the sidebar (or pass via CLI)
  2. Click "Load Data" to import videos and markers
  3. Review marker deltas in the timeline chart
  4. Select markers to inspect details and jump to timestamps
  5. Add annotations (notes/labels) for quality review
  6. Export annotations for documentation or QA workflows

The comparison UI helps validate sync quality by showing actual vs predicted marker times, computing deltas, and allowing visual inspection of both videos at critical moments.

Tips:

  • CLI pre-population: Arguments passed via CLI (--ref, --new, etc.) automatically pre-fill the sidebar fields through environment variables
  • Headless launch: Use --no-browser flag to prevent auto-opening browser (useful for remote servers)
  • Multiple instances: Use different --port values to run multiple comparison sessions simultaneously

Time-series (CPU/GPU/FPS)

From v0.10.7, the dashboard includes lightweight time-series charts for performance metrics:

  • FPS (frames/sec) — computed from frames_processed / retrieval_sec when not present
  • CPU % — CPU utilisation percentage
  • GPU Util % — GPU utilisation percentage (requires NVML)
  • VRAM (MB) — GPU memory used (requires NVML)
  • RSS Memory (MB) — resident set size

How it works:

  • Charts pull from the last 500 perf.json artefacts (configurable via query parameter)
  • Data is ordered oldest→newest for temporal visualisation
  • Null values are skipped (shown as gaps in the chart)
  • No external JavaScript libraries required — pure inline SVG

API endpoint:

curl "http://localhost:8000/api/perf/history?limit=500&fields=fps,cpu_pct,gpu_util,gpu_mem_mb,rss_mb"

Note: GPU metrics require optional NVML (pynvml) at runtime. If unavailable, GPU charts show gaps or "(no data)".


Benchmark Quality

Evaluate sync quality across many runs (e.g., batch outputs) and generate reports in CSV, JSON, and HTML.

Usage

# Basic usage — generates CSV + JSON by default
mtbsync benchmark-quality --root batch_out

# Include HTML report with inline charts
mtbsync benchmark-quality --root batch_out --html

# Custom output directory and fields
mtbsync benchmark-quality --root batch_out --out benchmark_reports --fields fps,cpu_pct,gpu_util,gpu_mem_mb,rss_mb

# Limit to 500 most recent runs
mtbsync benchmark-quality --root batch_out --limit 500 --html

# Show strict evaluation in summary
mtbsync benchmark-quality --root batch_out --strict

What It Does

The benchmark command:

  1. Discovers run folders under the specified root directory (sorted by modification time, newest first)
  2. Loads artefacts from each run:
    • timewarp.json (required) — time-warp fit quality metrics
    • perf.json (optional) — performance metrics (FPS, CPU, GPU, etc.)
    • pairs.csv or pairs_raw.csv (optional) — residual fallback computation
  3. Aggregates metrics:
    • Time-warp quality: ppm, inlier_frac, res_mae, res_p90
    • Performance: fps, cpu_pct, gpu_util, gpu_mem_mb, rss_mb
    • Success rates: ok (basic), strict_ok (strict criteria)
  4. Generates reports:
    • CSV: Detailed table with all metrics
    • JSON: Machine-readable format for automation
    • HTML: Self-contained report with KPIs and inline SVG charts

Strict Quality Criteria

When using --strict, runs are evaluated against production-grade gates:

  • Inlier fraction ≥ 0.50 (50% of pairs support the fit)
  • P90 residual ≤ 1.0 seconds (90th percentile within 1s)
  • PPM drift ≤ 500 (0.05% speed variation)

Runs passing all three criteria are marked strict_ok=True.
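
The gate reduces to a simple conjunction; a sketch assuming the timewarp.json field names shown later in this README:

def strict_ok(m: dict) -> bool:
    """True when a run passes all three production-grade gates."""
    return (m.get("inlier_frac", 0.0) >= 0.50
            and m.get("residuals", {}).get("p90", float("inf")) <= 1.0
            and abs(m.get("ppm", float("inf"))) <= 500)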

HTML Report Features

When --html is specified, the report includes:

  • KPI Dashboard: Total runs, OK%, Strict OK%, Mean P90, Mean PPM
  • Distribution Charts: Inline SVG histograms for P90 and PPM buckets
  • Detailed Table: Sortable table with all metrics per run
  • Self-contained: No external dependencies, works offline

Example Output

Benchmark Summary
  Total runs: 50
  OK: 47 (94.0%)
  Strict OK: 38 (76.0%)
  Mean P90: 0.45s
  Mean PPM: 234

Outputs:
  CSV: benchmark_out/benchmark_quality.csv
  JSON: benchmark_out/benchmark_quality.json
  HTML: benchmark_out/benchmark_quality.html

When to Use

  • QA workflows: Validate batch processing quality before delivering results
  • Parameter tuning: Compare sync quality across different configurations
  • Continuous integration: Generate quality reports as part of automated pipelines
  • Performance analysis: Track FPS, CPU, GPU trends across many runs
  • Documentation: Generate HTML reports for stakeholders or audits

Advanced Options

# Custom pattern for run folders
mtbsync benchmark-quality --root batch_out --pattern "run_2024*"

# Quiet mode (suppress console output, just write files)
mtbsync benchmark-quality --root batch_out --quiet

# Generate only JSON (skip CSV)
mtbsync benchmark-quality --root batch_out --no-csv --json

Exit Codes

  • 0: Success
  • 1: Unexpected error
  • 2: No runs discovered in root directory

Troubleshooting & Quality Tuning

Interpreting Time-Warp Quality

After running mtbsync sync, inspect timewarp.json to assess alignment quality:

{
  "ok": true,
  "model": "affine",
  "params": {"a": 1.0, "b": 0.0},
  "ppm": 0,
  "inlier_frac": 0.33,
  "residuals": {"mae": 0.89, "p50": 0.72, "p90": 2.0, "p95": 2.5}
}

Quality indicators:

  • a=1, b=0 → Identity warp (no temporal scaling/offset)
  • ppm=0 → Zero drift from 1:1 playback (consistent with identity)
  • inlier_frac=0.33 → Only 33% of pairs supported the model during RANSAC
  • P90=2.0s → 90th percentile residual is 2 seconds (⚠️ weak alignment)

Why ok:true despite poor quality?

Default gates are permissive:

  • max_ppm=1000 (allows up to 0.1% speed drift)
  • min_inlier_frac=0.25 (accepts models with only 25% support)

With 33% inliers and an identity warp, it technically passes. However, a P90 residual of 2 s means one pair in ten is off by more than 2 seconds, so transferred markers can easily land ±2 s from their true positions.

Tightening Quality Gates

For stricter alignment requirements, use these parameters:

mtbsync sync \
  --reference ref.mp4 \
  --new new.mp4 \
  --index cache/ref_index.npz \
  --out cache \
  --warp-window-sec 0.3 \
  --warp-inlier-thresh 0.05 \
  --warp-min-inlier-frac 0.5 \
  --warp-ransac-iters 1000 \
  --warp-max-ppm 500

What each flag tightens:

  • --warp-window-sec 0.3: tighter acceptance window (default is wider)
  • --warp-inlier-thresh 0.05: stricter inlier definition (50 ms at 3 fps)
  • --warp-min-inlier-frac 0.5: require 50% support (up from 0.25)
  • --warp-ransac-iters 1000: more iterations for noisy data
  • --warp-max-ppm 500: tighter drift tolerance (0.05% vs 0.1%)

Expected outcome: With P90=2s, these settings will likely produce ok:false, which is useful signal—the alignment isn't reliable for that pair.

Visual Inspection

Generate a scatter plot overlay to see alignment quality:

# Headless mode (generates PNG)
mtbsync viewer --headless \
  --ref-markers ref_markers.csv \
  --timewarp-json timewarp.json \
  --out-png alignment_overlay.png

# Interactive GUI (file picker dialogs)
mtbsync viewer

The overlay shows:

  • Diagonal line = perfect alignment
  • Deviations = temporal misalignment
  • Clusters = consistent offset regions

Parameter Tuning Guidelines

Common symptoms, likely causes, and suggested fixes:

  • Low inlier_frac (<0.4): likely poor feature matches or scene cuts. Fix: increase --warp-ransac-iters to 1000 and tighten --warp-min-inlier-frac to 0.5.
  • High ppm (>500): the videos may play at different speeds. Fix: check the source material and reduce --warp-max-ppm to 500.
  • High P90 (>1 s): inconsistent temporal alignment. Fix: reduce --warp-window-sec to 0.3 and increase --warp-inlier-thresh to 0.05.
  • Identity warp (a=1, b=0) with high residuals: visual features don't match the temporal structure. Fix: add GPS data, or manually verify both videos are from the same track.

Batch Quality Review

Use the dashboard to quickly review alignment quality across many pairs:

mtbsync dashboard --root ./batch_output --port 8000

In the Batch Summary card, look for:

  • timewarp_ok=true with P90 < 0.5s → good alignment
  • ⚠️ timewarp_ok=true with P90 > 1.0s → weak alignment (investigate)
  • timewarp_ok=false → alignment failed gates (expected for dissimilar runs)

Phase 2 (Later Tickets)

  • Add SuperPoint + SuperGlue path (--matcher super).
  • RAFT optical flow refinement around matched frames.
  • FCPXML export for Final Cut; Premiere Pro XML.
  • Manual anchor pairs UI to re‑fit time‑warp interactively.
  • SQLite cache for descriptor indexes and run metadata.
  • Basic metrics dashboard (pair coverage, residuals, confidence histogram).

Why an iterative ticket flow?
It enforces small, testable steps with explicit acceptance criteria, which yields cleaner commits, faster debugging, and easier refactors as the project grows.
