
Local AI triage for IP-camera events (Tesla Sentry, Wyze, Reolink, UniFi, Ring, Nest)


sentrytriage

Local AI triage for Tesla Sentry events. Watches your SentryClips folder, sends each event's keyframes to a vision-language model, and classifies the event so you only get notified about the ones that actually matter.

What problem this solves

Sentry mode is great in theory and exhausting in practice. A busy parking lot generates dozens of events per day; the vast majority are leaves, cats, or cars driving past. Existing OSS for Sentry footage handles either viewing (Sentry Studio, exportdash.cam) or search (SentrySearch — 4k★ in two months). The triage slot — "ignore the noise, surface the real events, give me a daily highlight reel" — is empty.

sentrytriage fills it.

How it works

TeslaUSB / USB drive
        │
        ▼
  SentryClips/
   ├── 2026-05-08_14-22-31/
   │   ├── front.mp4
   │   ├── back.mp4
   │   ├── left_repeater.mp4   (and pillars on HW4)
   │   └── ...
   │
   ▼ folder watcher
sentrytriage daemon
   │
   ├── ffmpeg keyframe extract (4 frames × 2-6 cams)
   ├── VLM classify (gpt-4o-mini / Qwen2.5-VL via Ollama / Gemini Flash)
   │     → {interesting: bool, category, subjects, caption, confidence}
   ├── persist to SQLite
   └── (v0.2) suppress boring → BoringClips/, build daily highlight reel, push notify

The classification prompt and Pydantic schema are designed for low false-positive rates: defaults to "not interesting" unless there's a real reason. Every threshold and the prompt itself live in editable files (prompts/classify.md, config.example.toml) so you can tune to your driveway.
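As a rough illustration, the verdict schema could look like the sketch below. Field names are taken from the `{interesting, category, subjects, caption, confidence}` tuple in the diagram above; the actual Pydantic model in the repo may name or constrain its fields differently.

```python
from pydantic import BaseModel, Field

class EventClassification(BaseModel):
    """Hypothetical sketch of the verdict schema; the repo's actual
    Pydantic model may differ in names and constraints."""
    interesting: bool = False                 # bias toward "not interesting"
    category: str = "other"
    subjects: list[str] = Field(default_factory=list)
    caption: str = ""
    confidence: float = Field(default=0.0, ge=0.0, le=1.0)

# The defaults encode the low-false-positive bias:
print(EventClassification().interesting)  # → False
```

Because the model never returns prose, every downstream consumer (SQLite store, notifier, reel builder) can rely on this one shape.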

Status — v0.12, alpha

What works today:

  • 4 source plugins (added in v0.4 via tesla-clip-tools v0.2): --source-type tesla|wyze|reolink|unifi. The triage engine is unchanged across sources; only the folder-layout reader differs.
  • Walk a SentryClips/ directory and parse the canonical TeslaCam folder layout (4-cam HW3 + 6-cam HW4)
  • Walk a Wyze SD-card layout (<YYYYMMDD>/<HH>/<MM>.mp4)
  • Walk a flat Reolink-export directory (3 filename patterns supported)
  • Walk a UniFi Protect export (flat or date-partitioned, with optional event-type tag)
  • Extract evenly-spaced keyframes per camera via imageio (bundles ffmpeg, no system dep)
  • Two VLM backends: OpenAI (gpt-4o-mini default) and Ollama (Qwen2.5-VL local — no API costs)
  • triage classify <event-folder> — one-off classify (Tesla layout), prints JSON verdict
  • triage watch <root> --source-type wyze --notify pushover — polling daemon that classifies new events, persists to SQLite, optionally pushes to Pushover or Telegram on every interesting event
  • triage reel — concat all interesting events from the last 24h into a single reel mp4 with caption overlays
  • triage suppress — move boring events (high-confidence false) into a sibling BoringClips/ folder. Never deletes.
  • triage notify-test — verify your notifier credentials before deploying
  • triage demo (v0.6) — triage demo-seed populates the local SQLite with ~60 deterministic synthetic events; triage demo seeds and serves the FastAPI dashboard at http://127.0.0.1:8001/, opens your browser, and lets you click through interesting/boring filters and per-event drilldowns. Lets anyone (no Tesla, no API keys, no real cameras) see what triage looks like in 30 seconds.
  • Web dashboard (v0.6) — sentrytriage.web:app (FastAPI + Jinja) renders a clean, dark dashboard with header stats, category histograms, recent-events tables, and per-event detail pages. JSON API at /api/categories and /api/events. Reads from the same SQLite the daemon writes, so it's live during triage watch.
  • Thumbs feedback (v0.7) — every event detail page has 👍 / 👎 buttons that POST to /events/{id}/feedback. Feedback lands in a sibling Feedback table (the VLM's verdict is never overwritten) so you keep a clean record of where you and the model disagreed. The dashboard now shows your agreement rate (Agreement: 86%) and per-class override counts; the new /api/feedback/stats endpoint exposes the same data for scripts.
  • Events-per-day chart (v0.7) — inline SVG bar chart on the dashboard for the last 14 days; no JS chart dep, prints fine, hover for exact counts.
  • Overrides export (v0.8) — the /overrides HTML page lists every event where you disagreed with the VLM; /api/overrides returns the same as JSON; triage export-overrides --out training-data.jsonl dumps a clean training-data file (one JSON object per override with caption, subjects, source folder, both verdicts) ready for the v0.9 prompt-tuning workflow. The dashboard now has a "Recent overrides" panel with the 5 most recent disagreements.
  • Prompt tuning (v0.9) — triage tune-prompt reads your overrides and appends them as few-shot "you got this wrong" examples under a fresh ## Examples from your overrides section in the prompt. By default it writes to prompts/classify.tuned.md so you can diff first; --apply overwrites prompts/classify.md directly. Idempotent — re-running after more overrides replaces the previous section instead of duplicating it.
  • A/B evaluation (v0.10) — triage evaluate --backend mock splits your overrides into a TRAIN set (used to tune the prompt) and a held-out TEST set (used only for evaluation). It reports baseline-vs-tuned accuracy on each, plus a per-category breakdown. The included mock classifier simulates a VLM that learns from few-shot examples in the prompt, so you can demo the entire feedback loop with no API key. The split is deterministic (--random-seed 42), and the CLI warns when the test set is too small to be meaningful — this keeps a "100% accuracy" headline from being a misleading artifact of overfitting.
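The "evenly-spaced keyframes" step above boils down to picking k frame indices across an n-frame clip. A minimal sketch (the helper name is hypothetical; the real sampler decodes frames via imageio):

```python
def keyframe_indices(n_frames: int, k: int = 4) -> list[int]:
    """Return k evenly-spaced frame indices across a clip of n_frames.
    Hypothetical helper; the project's sampler may differ in detail."""
    if n_frames <= 0 or k <= 0:
        return []
    if k == 1 or n_frames == 1:
        return [0]
    k = min(k, n_frames)            # never ask for more frames than exist
    step = (n_frames - 1) / (k - 1)  # include first and last frame
    return [round(i * step) for i in range(k)]

print(keyframe_indices(1200, 4))  # → [0, 400, 799, 1199]
```

Sampling across the whole clip (rather than, say, the first few seconds) matters for Sentry events, where the triggering subject often appears mid-clip.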

What's new in v0.11

  • Real --backend openai|ollama on triage evaluate. Up to v0.10 the evaluate command ran only against the deterministic mock classifier; now it can also score the baseline-vs-tuned prompt against the actual VLM you'll deploy. The flow is hybrid, trying keyframes first and falling back to text-only:
    • If the event's source folder still exists on disk, the classifier samples keyframes (same code path as triage classify) and calls the VLM with images + per-image captions. This is the apples-to-apples comparison.
    • If the folder is missing (e.g. you ran triage demo-seed with synthetic events, or you wiped SentryClips/ since classification), the classifier falls back to a text-only call: it hands the VLM the stored caption + subjects + category and asks it to re-derive the verdict using the same EventClassification schema. Useful for prompt-tuning iteration without paying to re-encode every video.
    • If both paths fail (no folder, no caption, or a transient API error), the classifier returns None with a warn[openai]: ... line on stdout, and compare_prompts counts that event as a miss for both prompts (so the delta isn't poisoned).
  • The CLI help text on triage evaluate --backend now lists all three backends and what each one needs.

What's new in v0.12

  • Embedded video playback. The per-event detail page now embeds an HTML5 <video> element per cam (with a lazy thumbnail strip up top, generated once via imageio and cached in a .thumbs/ sibling dir). Two new routes — GET /clips/{event_id}/(unknown) and GET /clips/{event_id}/(unknown)/thumb.jpg — stream files via FileResponse, and both reject any path that resolves outside SENTRYTRIAGE_CLIPS_ROOT (default ~/Tesla/SentryClips) with a 403. Demo mode keeps working: synthetic events with non-existent folders render a graceful "no playable videos found" placeholder instead of broken <video> tags.

What's coming next

  • Anthropic + Gemini VLM backends
  • Discord + email notifiers
  • SEI metadata-aware triage (suppress events recorded while moving, etc.)
  • Multi-source dispatch: --sources tesla,wyze --roots /Tesla/SentryClips,/wyze/SD so one daemon triages everything

Try it without a Tesla (demo mode)

uv sync
uv run triage demo

This seeds the local SQLite with ~60 synthetic Sentry events (a mix of interesting and boring across all categories) and opens http://127.0.0.1:8001/ in your browser. The data is deterministic — the same seed produces the same dashboard each time, so you can take screenshots that won't drift. The launcher scripts at the workspace root (start-triage-demo.ps1, start-triage-demo.sh) wrap this in a single double-click.
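Determinism here just means seeding the RNG once. A toy sketch of the idea (the category names and event fields are assumptions for illustration, not the project's actual seed data):

```python
import random

CATEGORIES = ["person", "vehicle", "animal", "weather", "other"]  # assumed set

def seed_events(n: int = 60, seed: int = 42) -> list[dict]:
    """Generate deterministic synthetic events, roughly what a
    demo-seed step might do. Field names are assumptions."""
    rng = random.Random(seed)  # fixed seed → identical data every run
    return [
        {
            "id": i,
            "category": rng.choice(CATEGORIES),
            "interesting": rng.random() < 0.3,  # most events are boring
            "confidence": round(rng.uniform(0.5, 0.99), 2),
        }
        for i in range(n)
    ]

events = seed_events()
assert events == seed_events()  # same seed, same dashboard, stable screenshots
```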

Quick start

# Requires Python 3.12+ and ffmpeg on your PATH.
git clone https://github.com/Raymondriter/sentrytriage.git
cd sentrytriage
uv sync
cp config.example.toml config.toml         # edit to taste

# --- Hosted (OpenAI, default) ---
export OPENAI_API_KEY=sk-...
uv run triage classify "/path/to/SentryClips/2026-05-08_14-22-31"
uv run triage watch    "/path/to/SentryClips" --notify pushover

# --- Local (Ollama, free) ---
ollama pull qwen2.5vl:7b
uv run triage watch "/path/to/SentryClips" --backend ollama --model qwen2.5vl:7b

# --- Daily reel + suppression ---
uv run triage reel --since-hours 24 --duration-seconds 60
uv run triage suppress --threshold 0.7

# --- Test notifier credentials ---
export PUSHOVER_TOKEN=... PUSHOVER_USER=...
uv run triage notify-test --backend pushover

Cost estimate (gpt-4o-mini, 4 keyframes × 2 cams = 8 images per event): roughly $0.001-0.003 per event. A busy day of 100 events is ~$0.10-0.30. The Ollama path (Qwen2.5-VL 7B on a Mac M2+) is free.
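The arithmetic behind that estimate, as a sketch. The per-image price is an assumption in the gpt-4o-mini ballpark; check current pricing before relying on it:

```python
def daily_cost(events: int, images_per_event: int = 8,
               usd_per_image: float = 0.00025) -> float:
    """Rough triage spend: events/day × images/event × price/image.
    usd_per_image is an ASSUMED ballpark figure, not a quoted price."""
    return events * images_per_event * usd_per_image

print(f"${daily_cost(100):.2f}/day for 100 events")  # → $0.20/day
```

At these magnitudes the dominant knob is images_per_event, which is why the sampler caps at 4 frames per camera.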

No Tesla yet? Generate a fixture.

uv sync --extra fixture
uv run python tools/generate_fixture.py --root tests/fixtures/SentryClips
uv run triage classify "tests/fixtures/SentryClips/2026-05-08_14-22-31"

The synthetic clips exercise the full pipeline (sampler → VLM → store → reel) but the VLM verdicts won't be meaningful — the frames are color-coded animations, not real Sentry scenes. See tests/fixtures/README.md.

Design choices worth knowing

  • Default to "not interesting". The whole point is to suppress noise. Tune the prompt down, not up.
  • Operate on output .mp4 files only. This is intentionally decoupled from the live Tesla / Fleet API surface so Tesla can't break it with a firmware push. The TeslaCam folder layout has been stable for 6+ years.
  • Source abstraction (sources/base.py). v0.2 adds sources/wyze.py, sources/reolink.py, sources/unifi.py so the same triage engine works for any IP camera output.
  • VLM backend abstraction (vlm/base.py). Swap OpenAI for Ollama / Gemini / Anthropic without touching the daemon.
  • Structured output via Pydantic. Every verdict has the same shape; the model never returns prose.
  • Never auto-delete. Suppression only moves clips between folders.

Comparison to neighbors

  • SentrySearch — natural-language search across your Sentry library. Composes with sentrytriage: yes; they're complementary (triage filters, search retrieves).
  • SentryBlur — single-clip face / plate redaction. Composes: yes; pipe interesting=true clips to SentryBlur before sharing.
  • Sentry Studio — cross-platform 6-cam viewer with SEI dashboard. Composes: yes; Studio is the viewer, triage is the notifier.
  • exportdash.cam — browser-only WebCodecs export. Composes: yes; it sits at a different layer.

This project does not compete with any of them; it sits one layer above and routes attention.

Want to help?

Open issues for false positives ("this was just a leaf") and false negatives ("this should have been flagged interesting"). The prompt in prompts/classify.md is meant to be edited, and PRs that add sources/* for other cameras (Wyze, Reolink, UniFi Protect, Ring) are very welcome.

Screenshots

The dashboard rendered against synthetic demo data (triage demo):

Dashboard with 14-day chart, agreement panel, and recent overrides

Filterable events table

Override review page — disagreements with the VLM, ready to export

For an asciinema demo of the full classify → thumb → tune → evaluate loop, see docs/asciinema/demo.cast.

Changelog

See CHANGELOG.md. The changelog follows Keep a Changelog; versions follow SemVer.

License

MIT. See LICENSE.

CI


GitHub Actions runs ruff + pytest on Python 3.12 and 3.13 on every push and PR. See .github/workflows/ci.yml. Until tesla-clip-tools is published to PyPI, the standalone CI strips the [tool.uv.sources] table and resolves it as a regular dependency; the workspace-level monorepo CI at C:\Dev\tesla\.github\workflows\ci.yml keeps using the path-editable sibling.
