# TubeMail for Claude Code

*Claude Code workers on a wire.*
TubeMail lets one Claude Code session drive another. Start a worker in any directory, and any other Claude Code session (or MCP-aware agent) can send it messages, receive replies, approve permission prompts remotely, restart it, and watch what it's doing — all over HTTP/SSE + WebSocket.
## What this is
- A hub service (FastAPI + FastMCP) that brokers events between workers and exposes a web UI on the same port.
- A channel (Claude Code plugin) that relays events between a worker's `claude` session and the hub.
- A process manager (`tubemail.manager`) that runs `claude` in a pty and handles restarts, idle detection, and harness commands (`/compact`, `/clear`, `/exit`, `/mcp` reconnects).
- A bash wrapper (`claude-tm`) that sources env vars and keeps the manager alive across restarts.
- A web UI (React + Vite) served alongside `/mcp` on the same port, with a live worker roster, a permission inbox, integrated browser terminals via a WebSocket pty bridge, saved-message templates, and optional per-worker session recording.
## What this is not
- Not an orchestrator. TubeMail ships transport plus an operator surface; it does NOT implement orchestration policy (failure routing, retry, scheduling, load balancing). Build that on top.
- Not a replacement for Claude Code's native tools. Workers are vanilla `claude` processes.
## Architecture
```
                                           ┌──────────────────┐
browser      ◀── HTTPS + WSS pty bridge ──▶│ TubeMail hub     │
                                           │ FastMCP :8004    │
                                           │ + web UI at /    │
orchestrator ◀── HTTP/SSE (MCP /mcp/) ────▶│                  │
(any MCP client)                           └────────┬─────────┘
                                                    │
                                                    │ HTTP/SSE
                                                    ▼
                                     ┌─────────────────────────────┐
                                     │ Worker session              │
                                     │ claude-tm (bash wrapper)    │
                                     │ └─ tubemail.manager (pty)   │
                                     │    ├─ tubemail-channel      │
                                     │    └─ claude --name ...     │
                                     └─────────────────────────────┘
```
One container, one port, three protocols: HTTP/HTTPS for the JSON API and MCP, SSE for forwarder event streams, WebSocket for the browser terminal pty bridge.
## Install
Two packages — one for each side.
| Side | Install | Binary / tools |
|---|---|---|
| Worker | `pip install tubemail-channel` | `claude-tm` (Python launcher → `tubemail.manager` → `claude`) |
| Hub | `pip install tubemail` | MCP server at `:8004`; `tm_*` tools; web UI |
For local development:

```shell
git clone git@github.com:Disciplin-run-org/tubemail.git
cd tubemail
pip install -e channel/ --no-build-isolation
pip install -e .[dev] --no-build-isolation
npm --prefix frontend ci
npm --prefix frontend run build
```
## Dev mode — live reload
The hub container runs `uvicorn --reload` with the host `src/` bind-mounted read-only over the image's editable-install path. Edits to `src/tubemail_hub/` on the host auto-restart the server inside the container — no rebuild needed for Python changes.
```shell
docker compose up --build tubemail   # first run — builds the image
# from here on:
#   edit src/tubemail_hub/…  → uvicorn reloads automatically
#   edit frontend/src/…      → npm --prefix frontend run build to ship the bundle
docker compose restart tubemail      # only needed for entrypoint / compose changes
```
`VERSION` and `frontend/dist` are mounted the same way, so a version bump or a frontend rebuild on the host is live without a container rebuild.
## Quickstart
1. Start the hub:

   ```shell
   echo 'TUBEMAIL_SECRET=change-me' > .env
   docker compose up -d tubemail
   ```

2. Open the web UI at http://localhost:8004. On localhost the bearer is auto-loaded; remote browsers paste the `TUBEMAIL_SECRET` value into the auth gate. The Workers tab shows the (initially empty) roster.

3. Start a worker in any project directory:

   ```shell
   pip install tubemail-channel
   cd /path/to/your/project && claude-tm
   ```

   `pip install tubemail-channel` puts `claude-tm` on your PATH. Export `TUBEMAIL_SECRET` in your shell, or drop a `.env` containing it in the project directory (or at `~/.config/tubemail/.env`) — `claude-tm` auto-loads it. The worker registers as `<dirname>-tm` and appears in the roster. Use `--role <name>` to run multiple workers per project.

4. From an orchestrator session (with the `tubemail` MCP server added to `.mcp.json`):

   ```
   tm_list_workers()
   tm_send(worker="your-project-tm", message="what's in this repo?")
   tm_wait_for_activity(worker="your-project-tm", since=<event_id>)
   tm_receive(worker="your-project-tm", since=<event_id>)
   ```
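For the orchestrator step above, a minimal `.mcp.json` entry might look like the sketch below. The `/mcp/` path and port come from the architecture notes in this README; the `type`, `url`, and `headers` fields follow Claude Code's HTTP MCP server config, and the `${TUBEMAIL_SECRET}` expansion is an assumption to verify against your Claude Code version:

```json
{
  "mcpServers": {
    "tubemail": {
      "type": "http",
      "url": "http://localhost:8004/mcp/",
      "headers": { "Authorization": "Bearer ${TUBEMAIL_SECRET}" }
    }
  }
}
```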
A step-by-step walkthrough lives in TUTORIAL.md.
## Web UI
*The Workers tab: color-coded status badges, manager indicator, context-%, recording toggle, integrated terminals — one view of every session.*
Served on the same port as the MCP server.
- Workers tab — live roster of every connected session, grouped by project. State badges (`idle` / `busy · 47s` / `waiting_permission` / `offline` / `exited`), manager-up indicator, version stamp, context-% column, recording toggle, sortable columns. Click a row to open the integrated terminal pane.
- Permissions tab — every pending tool-approval prompt across every worker, in one inbox. Keyboard allow/deny, group-by-worker, bulk allow-by-tool-name (one-shot).
- Saved Messages tab — named templates that can be sent to any worker by the operator (UI) or the orchestrator (MCP). Run logs are persisted on the hub.
- Settings tab — recording defaults (global on/off, max file size, files kept per worker), session bearer management.
- Integrated terminal pane — full xterm.js with a WebSocket pty bridge. Shift+Enter inserts a hard newline that Claude reads as "modifier held, do not submit." Ctrl+C copies a selection if one exists, otherwise sends SIGINT. Ctrl+V pastes. Ctrl+= / Ctrl+- / Ctrl+0 zoom (per-worker, persisted in localStorage). Pop-out windows tile multiple workers across the desktop.
## Recording
Optional per-worker session recording, off by default. When enabled, the hub tees the worker's pty output to two parallel files:
- `<data>/recordings/<worker>/<ts>.cast` — asciinema v2 format. Replay with `asciinema play <file>`.
- `<data>/recordings/<worker>/<ts>.frames.jsonl` — one ANSI-stripped text frame per pty chunk; what `tm_get_recording` returns. Optimized for grep and time slicing.
Files rotate when the active `.cast` exceeds the configured size limit; the oldest are GC'd so total per-worker bytes stays under `max_bytes_per_file * keep_files`.
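That rotation policy can be sketched in a few lines. This is an illustration only, not the hub's implementation; `max_bytes_per_file` and `keep_files` mirror the Settings-tab knobs:

```python
import time
from pathlib import Path

def rotate_and_gc(rec_dir: Path, active: Path,
                  max_bytes_per_file: int, keep_files: int) -> Path:
    """Roll over the active .cast when it exceeds the size limit,
    then delete the oldest files so at most keep_files remain."""
    if active.exists() and active.stat().st_size >= max_bytes_per_file:
        active = rec_dir / f"{int(time.time())}.cast"   # start a fresh file
    casts = sorted(rec_dir.glob("*.cast"), key=lambda p: p.stat().st_mtime)
    for old in casts[: max(0, len(casts) - keep_files)]:
        old.unlink()                                    # GC oldest first
    return active
```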
Toggle from the Workers tab (per-worker Rec column) or via MCP:
```
tm_recording_toggle(worker="iris-qa-tm", enabled=True)
tm_recording_status(worker="iris-qa-tm")
tm_get_recording(worker="iris-qa-tm", grep="permission", limit=50)
```
Global defaults live in the Settings tab and persist across hub restarts under `<data>/hub-config.json`.
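Reading a `.frames.jsonl` client-side might look like the sketch below; the per-line schema (a `ts` timestamp plus a `text` field) is an assumption, since the format above only specifies one ANSI-stripped frame per line:

```python
import json
import re
from pathlib import Path

def grep_frames(path: Path, pattern: str, limit: int = 50) -> list[dict]:
    """Return up to `limit` frames whose text matches the regex."""
    rx = re.compile(pattern)
    hits: list[dict] = []
    for line in path.read_text().splitlines():
        frame = json.loads(line)             # one frame per JSONL line
        if rx.search(frame.get("text", "")):
            hits.append(frame)
            if len(hits) >= limit:
                break
    return hits
```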
## Tool surface (hub MCP)
| Tool | What |
|---|---|
| **State + flow** | |
| `tm_list_workers` | Who is connected (color-coded table, project-grouped) |
| `tm_status` | `idle` / `busy` / `waiting_permission` for one worker. Trailing-inbound older than 10 min decays to idle. |
| `tm_send` | Deliver a message to a worker (or a harness command to its manager) |
| `tm_receive` | Read a worker's event timeline |
| `tm_wait_for_activity` | Block until the worker produces an event |
| `tm_my_inbox` | Worker-facing: what messages arrived while I was offline |
| `tm_interrupt` | Pause a worker |
| `tm_clear_and_send` | Atomic: `/clear` then send; avoids a race on the permission prompt |
| **Permissions** | |
| `tm_pending_permissions` | List tool-approval prompts across workers |
| `tm_respond_permission` | Allow / deny a pending permission |
| **Process control** | |
| `tm_restart` | Clean restart via `/exit` + `--continue` |
| `tm_stop` | Kill a worker |
| `tm_purge_worker` | Remove a worker's registry entry |
| `tm_keystroke` | Send raw keystrokes to a worker's pty |
| `tm_screenshot` | Read recent stdout from a worker |
| `tm_health` | CPU / memory / uptime — is the worker actually working? |
| `tm_update_manager` | Re-exec the manager process to pick up new channel code |
| `tm_reconnect_mcp` | Drive the worker's `/mcp` UI to reconnect a failed server |
| **Recording** | |
| `tm_recording_toggle` | Per-worker recording on/off |
| `tm_recording_status` | Files on disk, active file, size |
| `tm_get_recording` | Read frames filtered by time range and regex |
| **Saved messages / flows** | |
| `tm_save_flow` / `tm_list_flows` / `tm_delete_flow` | CRUD for named templates |
| `tm_run_flow` | Send a saved flow to a worker; returns a `run_id` |
| `tm_get_run_log` / `tm_finish_run` | Track flow runs |
| **Meta** | |
| `get_instructions` | Re-read the server's usage notes |
| `refresh_tools` | Pick up new tools after a hub rebuild |
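The `tm_status` decay rule (a worker whose latest event is an inbound message older than ten minutes reads as idle) could be sketched like this; the helper and its fields are illustrative, not the hub's actual API:

```python
from datetime import datetime, timedelta

DECAY = timedelta(minutes=10)

def effective_status(reported: str, last_event_kind: str,
                     last_event_at: datetime, now: datetime) -> str:
    """Hypothetical helper: a stale trailing-inbound 'busy' decays to 'idle'."""
    if (reported == "busy" and last_event_kind == "inbound"
            and now - last_event_at > DECAY):
        return "idle"
    return reported
```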
## Environment
| Var | Required | Purpose |
|---|---|---|
| `TUBEMAIL_SECRET` | yes | Shared bearer secret between hub, channel, and browser |
| `TUBEMAIL_HUB_URL` | no (default `http://localhost:8004`) | Where channels connect |
| `TM_WORKER_NAME` | no | Override the auto-derived worker name |
| `TUBEMAIL_LOG` | no (default `WARNING`) | Channel log level |
| `TUBEMAIL_LOG_FILE` | no | Path to channel log file |
| `TUBEMAIL_DATA_DIR` | no (default `/data/tubemail`) | Hub state + recordings root |
| `TUBEMAIL_DISABLE_DEV_BOOTSTRAP` | no | If `1`, the web UI never auto-loads the bearer over loopback |
| `TUBEMAIL_WORKER_PURGE_MAX_AGE_S` | no (default `3600`) | Stale-worker purge threshold (seconds) |
## Security
- Bearer auth on every hub endpoint (`TUBEMAIL_SECRET`). Constant-time compare via `hmac.compare_digest` — no timing side-channel on the secret.
- Single-use, 30 s WebSocket tickets for the pty bridge. Browsers can't set custom headers on `new WebSocket()`, so the client POSTs a bearer-authed request to `/api/pty-ticket`, gets a single-use ticket, and opens `wss:/ws/pty/<worker>?ticket=<t>`. The ticket is consumed on use and expires in 30 s.
- Worker-name validation (`^[A-Za-z0-9][A-Za-z0-9 _.-]{0,63}$`) on every `{worker}` path parameter and in the engine, so a crafted name cannot escape the state directory. Files in the workers directory whose stem does not match the pattern are ignored on startup.
- HTTPS auto-detect — drop `server.crt` + `server.key` into the data volume and the hub serves TLS. Without them it falls back to plain HTTP for localhost development.
- Generate a strong `TUBEMAIL_SECRET`: `python -c 'import secrets; print(secrets.token_urlsafe(32))'`. `scripts/heal.py` does this for you if `.env` is missing.
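The ticket flow described above can be sketched as a tiny in-memory store; class and method names here are illustrative, not the hub's code:

```python
import secrets
import time

TICKET_TTL_S = 30.0

class TicketStore:
    """Single-use, short-lived tickets for the WS pty bridge (sketch)."""
    def __init__(self) -> None:
        # ticket -> (worker, issued_at)
        self._tickets: dict[str, tuple[str, float]] = {}

    def issue(self, worker: str) -> str:
        t = secrets.token_urlsafe(32)
        self._tickets[t] = (worker, time.monotonic())
        return t

    def redeem(self, ticket: str, worker: str) -> bool:
        entry = self._tickets.pop(ticket, None)   # consumed on first use
        if entry is None:
            return False
        w, issued = entry
        return w == worker and time.monotonic() - issued <= TICKET_TTL_S
```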
Findings from the most recent security review are kept at `jjstack/security-review.md`.
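The worker-name rule is easy to reproduce in your own tooling; a minimal check against the pattern quoted above:

```python
import re

# Pattern from the security notes: alnum first char, then up to 63
# of alnum / space / underscore / dot / hyphen. No path separators.
WORKER_NAME = re.compile(r"^[A-Za-z0-9][A-Za-z0-9 _.-]{0,63}$")

def is_valid_worker_name(name: str) -> bool:
    return WORKER_NAME.fullmatch(name) is not None
```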
## Planning and design docs
The `jjstack/` directory holds design artifacts from planning sessions (office-hours, CEO review, engineering review, DX review, design review, investigations). They are checked into the repo so the decisions that shaped the code are traceable.
The current as-built design doc is `jjstack/jesper-main-design-20260425-092030.md`; its 2026-04-23 predecessor is preserved as superseded for history. The four review docs (CEO, eng, DX, design) live in `jjstack/ceo-plans/`.
## License
MIT — see LICENSE.