
Harbormaster

MCP server that routes Q&A across all your projects — locally or over SSH. Part of the FleetQ ecosystem.


What it does

You work across many projects, each with its own CLAUDE.md and Serena memories. Switching cwd loses context. Harbormaster lets one Claude Code session ask any project a question without changing directory — the project's subagent loads its own memory, answers, and returns a summary.

Optional SSH fan-out lets the same tools target remote VPS hosts. Optional FleetQ adapter makes Harbormaster a first-class citizen of the FleetQ Bridge ecosystem (Platform Tool, A2A Agent Cards, federated knowledge graph).

Tools

| Tool | Purpose | Cost |
| --- | --- | --- |
| `list_projects(host=None)` | Enumerate configured projects (local) or remote dir listing (SSH). | ~50 ms / ~1 s |
| `list_hosts()` | Configured `[hosts]` + `~/.ssh/config` Host aliases. | ~5 ms |
| `project_status(name, host=None)` | Git log, Serena memories, log tails. | ~200 ms / ~2 s |
| `ask_project(name, question, max_turns=5, host=None)` | Spawn `claude -p` in the project cwd, return a ≤ 800-word summary. | ~30 s / ~90 s |
| `delegate_task(name, task, deliverable, allow_writes=False, host=None)` | Read-only delegation; v1 fails closed for writes. | ~60 s / ~90 s |
| `fan_out_ask(question, project_filter=None, host_filter=None, max_concurrency=5, max_turns=3)` | Parallel multi-project Q&A; returns one section per target. | ~max_turns × claude_p_time × ⌈targets/max_concurrency⌉ |
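The cost formula in the last row can be turned into a quick back-of-envelope estimate. A minimal sketch (the function name and defaults are illustrative, using the ~30 s local `claude -p` figure from the table):

```python
import math

def estimate_fanout_seconds(targets: int, max_concurrency: int = 5,
                            max_turns: int = 3, claude_p_seconds: float = 30.0) -> float:
    """Rough upper bound for fan_out_ask wall time: targets run in batches
    of max_concurrency, each batch costing up to max_turns * claude_p_seconds."""
    batches = math.ceil(targets / max_concurrency)
    return max_turns * claude_p_seconds * batches

# e.g. 12 local projects at the default concurrency of 5 -> 3 batches
print(estimate_fanout_seconds(12))  # → 270.0
```

In practice batches overlap less neatly than this, so treat it as a ceiling, not a prediction.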

More tools (recall_qa, …) land in v1.1–1.2. See docs/architecture-harbormaster.md.

Install

pipx install harbormaster-mcp
# or run without install:
uvx harbormaster-mcp

Register in Claude Code:

claude mcp add --scope user harbormaster harbormaster-mcp

Or in Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "harbormaster": {
      "command": "/opt/homebrew/bin/harbormaster-mcp",
      "env": {}
    }
  }
}

Live UI (optional)

Install with the [ui] extra and run the dashboard alongside (or instead of) the MCP server:

pipx install 'harbormaster-mcp[ui]'
harbormaster-ui --port 7531
# open http://127.0.0.1:7531/

v1.0.0a4 ships:

  • Dashboard at / — project grid with framework / git / Serena / CLAUDE.md badges (HTMX + Alpine + Tailwind via CDN, no build step).
  • GET /api/projects — JSON list of every project Harbormaster discovers (use this to script your own dashboards).
  • GET /api/health — returns {"status":"ok","version":"..."} for liveness probes.

The UI is a separate process from the MCP server. Run both — they read the same TOML config so projects discovered by one are visible to the other. SSE feed of live MCP queries lands in v1.0.0a5.

HTTP / SSE transport

For remote MCP clients or running outside the desktop client, Harbormaster can speak SSE / streamable-http instead of stdio. A bearer token is required — there is no auth-disabled HTTP mode.

export HARBORMASTER_MCP_TOKEN=$(python -c 'import secrets; print(secrets.token_urlsafe(32))')
harbormaster-mcp --transport sse --host 127.0.0.1 --port 7532
# or the new MCP spec transport:
harbormaster-mcp --transport streamable-http --port 7532

Clients send the token as Authorization: Bearer <token>. Missing or wrong tokens return 401.

Override the env-var name with --auth-token-env MY_VAR if you keep secrets under a different name. Use --host 0.0.0.0 only if you understand the implications — the bearer token is the only thing between the open port and your projects.
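A minimal client-side sketch of attaching the token with only the standard library (the endpoint path here is illustrative, not a documented route):

```python
import os
import urllib.request

def sse_request(url: str, token_env: str = "HARBORMASTER_MCP_TOKEN") -> urllib.request.Request:
    """Build a request for the SSE endpoint with the bearer token attached.
    A missing token fails fast here instead of with a 401 from the server."""
    token = os.environ[token_env]  # KeyError if the token is not exported
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "text/event-stream",
    })

os.environ.setdefault("HARBORMASTER_MCP_TOKEN", "demo-token")  # illustrative only
req = sse_request("http://127.0.0.1:7532/sse")  # hypothetical endpoint path
```

A wrong token gets the same 401 as a missing one, so client-side validation like this only saves a round-trip.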

Run harbormaster-mcp --help for the full flag set.

Configure

Zero-config by default — Harbormaster discovers projects under ~/htdocs/* if it exists. For any other layout, drop a TOML file at ~/.config/harbormaster/config.toml:

[projects]
glob = ["~/code/*", "~/work/*"]
exclude = ["**/node_modules/**", "**/vendor/**"]

[hosts.friday]
ssh_host = "katsarov-server.local"
remote_htdocs = "~/htdocs"

[hosts.hetzner-1]
ssh_host = "hetzner-1.example.com"
remote_htdocs = "/var/www"

A per-project override at ./.harbormaster.toml in your cwd takes precedence over the user-level config.

Full schema and all options: docs/architecture-harbormaster.md §3.

Remote hosts

Every project-targeting tool accepts an optional host parameter. With host set, Harbormaster runs the equivalent command on that SSH host:

> ask_project(name="pricex", question="quick health check?", host="friday")
[ssh friday bash -lc 'cd ~/htdocs/pricex && claude -p ...']

Pre-flight on each remote host:

  1. Install Claude Code: npm i -g @anthropic-ai/claude-code.
  2. Authenticate once by running claude interactively (each host counts as a separate Anthropic seat).
  3. Ensure project paths exist with their CLAUDE.md / .serena/ in place.
  4. Confirm passwordless SSH from your machine (BatchMode=yes is enforced).
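Putting these notes together, the remote invocation shown above might be assembled like this. A hedged sketch, not the actual implementation, but it uses the two safeguards the docs call out: BatchMode=yes and shlex.quote:

```python
import shlex

def build_remote_command(ssh_host: str, project_dir: str, question: str) -> list[str]:
    """Sketch of the ssh line shown above. BatchMode=yes makes ssh fail fast
    instead of prompting for a password; shlex.quote guards the user-supplied
    question before it is bound into the remote shell line."""
    # project_dir is left unquoted here so a leading ~ still expands remotely
    remote = f"cd {project_dir} && claude -p {shlex.quote(question)}"
    return ["ssh", "-o", "BatchMode=yes", ssh_host, "bash", "-lc", remote]

cmd = build_remote_command("friday", "~/htdocs/pricex", "quick health check?")
```

Anything that can contain spaces or shell metacharacters must pass through shlex.quote before it reaches the remote bash -lc string.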

v1 limits

  • Read-only delegation (allow_writes=True returns an error).
  • 60 s local / 90 s remote subprocess timeout.
  • 800-word output cap (full output dumped to /tmp/harbormaster-*.md on truncation).
  • Remote list_projects returns a flat list of directory names (rich metadata is local-only — gathering it remotely would mean N round-trips).
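The word-cap behaviour can be sketched as follows (illustrative only; beyond the /tmp/harbormaster-* prefix, the real dump filename format is not specified):

```python
import tempfile

WORD_CAP = 800  # output cap from the limits above

def cap_output(text: str) -> str:
    """Return at most WORD_CAP words; on truncation, dump the full output
    to a /tmp/harbormaster-*.md file and point the caller at it."""
    words = text.split()
    if len(words) <= WORD_CAP:
        return text
    dump = tempfile.NamedTemporaryFile(
        mode="w", prefix="harbormaster-", suffix=".md", dir="/tmp", delete=False)
    with dump:
        dump.write(text)
    return " ".join(words[:WORD_CAP]) + f"\n[truncated; full output: {dump.name}]"
```

Short answers pass through untouched, so the cap only costs a file write when it actually fires.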

Status

v1.0.0a11 — end-to-end streaming sprint, shipped 2026-05-09. The FleetQ Bridge now consumes SSE from Harbormaster (the `stream=true` flag forwards `text/event-stream` bytes verbatim through Laravel's `response()->stream` with `X-Accel-Buffering: no`), and `ClaudeBackend.ask_local_stream` parses claude-code's `--output-format stream-json`, so per-token deltas can now be extracted inside the daemon (wiring them into the SSE dispatch is the a12 follow-up). v1.0.0a10 added the daemon-side SSE wire shape; a11 closes the FleetQ side and lays the backend rails. The six-week roadmap to general availability:

| Phase | Weeks | Focus |
| --- | --- | --- |
| v1.0 | 1–2 | Local + SSH + Live UI scaffold + PyPI alpha |
| v1.1 | 3–4 | FleetQ Bridge / Platform Tool / A2A integration |
| v1.2 | 5–6 | Q&A history, federated KG, auto project graph |
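For a feel of the `--output-format stream-json` parsing mentioned above, here is a hedged sketch that assumes one JSON object per line with text deltas under a `delta.text` field (the field names are illustrative; the real event shape may differ):

```python
import json
from typing import Iterable, Iterator

def iter_text_deltas(lines: Iterable[str]) -> Iterator[str]:
    """Consume newline-delimited JSON events and yield only the text deltas.
    Assumes delta events look like {"delta": {"text": "..."}}; other event
    types (and blank keep-alive lines) are skipped."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        delta = event.get("delta", {})
        if "text" in delta:
            yield delta["text"]
```

Joining the yielded deltas reconstructs the full response, which is what lets the daemon forward tokens as they arrive instead of waiting for the summary.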

See docs/design-harbormaster.md for the full design.

Lineage

Harbormaster v1.0 grew out of project-router-mcp v0.1 (2026-05-08). The v0.1 git history is preserved in this repository — the v0.1 single-file server lived at src/server.py and remains in commits prior to the v1.0 scaffolding refactor.

Architecture

Single Python process hosting an MCP server (stdio + HTTP/SSE), an embedded Live UI, and an optional FleetQ adapter. Pluggable backend per host (default: claude -p). All shell-bound strings pass through shlex.quote.

Detailed component diagrams, transport choices, and integration contract: docs/architecture-harbormaster.md.

FleetQ Bridge integration (optional)

Install with the [fleetq] extra and Harbormaster can register itself as a Bridge daemon in your FleetQ deployment, advertising its 6 MCP tools to the platform:

pipx install 'harbormaster-mcp[fleetq]'

In your config TOML:

[fleetq]
enabled = true
register_as_bridge = true
base_url = "https://app.fleetq.net"   # or your self-hosted FleetQ URL
api_token_env = "FLEETQ_API_TOKEN"    # env var holding the Sanctum token
heartbeat_interval = 30               # seconds between heartbeats

Then export your Sanctum token (must have a team:<uuid> ability) and run the MCP server:

export FLEETQ_API_TOKEN=...
harbormaster-mcp

Harbormaster shows up in your FleetQ Connections UI as harbormaster on <hostname>. v1.0.0a6 ships register + heartbeat + disconnect; the reverse-WebSocket relay channel for incoming MCP tool calls lands in v1.0.0a7+.

Discovered contract reference: docs/fleetq-bridge-contract.md.

Releasing

PyPI publishing is automated via Trusted Publishing (OIDC) — no API tokens in the repo. Tag-pushes to v* trigger .github/workflows/publish.yml. Setup steps and the release checklist live in docs/publishing.md.

License

MIT — see LICENSE.
