
Agent Relay & Coordination — local-first multi-agent coordination hub from Megastructure


Arc

Local-first agent coordination over HTTP and SQLite, with a file-relay mode for sandboxed agents that cannot reach host localhost or safely use SQLite on the shared mount.

This repo ships one canonical implementation: arc.py.

What It Supports

arc.py provides:

  • a local HTTP coordination hub with optional network access for cross-machine collaboration
  • SQLite-backed persistence
  • sessions, channels, messages, claims, locks, tasks, inbox, and thread views
  • agent capability discovery and filtering
  • structured task request/result flow with automatic task lifecycle
  • SSE (Server-Sent Events) streaming for real-time message push
  • an MCP server for native AI agent tool integration
  • agent-to-agent RPC (synchronous request/response)
  • one-call quickstart() client for fast agent onboarding
  • an HTML dashboard at GET /
  • a host-side relay for constrained sandboxes
  • a deterministic smoke runner for validating mixed HTTP and relay agents

Starting & Stopping

Start the hub (idempotent — safe to run multiple times):

python arc.py ensure

The hub runs in the background on http://127.0.0.1:6969. Open that URL in a browser to see the live dashboard.

The relay for sandboxed agents starts automatically alongside the hub — no extra commands needed.

Stop the hub (and relay):

python arc.py stop

Stop the hub and delete all data (sessions, messages, claims, locks, tasks):

python arc.py reset

All commands accept --host, --port, --storage, and --spool-dir flags if you're not using the defaults.

Talking To The Hub Without curl

Arc ships a small CLI built on the ArcClient class so you never need to hand-roll HTTP requests:

python arc.py post   --agent me "hello from the cli"
python arc.py post   --agent me --to teammate "private ping"
python arc.py poll   --agent me --timeout 30
python arc.py whoami --agent me

poll defaults to exclude_self=true (you will not see your own messages echoed back) and uses long-poll. post --agent me implicitly registers the session with replace=true, which will evict any bot already running under that agent_id — use a distinct id when interleaving with a live agent.

For programmatic use:

import arc
client = arc.ArcClient("my-agent")
client.register(display_name="My Agent")
client.post("general", "hello")
for msg in client.poll(timeout=30):
    ...

Sandboxed agents that cannot reach 127.0.0.1 use the same class with a different constructor — everything else is identical:

import arc
client = arc.ArcClient.over_relay("sandboxed-agent", spool_dir=".arc-relay")
client.register()
client.post("general", "hello from the sandbox")
for msg in client.poll(timeout=30):     # still exclude_self by default, still tracks since_id
    ...

The host must already be running python arc.py ensure; the relay thread starts automatically as part of the hub. Hub-level errors (400, 404, 409) round-trip through the relay as arc.ArcError with the original error text intact.

Using Arc On Windows

On fresh Windows 11 installs, python is often aliased to the Microsoft Store shim and will not run the script. Use the official launcher instead:

py -3 arc.py ensure
py -3 arc.py post --agent me "hello"
py -3 arc.py poll --agent me --timeout 30

On Windows 10/11, curl inside PowerShell is an alias for Invoke-WebRequest rather than the real curl binary, and it mangles UTF-8 in -d payloads. Two reliable workarounds:

  1. Use PowerShell's native Invoke-RestMethod:
    Invoke-RestMethod -Method Post -Uri http://127.0.0.1:6969/v1/messages `
      -ContentType 'application/json' `
      -Body '{"from_agent":"me","channel":"general","body":"hi"}'
    
  2. Or write the JSON to a file and use curl --data-binary:
    Set-Content -Path msg.json -Value '{"from_agent":"me","channel":"general","body":"hi"}' -Encoding utf8
    curl.exe --data-binary "@msg.json" -H "Content-Type: application/json" http://127.0.0.1:6969/v1/messages
    

Better yet, skip curl and use the built-in CLI: py -3 arc.py post --agent me "hi" handles quoting and encoding correctly on every shell.

Network Mode

Arc can accept connections from other machines on your local network. This lets agents on different computers collaborate through the same hub.

Start with remote access enabled:

python arc.py serve --host 0.0.0.0 --allow-remote
# or with ensure:
python arc.py ensure --allow-remote

Toggle remote access at runtime from the dashboard (/network on or /network off) or via the API:

curl -X POST http://192.168.1.100:6969/v1/network -H "Content-Type: application/json" -d '{"allow_remote": true}'

Remote agents connect using the hub machine's LAN IP instead of 127.0.0.1.

Agent Capability Discovery

Agents can declare capabilities when registering, and other agents can search for peers by capability:

# Register with capabilities
curl -X POST http://127.0.0.1:6969/v1/sessions -H "Content-Type: application/json" \
  -d '{"agent_id": "reviewer", "capabilities": ["code_review", "testing"]}'

# Find agents that can do code review
curl "http://127.0.0.1:6969/v1/agents?capability=code_review"

This enables self-organizing workflows where agents find the right peer for a task without hardcoded IDs.
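The same lookup works from Python with only the standard library. A minimal sketch, assuming a running hub; only the /v1/agents?capability= endpoint comes from the example above, and the helper names are illustrative:

```python
import json
import urllib.parse
import urllib.request

def agents_url(capability, base_url="http://127.0.0.1:6969"):
    """Build the capability-discovery URL for the hub."""
    query = urllib.parse.urlencode({"capability": capability})
    return f"{base_url}/v1/agents?{query}"

def find_agents(capability, base_url="http://127.0.0.1:6969"):
    """Ask the hub for live agents advertising `capability`."""
    with urllib.request.urlopen(agents_url(capability, base_url)) as resp:
        return json.load(resp)

print(agents_url("code_review"))
# reviewers = find_agents("code_review")  # requires a running hub
```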

Structured Task Request/Result

Two message kinds provide a first-class request/response pattern for agent-to-agent work:

# Agent A posts a task request (auto-creates a task entry)
curl -X POST http://127.0.0.1:6969/v1/messages -H "Content-Type: application/json" \
  -d '{"from_agent": "alice", "channel": "general", "kind": "task_request", "body": "Please review auth.py"}'
# Returns message with id: 42, task auto-created with task_id: 42

# Agent B posts the result (auto-completes the linked task)
curl -X POST http://127.0.0.1:6969/v1/messages -H "Content-Type: application/json" \
  -d '{"from_agent": "bob", "channel": "general", "kind": "task_result", "reply_to": 42, "body": "Looks good, approved"}'
# Task 42 auto-completes

Additional recognized message kinds: status_update, code_review (no special handling, but validated so agents can use them as conventions).
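The payload fields in the flow above come straight from the curl examples; the helper below is a hypothetical convenience (not part of arc.py) showing how an agent might build the reply:

```python
def task_result_for(request_msg, from_agent, body):
    """Build a task_result payload answering a task_request message.
    Posting it to /v1/messages auto-completes the linked task."""
    return {
        "from_agent": from_agent,
        "channel": request_msg["channel"],
        "kind": "task_result",
        "reply_to": request_msg["id"],  # links back to the request's message id
        "body": body,
    }

request_msg = {"id": 42, "channel": "general", "kind": "task_request",
               "from_agent": "alice", "body": "Please review auth.py"}
payload = task_result_for(request_msg, "bob", "Looks good, approved")
```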

SSE Streaming

Real-time push alternative to polling. The hub keeps the connection open and sends new messages as server-sent events:

# Stream all messages visible to an agent
curl -N "http://127.0.0.1:6969/v1/stream?agent_id=myagent"

# Stream specific channels, starting from a known point
curl -N "http://127.0.0.1:6969/v1/stream?agent_id=myagent&channels=general,dev&since_id=100"

Stays within Arc's zero-dependency philosophy (SSE is plain HTTP). The agent's session is automatically kept alive while streaming. Long-poll remains available as a fallback.
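Consuming the stream needs no extra dependencies either. The sketch below parses the data: lines of an SSE stream using only the standard library; the exact event payload shape is an assumption, only the endpoint comes from the examples above:

```python
import json

def iter_sse_events(lines):
    """Yield decoded JSON payloads from an iterable of SSE text lines.
    Per the SSE format, `data:` lines accumulate and a blank line ends an event."""
    buf = []
    for line in lines:
        line = line.rstrip("\r\n")
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())
        elif line == "" and buf:
            yield json.loads("\n".join(buf))
            buf = []

# In practice `lines` would come from the open HTTP response, e.g.:
#   resp = urllib.request.urlopen("http://127.0.0.1:6969/v1/stream?agent_id=myagent")
#   for event in iter_sse_events(l.decode() for l in resp): ...
sample = ['data: {"id": 101, "channel": "general", "body": "hi"}', ""]
events = list(iter_sse_events(sample))
```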

MCP Server

Arc can run as an MCP (Model Context Protocol) server, giving AI agents native tool access without curl or HTTP knowledge:

python arc.py mcp --agent my-agent --base-url http://127.0.0.1:6969

This exposes six tools over stdio (JSON-RPC 2.0):

  • arc_post_message: Post a message to a channel
  • arc_poll_messages: Poll for new messages
  • arc_dm: Send a direct message
  • arc_list_agents: List live agents
  • arc_create_channel: Create a channel
  • arc_rpc_call: Send a task_request and wait for the result
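Because the transport is JSON-RPC 2.0 over stdio, a tool invocation is a single JSON frame. A sketch of a tools/call frame: the tool name is from the table above, while the argument names are an assumption:

```python
import json

# One MCP tools/call frame, written to the server's stdin.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a tool
    "params": {
        "name": "arc_post_message",
        "arguments": {"channel": "general", "body": "hello over MCP"},
    },
}
frame = json.dumps(call)
```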

Setting Up MCP in Claude Code

Create .mcp.json in the Arc project directory:

{
  "mcpServers": {
    "arc": {
      "command": "python",
      "args": ["arc.py", "mcp", "--agent", "my-agent", "--base-url", "http://127.0.0.1:6969"],
      "cwd": "/path/to/arc"
    }
  }
}

Restart your Claude Code session in the Arc directory. The arc_* tools will be available natively.

For remote agents, change --base-url to the hub machine's LAN address (e.g. http://192.168.1.100:6969).

Quickstart (Programmatic)

One-call client setup for agents that want to get connected fast:

import arc
client = arc.ArcClient.quickstart("my-agent", "http://192.168.1.100:6969",
                                   capabilities=["coding", "review"])
client.post("general", "ready to work")

This creates the client, registers the session (with replace=True), and returns a ready-to-use instance.

Agent-to-Agent RPC

Synchronous request/response between agents using the task_request/task_result lifecycle:

import arc
client = arc.ArcClient.quickstart("alice")

# Blocks until bob posts a task_result, or times out
result = client.call("bob", "Please review auth.py", timeout=30)
print(result["body"])  # bob's response

Under the hood this posts a task_request directed at the target agent and polls for a matching task_result.
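The matching rule that call relies on can be sketched as a pure function. This is an illustration of the lifecycle described above, not the actual arc.py internals; field names follow the task_request/task_result examples earlier:

```python
def completes_request(msg, request_id):
    """True if `msg` is the task_result answering the given task_request id."""
    return msg.get("kind") == "task_result" and msg.get("reply_to") == request_id

# The caller's poll loop would scan incoming messages for the first match:
inbox = [
    {"id": 50, "kind": "chat", "body": "unrelated chatter"},
    {"id": 51, "kind": "task_result", "reply_to": 42, "body": "Looks good, approved"},
]
result = next((m for m in inbox if completes_request(m, 42)), None)
```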

Choose The Right Mode

Mode 1: Single Hub

Use this when all agents can reach the same local HTTP server.

Start the hub:

python arc.py ensure

Default URL:

http://127.0.0.1:6969

Mode 2: Shared-Filesystem Multi-Hub

Use this when agents cannot reach each other's localhost, but each environment can:

  • run its own local process
  • use the same SQLite file
  • rely on the shared filesystem to support SQLite WAL and locking correctly

Example:

# Sandbox A
python arc.py --port 6969 --storage /shared/arc.sqlite3

# Sandbox B
python arc.py --port 9876 --storage /shared/arc.sqlite3

Each sandbox talks only to its own local hub. All hubs share the same coordination state through the same SQLite file.
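The WAL requirement in the bullets above is the usual failure point: network filesystems often break SQLite's locking. A quick stdlib probe (a sketch; in Mode 2 you would run it from each sandbox against the shared directory rather than a temp dir):

```python
import os
import sqlite3
import tempfile

def supports_wal(directory):
    """Return True if SQLite can switch a database in `directory` to WAL mode."""
    path = os.path.join(directory, ".arc-wal-probe.sqlite3")
    try:
        conn = sqlite3.connect(path)
        mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
        conn.close()
        return mode.lower() == "wal"
    finally:
        # clean up the probe database and any WAL side files
        for suffix in ("", "-wal", "-shm"):
            try:
                os.remove(path + suffix)
            except FileNotFoundError:
                pass

probe_dir = tempfile.mkdtemp()  # stand-in for the shared mount, e.g. /shared
print(supports_wal(probe_dir))
```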

Mode 3: Sandbox Relay

Use this when the sandbox can write ordinary files into the shared workspace, but cannot:

  • reach the host machine's 127.0.0.1
  • safely use SQLite directly on the shared mount

In this mode:

  1. the host runs Arc (the relay starts automatically as a background thread)
  2. the sandbox writes request files into the relay spool directory
  3. the relay forwards those requests to the HTTP hub
  4. the relay writes response files back for the sandbox to read

Start the host side:

python arc.py ensure

The relay is built in and starts automatically. To use a custom spool directory:

python arc.py ensure --spool-dir .arc-relay

The relay is intentionally append-only:

  • requests land under requests/<agent>/
  • responses land under responses/<agent>/
  • processed requests are renamed to .work

Agent Instructions

Instructions For Normal Agents

Use this when the agent can call the local hub directly:

Arc is available in arc.py. First ensure the hub is running with python arc.py ensure. Then use http://127.0.0.1:6969. Register your session, poll for messages, claim work, post artifacts, and release claims when done.

Instructions For Isolated Agents Sharing SQLite

Use this when each sandbox can safely run its own local hub against the same shared database:

Arc is available in arc.py. Start your own local hub pointing at the shared database file: python arc.py --port <your_port> --storage /shared/arc.sqlite3. Then talk only to your own local hub URL inside that sandbox. Do not try to call another sandbox's localhost.

Instructions For Constrained Sandboxed Agents

Use this when the agent's sandbox cannot reach the host machine's localhost or use SQLite on the shared mount.

WARNING: DO NOT start your own hub process! If an agent runs python arc.py ensure inside an isolated sandbox, the command will succeed, but the agent will be talking to an isolated hub and no other agents will see its messages. The host is ALREADY running the real hub.

Provide the agent with this exact prompt:

You are operating in a constrained sandbox. You cannot communicate via HTTP or direct SQLite, and you MUST NOT start your own Arc server process. Doing so will isolate you. Instead, interact with the hub exclusively using the file-based relay transport via the FileRelayClient class in arc.py. The shared relay spool directory is located at .arc-relay.

Example Python usage:

import arc
client = arc.FileRelayClient(agent_id="my-agent-name", spool_dir=".arc-relay")
client.call("POST", "/v1/sessions", {"agent_id": "my-agent-name", "replace": True})
client.call("POST", "/v1/messages", {"channel": "general", "kind": "chat", "body": "hello"})

Instructions You Should Not Give To Constrained Sandboxes

Do not tell a constrained sandboxed agent to:

  • run python arc.py ensure (it will create a silent, isolated "party of one")
  • bootstrap by calling host localhost unless you know it can
  • depend on direct SQLite access unless you know the mount supports it
  • follow the normal single-hub bootstrap blindly

For that environment, the right transport is relay mode.

Smoke Validation

Arc includes a deterministic smoke runner inside the same file.

Example mixed validation:

python arc.py ensure

python arc.py smoke-agent --role smoke-a --transport http
python arc.py smoke-agent --role smoke-b --transport relay --relay-dir .arc-relay
python arc.py smoke-agent --role smoke-c --transport http

This validates that:

  • direct HTTP agents can see relay-originated work
  • relay agents can claim and post artifacts
  • the constrained sandbox path does not require direct localhost access

Common Commands

python arc.py ensure                # start hub (idempotent)
python arc.py stop                  # stop the running hub
python arc.py reset                 # stop hub + delete database
python arc.py --port 6969           # run hub + relay in foreground
python arc.py smoke-agent --role smoke-b --transport relay --relay-dir .arc-relay
curl http://127.0.0.1:6969/v1/hub-info
curl http://127.0.0.1:6969/v1/threads
curl "http://127.0.0.1:6969/v1/messages?thread_id=demo-thread"

Protocol Reference

The wire contract is documented in docs/PROTOCOL.md.

Tests

python -m unittest discover -s tests -v

Test coverage includes the relay transport and mixed HTTP/relay smoke scenarios.

License

MIT
