
AgentWeave


A collaboration framework for N AI agents: Claude, Kimi, Gemini, Codex, Minimax, GLM, and more

📖 Documentation

AgentWeave lets multiple AI agents work together on the same project through a shared protocol. The AgentWeave Hub, a self-hosted server with a web dashboard, is the recommended way to run it.


Quick Start: AgentWeave Hub (Recommended)

The Hub provides a web dashboard, REST + SSE + MCP interfaces, and real-time visibility into agent activity.

Step 1: Start the Hub (Docker)

# Download the two config files
curl -O https://raw.githubusercontent.com/gutohuida/AgentWeave/master/hub/docker-compose.yml
curl -O https://raw.githubusercontent.com/gutohuida/AgentWeave/master/hub/.env.example

# Create your .env
cp .env.example .env

Open .env and set your API key:

# Generate a secure key
python -c "import secrets; print('aw_live_' + secrets.token_hex(16))"

Paste the output as AW_BOOTSTRAP_API_KEY in .env, then start the Hub:

docker compose up -d

The Hub is now running at http://localhost:8000; open it in your browser to see the dashboard.


Step 2: Install the CLI

pip install "agentweave-ai[mcp]"

Step 3: Initialize your project

cd /path/to/your-project
agentweave init --project "My App" --agents claude,kimi

This creates AI_CONTEXT.md (fill it in once: stack, architecture, standards) and .agentweave/ with agent roles and shared context.


Step 4: Connect the CLI to the Hub

agentweave transport setup --type http \
  --url http://localhost:8000 \
  --api-key aw_live_<your-key> \
  --project-id proj-default

Step 5: Register the MCP server and start the watchdog

# Register MCP with all session agents (one command)
agentweave mcp setup

# Start the background watchdog (one terminal, all agents)
agentweave start
# Stop later with: agentweave stop

Restart your Claude / Kimi sessions so they pick up the new MCP server. That's it: agents communicate through the Hub and you monitor everything in the dashboard.


What the Dashboard Shows

Open http://localhost:8000 to see:

  • Tasks board: all tasks with status, priority, assignee, requirements, acceptance criteria, and deliverables (click any card to expand)
  • Messages feed: inter-agent messages with expand-to-read for long content; message type and linked task shown inline
  • Human questions: questions agents have asked you; answer directly in the dashboard
  • Agent activity: live event stream and per-agent output log
  • Agent cards: connected agents auto-discovered from your session; shows role, yolo mode, and per-agent chat history

Configuration: .env reference

| Variable | Default | Description |
|---|---|---|
| AW_BOOTSTRAP_API_KEY | (required) | API key auto-created on first start (aw_live_…) |
| AW_BOOTSTRAP_PROJECT_ID | proj-default | Default project ID |
| AW_BOOTSTRAP_PROJECT_NAME | Default Project | Display name for the default project |
| AW_PORT | 8000 | Port the Hub listens on |
| AW_CORS_ORIGINS | (empty) | Comma-separated allowed origins for CORS (leave empty in production) |
| DATABASE_URL | sqlite+aiosqlite:///data/agentweave.db | SQLite path inside the container |
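Putting the variables together, a minimal local .env might look like this (the key value below is illustrative; generate your own as shown in Step 1):

```shell
# .env (illustrative values; only AW_BOOTSTRAP_API_KEY is required)
AW_BOOTSTRAP_API_KEY=aw_live_0123456789abcdef0123456789abcdef
AW_BOOTSTRAP_PROJECT_ID=proj-default
AW_PORT=8000
# Leave empty in production; the UI is served from the same origin as the API
AW_CORS_ORIGINS=
```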

Data persists in a Docker volume (hub-data), so no manual backup is needed for local use.


Alternative Modes

| Mode | Setup | Best for |
|---|---|---|
| Hub | Docker + agentweave transport setup --type http | Teams, multi-machine, web dashboard (recommended) |
| Zero-relay MCP | agentweave mcp setup + watchdog | Autonomous loops, same machine, no server |
| Manual relay | None (just install) | Quick one-off delegation |

Zero-relay MCP (no Hub)

pip install "agentweave-ai[mcp]"
cd your-project/
agentweave init --project "My App" --agents claude,kimi
agentweave mcp setup   # configure MCP in agent settings
agentweave start       # start background watchdog

Manual relay (simplest possible)

pip install agentweave-ai
cd your-project/
agentweave init --project "My App" --agents claude,kimi
# Ask Claude to delegate; it runs agentweave quick + relay and gives you a prompt to paste into Kimi

Claude-Proxy Agents (Minimax, GLM, and any OpenAI-compatible provider)

Some models, like MiniMax and Zhipu GLM, don't have a native CLI. AgentWeave runs them through the Claude Code CLI by overriding two environment variables:

ANTHROPIC_BASE_URL  →  the provider's OpenAI-compatible endpoint
ANTHROPIC_API_KEY   →  the provider's API key (resolved from your shell at runtime)

This is called a claude_proxy runner. AgentWeave tracks a separate Claude resume session ID per proxy agent so each one maintains its own conversation history.
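Conceptually, the override amounts to something like the following sketch (a hypothetical helper, not AgentWeave's actual code):

```python
import os


def proxy_env(base_url: str, api_key_var: str) -> dict:
    """Build the environment for one claude_proxy agent.

    Only the env var *name* is configured; the key value is read
    from the current shell at launch time (KeyError if unset).
    """
    env = dict(os.environ)
    env["ANTHROPIC_BASE_URL"] = base_url
    env["ANTHROPIC_API_KEY"] = os.environ[api_key_var]
    return env
```

The Claude CLI would then be launched with this environment, so requests go to the provider's endpoint instead of Anthropic's.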

Setup

# 1. Initialize with a proxy agent
agentweave init --project "My App" --agents claude,minimax

# 2. Configure the runner (uses built-in defaults for minimax and glm)
agentweave agent configure minimax

# Or supply custom values for any OpenAI-compatible provider:
agentweave agent configure mymodel \
  --runner claude_proxy \
  --base-url https://api.example.com/v1 \
  --api-key-var MY_MODEL_API_KEY

Running a proxy agent

# Option A: switch env vars in your current shell, then run Claude manually
export MINIMAX_API_KEY=<your-key>
eval $(agentweave switch minimax)          # exports ANTHROPIC_BASE_URL + ANTHROPIC_API_KEY
claude --resume <session-id> -p "..."

# Option B: let AgentWeave handle it (sets env + launches Claude with relay prompt)
export MINIMAX_API_KEY=<your-key>
agentweave run --agent minimax

# Option C: from relay (shows switching instructions, or auto-runs with --run)
agentweave relay --agent minimax           # shows copy-paste prompt + switching instructions
agentweave relay --agent minimax --run     # combined: sets env + launches Claude

Session continuity

AgentWeave tracks the Claude session ID per proxy agent so --resume is used automatically on subsequent runs. To register a session ID manually (e.g. from claude --list):

agentweave agent set-session minimax <session-id>

Security note: API keys are never stored in session.json. Only the env var name is stored (e.g. MINIMAX_API_KEY). The actual value is resolved from your shell at runtime.
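To make the storage rule concrete, a proxy agent's persisted config might have roughly this shape (hypothetical field names for illustration):

```python
# Hypothetical shape of a stored proxy-agent config: only the env var
# *name* is recorded; the key value itself never touches disk.
stored = {
    "runner": "claude_proxy",
    "base_url": "https://api.minimax.chat/v1",
    "api_key_var": "MINIMAX_API_KEY",  # name only, resolved at runtime
    "session_id": None,                # filled in after the first run
}

# The raw key is absent by construction
assert "api_key" not in stored
```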

Built-in provider registry

| Agent | Base URL | Env var |
|---|---|---|
| minimax | https://api.minimax.chat/v1 | MINIMAX_API_KEY |
| glm | https://open.bigmodel.cn/api/paas/v4 | ZHIPU_API_KEY |

Cross-Machine Collaboration

Via Git (no server required)

agentweave transport setup --type git --cluster yourname

Creates an orphan branch (agentweave/collab) on your git remote. Messages sync through git plumbing; the working tree and HEAD are never touched. Both developers need access to the same remote.

Via Hub (recommended for teams)

Deploy the Hub once, connect all agents via HTTP transport. The dashboard shows all messages, tasks, and human questions in real time.


Commands Reference

Session

agentweave init --project "Name" --agents claude,kimi
agentweave status
agentweave summary

Delegation

agentweave quick --to kimi "Task description"
agentweave relay --agent kimi
agentweave relay --agent minimax --run     # auto-run for claude_proxy agents
agentweave inbox --agent claude

Agent runner (claude_proxy setup)

agentweave agent configure minimax                      # use built-in defaults
agentweave agent configure glm                          # use built-in defaults
agentweave agent configure mymodel \                    # custom OpenAI-compatible provider
  --runner claude_proxy \
  --base-url https://api.example.com/v1 \
  --api-key-var MY_MODEL_API_KEY
agentweave agent set-session minimax <session-id>       # register Claude resume ID manually

agentweave switch minimax        # output eval-able export commands
agentweave run --agent minimax   # set env vars + launch Claude with relay prompt

Tasks

agentweave task list
agentweave task show <task_id>
agentweave task update <task_id> --status in_progress
agentweave task update <task_id> --status completed
agentweave task update <task_id> --status approved
agentweave task update <task_id> --status revision_needed --note "Fix X"

Transport

agentweave transport setup --type http --url ... --api-key ... --project-id ...
agentweave transport setup --type git --cluster yourname
agentweave transport status
agentweave transport pull
agentweave transport disable

Yolo mode

agentweave yolo --agent claude --enable    # allow agent to act without confirmations
agentweave yolo --agent claude --disable   # re-enable confirmation prompts

Human interaction (Hub only)

agentweave reply --id <question_id> "Your answer"

MCP Tools Reference

Available to agents in both local MCP mode and via Hub MCP:

| Tool | What it does |
|---|---|
| send_message(from, to, subject, content) | Send a message to another agent |
| get_inbox(agent) | Read unread messages |
| mark_read(message_id) | Archive a message after processing |
| list_tasks(agent?) | List active tasks |
| get_task(task_id) | Get full task details |
| update_task(task_id, status) | Update task status |
| create_task(title, ...) | Create and assign a new task |
| get_status() | Session-wide summary + task counts |
| ask_user(from_agent, question) | Post a question to the human (Hub only) |
| get_answer(question_id) | Check if the human answered (Hub only) |
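As an illustration of how an agent might chain these tools in a session, here is a sketch in which `call` stands in for whatever MCP client invocation your runtime provides (the helper itself is hypothetical):

```python
def triage_inbox(call, agent: str) -> list:
    """Read unread messages for `agent`, archive each one after
    processing, and return the ids that were handled."""
    handled = []
    for msg in call("get_inbox", agent=agent):
        # ... act on msg["content"] here (reply, update a task, etc.) ...
        call("mark_read", message_id=msg["id"])
        handled.append(msg["id"])
    return handled
```

In practice you don't write this yourself: the agent invokes the tools directly once the MCP server is registered.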

Task Status Lifecycle

pending → assigned → in_progress → completed → under_review → approved
                                             ↘ revision_needed (loops back)
                                             ↘ rejected
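The lifecycle above can be read as a transition table; the following sketch mirrors the statuses in this README (the real implementation may allow additional transitions):

```python
# Legal next-status sets, as implied by the lifecycle diagram.
TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"in_progress"},
    "in_progress": {"completed"},
    "completed": {"under_review"},
    "under_review": {"approved", "revision_needed", "rejected"},
    "revision_needed": {"in_progress"},  # loops back for rework
}


def can_transition(current: str, new: str) -> bool:
    """Return True if `new` is a legal next status after `current`.

    Terminal statuses (approved, rejected) have no entry, so every
    transition out of them is rejected.
    """
    return new in TRANSITIONS.get(current, set())
```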

Build from Source

git clone https://github.com/gutohuida/AgentWeave.git
cd AgentWeave/hub

cp .env.example .env
# Edit .env: set AW_BOOTSTRAP_API_KEY

docker compose up --build -d

Hub UI development (hot-reload)

cd hub/ui
npm install
npm run dev      # dashboard at http://localhost:5173, proxies /api → Hub at localhost:8000

Repository Layout

AgentWeave/
├── src/agentweave/     CLI package (Python 3.8+, zero runtime deps), v0.18.0
├── hub/                AgentWeave Hub server (Python 3.11+, FastAPI + Docker), v0.12.0
│   ├── hub/            Hub Python package
│   ├── ui/             React dashboard (built into Docker image, no separate server)
│   └── Dockerfile      Multi-stage build: Node UI → Python server
├── docs/               Additional documentation
├── tests/              CLI unit tests (pytest)
└── Makefile            Convenience targets for both packages

Development

# CLI
pip install -e ".[dev]"
ruff check src/
black src/
mypy src/
pytest tests/ -v

# Hub
cd hub
pip install -e ".[dev]"
make ui-build    # rebuild React UI
pytest tests/ -v

# Both
make install-all
make test-all
make lint

Roadmap

| Phase | Status | Description |
|---|---|---|
| Local transport | ✅ Done | Single-machine via .agentweave/ filesystem |
| Git transport | ✅ Done (v0.2.0) | Cross-machine via orphan branch, zero infra |
| N-agent support | ✅ Done (v0.3.0) | Multi-agent teams with roles.json and cluster naming |
| Local MCP server | ✅ Done (v0.4.0) | Native tool integration, zero-relay with watchdog pinger |
| HTTP transport | ✅ Done (v0.5.0) | CLI ↔ Hub via REST |
| AgentWeave Hub | ✅ Done (v0.2.0) | Self-hosted server, REST + SSE + MCP + web dashboard |
| Hub UI | ✅ Done (v0.2.1) | React dashboard: expandable tasks/messages, agent trigger, configurator |
| Per-agent context templates | ✅ Done (v0.6.0) | claude_context.md, kimi_context.md, collab_protocol.md |
| Session sync to Hub | ✅ Done (v0.10.0) | Watchdog pushes session.json to Hub on startup; agents auto-appear in dashboard |
| Yolo mode | ✅ Done (v0.10.0) | Per-agent flag to suppress confirmation prompts for autonomous loops |
| Claude-proxy agents | ✅ Done (v0.12.0) | Run Minimax, GLM, and any OpenAI-compatible provider via Claude CLI proxy |
| Multi-role support | ✅ Done (v0.15.0) | Multiple roles per agent with agentweave roles CLI and Hub sync |
| Official hosted Hub | 🔲 Planned | Public hub.agentweave.dev (Supabase + Vercel + Railway) |

FAQ

Q: Do I need the Hub? No. Manual relay and local MCP modes work with zero infra. The Hub adds a web dashboard, multi-machine support, and human question-answering.

Q: Should I put the UI in a separate folder/repo? No. The UI (hub/ui/) is built into the Docker image and served by the Hub at the same port. No second server or CORS config needed in production.

Q: Do I need to run CLI commands during my session? No. After agentweave init, just talk to Claude. It runs all agentweave commands via Bash automatically.

Q: Do the watchdog processes need to stay running? Yes (in local MCP mode or Hub mode). Run agentweave start once. If they stop, messages still queue; agents just won't be auto-triggered.

Q: Should I commit .agentweave/? Partially. Runtime state (tasks, messages, session.json, transport.json) is gitignored. AGENTS.md and README.md are committed.

Q: Do both developers need the same git remote for git transport? Yes. Git transport requires a shared remote (e.g. origin).

Q: How do I use Minimax or GLM โ€” they don't have a CLI? Use agentweave agent configure minimax (or glm) to set them up as claude_proxy agents. AgentWeave runs them through the Claude CLI with overridden env vars (ANTHROPIC_BASE_URL + ANTHROPIC_API_KEY). Export your provider key, then run agentweave run --agent minimax. See the Claude-Proxy Agents section above.

Q: Can I use any OpenAI-compatible provider as an agent? Yes. Use agentweave agent configure <name> --runner claude_proxy --base-url <url> --api-key-var <VAR>. The Claude CLI will proxy requests to that endpoint using your existing API key env var.


MIT License
