ArcCrew
Multi-agent LangGraph pipelines — scaffold, build, and ship in minutes.
ArcCrew is a Python framework for building production-ready multi-agent pipelines on LangGraph. It ships with a CLI, a 3-layer prompt system, a FastAPI server, an MCP server, and AI coding skills that let you generate entire pipelines from a description.
Install
```bash
pip install arccrew
```
Quick start
```bash
# 1. Scaffold a new project
arccrew init my-project
cd my-project

# 2. Set your API key
cp .env.example .env
# edit .env → set ANTHROPIC_API_KEY

# 3. Verify everything is configured
arccrew check

# 4. Open in Claude Code and run:
#    /build-agents ← describe your pipeline, get all files generated
```
Or build manually:
```python
# agents/researcher.py
from arccrew import BaseAgent, track_timing
from arccrew.tools import get_research_tools
from langgraph.types import Command
from pathlib import Path

class ResearcherAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="researcher", prompts_dir=Path("prompts"))

    @property
    def system_prompt(self) -> str:
        return self.get_prompt_manager().assemble_prompt("researcher")

    @track_timing
    async def execute(self, state: dict) -> Command:
        task = state["tasks"][state["current_task_index"]]["description"]
        result = await self.run_react(task=task, tools=get_research_tools())
        return Command(goto="writer", update={"context": self.extract_json(result)})
```
```python
# pipeline.py
from arccrew import create_pipeline, PipelineState
from arccrew.api.deps import pipeline_registry
from arccrew.mcp_server import register_pipeline
from agents.researcher import ResearcherAgent

def create_my_pipeline():
    researcher = ResearcherAgent()
    return create_pipeline(
        state_class=PipelineState,
        nodes={"researcher": lambda s: researcher.execute(s)},
        flow=["researcher"],
    )

pipeline_registry.register("my_pipeline", create_my_pipeline)
register_pipeline("my_pipeline", create_my_pipeline)
```
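Conceptually, a linear `flow` just runs each node in order, threading state through the pipeline. A simplified plain-Python sketch of that mental model (not ArcCrew's implementation — ArcCrew compiles the flow to a LangGraph graph, and real agents return `Command` objects rather than plain dicts):

```python
# Simplified sketch of a linear pipeline: run each node in order,
# merging each node's returned update into the shared state.

def run_linear_flow(nodes: dict, flow: list, state: dict) -> dict:
    for name in flow:
        update = nodes[name](state)  # each node returns a partial state update
        state = {**state, **update}
    return state

# Hypothetical two-node flow
nodes = {
    "researcher": lambda s: {"context": f"notes on {s['topic']}"},
    "writer": lambda s: {"draft": f"Post using {s['context']}"},
}
final = run_linear_flow(nodes, ["researcher", "writer"], {"topic": "AI"})
print(final["draft"])
```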
```bash
arccrew serve       # REST API on :8000
arccrew serve-mcp   # MCP server for Claude Desktop and other MCP clients
```
Once the server is running, open http://localhost:8000/playground to explore your pipelines interactively, or call them via API:
```bash
# Simple text input
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": "write a blog post about AI"}'

# Structured input (dict — auto-serialized, keys available as metadata)
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": {"topic": "AI", "tone": "casual", "words": 400}}'

# Poll for results
curl http://localhost:8000/api/runs/{run_id}

# Stream events as the pipeline runs (SSE)
curl -N http://localhost:8000/api/runs/{run_id}/stream
```
Features
- 3-layer prompt system — library base + your project globals + per-agent role
- AI coding skills — pre-installed in every scaffolded project, work natively in Claude Code
- Built-in tools — web search, file management, shell execution, MCP server adapters, ready to plug into any agent
- FastAPI server — non-blocking REST API (`POST /api/runs` → 202 + `run_id`, poll with `GET /api/runs/{id}`), SSE streaming, WebSocket, interactive playground at `/playground`
- MCP dual role — expose your pipelines as MCP tools AND connect external MCP servers (GitHub, Notion, Slack…) as agent tools
- Multi-provider — Claude by default, any LangChain-supported provider via env var
- Supervisor pattern — LLM-driven routing as an alternative to manual graph wiring
- Retry / verification loops — built-in worker → verifier → retry pattern
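The retry/verification pattern in the last bullet can be sketched in plain Python. This is a mental model only, not ArcCrew's actual worker → verifier → retry implementation; the stub `worker` and `verifier` functions are hypothetical:

```python
# Sketch of a worker → verifier → retry loop (illustrative only).
# The worker produces a draft; the verifier accepts or rejects it with
# feedback; on rejection we retry, up to a bounded number of rounds.

def run_with_verification(worker, verifier, task: str, max_retries: int = 3):
    feedback = None
    for _attempt in range(max_retries + 1):
        draft = worker(task, feedback)
        ok, feedback = verifier(draft)
        if ok:
            return draft
    raise RuntimeError(f"No accepted draft after {max_retries + 1} attempts")

# Hypothetical stubs: the verifier accepts only drafts that mention sources
worker = lambda task, fb: f"{task} (with sources)" if fb else task
verifier = lambda draft: (True, None) if "sources" in draft else (False, "cite sources")

result = run_with_verification(worker, verifier, "summarize AI news")
```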
Prompt layers
Every agent's system prompt is assembled in this order:
| Layer | File | Who controls it |
|---|---|---|
| Base | bundled in arccrew | library — universal agent rules |
| Global | `prompts/global.md` | you — project-wide rules (tone, domain, language) |
| Agent | `prompts/{agent}.md` | you — role, tools, output schema |
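Conceptually, assembly concatenates the three layers in order. A minimal sketch of the idea — ArcCrew's prompt manager actually reads the Global and Agent layers from the `prompts/` directory, and the layer contents below are made up:

```python
# Sketch of 3-layer prompt assembly: base + global + per-agent role.
# Layer contents are hypothetical; ArcCrew bundles the base layer itself.

BASE = "You are a helpful, tool-using agent."                # bundled in arccrew
LAYERS = {
    "global": "Write in English. Domain: tech blogging.",    # prompts/global.md
    "researcher": "Role: researcher. Output JSON context.",  # prompts/researcher.md
}

def assemble_prompt(agent: str) -> str:
    parts = [BASE, LAYERS["global"], LAYERS[agent]]
    return "\n\n".join(p.strip() for p in parts if p)

print(assemble_prompt("researcher"))
```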
Built-in tools
Every agent has access to these tools out of the box:
```python
from arccrew.tools import get_research_tools, create_workspace_tools, get_mcp_tools
from pathlib import Path
import os

# Research tools — web_search (DuckDuckGo)
tools = get_research_tools()

# Workspace tools — write_file, read_file, list_files, run_shell
tools = create_workspace_tools(Path("workspace"))

# MCP tools — connect any MCP server (GitHub, Notion, Slack, custom…)
# Requires: pip install "arccrew[mcp]"
tools = await get_mcp_tools({
    "github": {
        "transport": "sse",
        "url": "https://api.githubcopilot.com/mcp/",
        "headers": {"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN')}"},
    }
})
```
Combine freely in any agent:
```python
result = await self.run_react(
    task=task,
    tools=(
        get_research_tools()
        + create_workspace_tools(Path("workspace"))
        + await get_mcp_tools({"github": GITHUB_MCP})
    ),
)
```
Adding your own tools
Create a file in tools/ named after your domain:
```python
# tools/calendar_tools.py
from langchain_core.tools import tool

@tool
async def get_availability(date: str) -> str:
    """Check calendar availability for a given date (YYYY-MM-DD).

    Use for: tasks that require checking free/busy slots.
    Do NOT use for: general research (use web_search instead).

    Args:
        date: Date to check in YYYY-MM-DD format.

    Returns:
        Available time slots as a formatted string.
    """
    try:
        # your implementation here
        return f"Available slots for {date}: 9am, 2pm, 4pm"
    except Exception as e:
        return f"ERROR: {e}"

def get_calendar_tools() -> list:
    return [get_availability]
```
Combine with arccrew built-ins in any agent:
```python
from arccrew.tools import get_research_tools
from tools.calendar_tools import get_calendar_tools

result = await self.run_react(
    task=task,
    tools=get_research_tools() + get_calendar_tools(),
)
```
Use /add-tool in Claude Code to generate a new tool from a description.
Environment variables
All configuration lives in .env. Copy .env.example after scaffolding:
```
# LLM provider (pick one)
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here

# Model (default for all agents)
AGENT_MODEL=anthropic/claude-haiku-4-5-20251001

# Per-agent overrides
RESEARCHER_MODEL=anthropic/claude-sonnet-4-6
RESEARCHER_MAX_ROUNDS=10
WRITER_MAX_ROUNDS=5

# Workspace (where agents write files)
WORKSPACE_DIR=workspace

# API server
API_HOST=0.0.0.0
API_PORT=8000
API_AUTH_ENABLED=false  # set true + API_SECRET_KEY for production

# Observability (optional)
LANGSMITH_API_KEY=your_key_here
LANGSMITH_PROJECT=my-project
LANGSMITH_TRACING=true
```
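Per-agent overrides like `RESEARCHER_MODEL` take precedence over `AGENT_MODEL` when set. A sketch of that resolution order — an assumption about the convention, not ArcCrew's actual code:

```python
import os

# Sketch of per-agent model resolution: AGENT_MODEL is the default,
# {AGENT}_MODEL overrides it for a single agent. Illustrative only.

def resolve_model(agent_name: str, default: str = "anthropic/claude-haiku-4-5-20251001") -> str:
    return (
        os.getenv(f"{agent_name.upper()}_MODEL")
        or os.getenv("AGENT_MODEL")
        or default
    )

os.environ["AGENT_MODEL"] = "anthropic/claude-haiku-4-5-20251001"
os.environ["RESEARCHER_MODEL"] = "anthropic/claude-sonnet-4-6"

print(resolve_model("researcher"))  # the per-agent override wins
print(resolve_model("writer"))      # falls back to AGENT_MODEL
```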
Skills
Every project created with arccrew init gets skills pre-installed in .claude/commands/ and detailed references in skills/. They work natively in Claude Code as slash commands.
After upgrading arccrew, run arccrew sync-skills to get new and updated skills without touching your project files.
Core
| Skill | What it does |
|---|---|
| `/build-agents` | Generate a full pipeline from a description |
| `/add-agent` | Add a single agent to an existing pipeline |
| `/add-tool` | Add a tool to an agent |
| `/add-state-field` | Add a custom state field with the right reducer |
| `/add-prompt` | Add or update an agent prompt |
Patterns
| Skill | What it does |
|---|---|
| `/add-retry-loop` | Add retry + verification loop |
| `/add-review-gate` | Add a human-in-the-loop review gate |
| `/add-supervisor` | Add LLM-driven supervisor orchestration |
Infrastructure
| Skill | What it does |
|---|---|
| `/add-api-endpoint` | Add a REST endpoint to the FastAPI server |
| `/add-mcp-pipeline` | Register a pipeline as an MCP tool |
Configuration
| Skill | What it does |
|---|---|
| `/configure-claude` | Configure Claude as LLM provider |
| `/configure-openai` | Configure OpenAI as LLM provider |
| `/configure-gemini` | Configure Gemini as LLM provider |
| `/switch-provider` | Switch between LLM providers |
| `/configure-mcp` | Connect external MCP servers as agent tools |
Quality
| Skill | What it does |
|---|---|
| `/enable-langsmith` | Set up LangSmith tracing |
| `/enable-otel` | Set up OpenTelemetry (Grafana, Datadog, Jaeger…) |
| `/debug-pipeline` | Diagnose pipeline errors |
| `/write-tests` | Generate tests for agents and tools |
CLI
```bash
arccrew init <name>                    # scaffold a new project
arccrew check                          # verify config and dependencies
arccrew sync-skills                    # update skills after upgrading
arccrew run "describe your task"       # run a pipeline from the terminal
arccrew run "task" --pipeline my_name  # run a specific pipeline
arccrew visualize                      # print Mermaid diagram of your graph
arccrew visualize -o graph.png         # save diagram as PNG (requires pyppeteer)
arccrew visualize -o graph.mmd         # save raw Mermaid code
arccrew serve                          # start FastAPI server
arccrew serve-mcp                      # start MCP server (stdio)
```
Local development
To work on arccrew itself and test changes immediately:
```bash
git clone https://github.com/amonrreal/arccrew
cd arccrew

# Install in editable mode — changes take effect without reinstalling
pip install -e ".[dev]"
cp .env.example .env
# Set ANTHROPIC_API_KEY (or another provider) in .env

# Verify the CLI uses your local version
arccrew --help

# Run the built-in example
python -m examples.researcher_writer.pipeline
python -m examples.researcher_writer.pipeline --supervisor

# Scaffold a test project and try the new commands
cd /tmp
arccrew init test-project
cd test-project
cp .env.example .env
# edit .env → set API key
arccrew check  # requires a pipeline.py in the current directory

# Run tests
cd /path/to/arccrew
pytest
pytest tests/test_base_agent.py -v
pytest -k "test_name"
```
MCP server
Expose your pipelines as tools in any MCP-compatible client (Claude Desktop, Claude Code):
```bash
arccrew serve-mcp  # local stdio transport
arccrew serve      # also serves /mcp for remote HTTP transport
```
Remote connection (after deploying your project):
```json
{
  "mcpServers": {
    "my-project": {
      "url": "https://your-deployment-url/mcp"
    }
  }
}
```
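For local use, MCP clients that launch stdio servers can start the CLI directly. A hedged example of what that entry might look like in an MCP client config (the exact config format belongs to the client, e.g. Claude Desktop, not to ArcCrew):

```json
{
  "mcpServers": {
    "my-project": {
      "command": "arccrew",
      "args": ["serve-mcp"]
    }
  }
}
```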
Utility helpers
```python
from arccrew.utils.helpers import truncate_text, extract_json_safe, slugify

# Truncate long LLM output before storing
short = truncate_text(long_response, max_chars=4000)

# Safely extract JSON from any LLM response
data = extract_json_safe(response, fallback={})

# Generate URL-safe slugs
slug = slugify("My Agent Result — 2026")  # "my-agent-result-2026"
```
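To illustrate what "safely extract JSON" means, here is a minimal reimplementation of the idea — not ArcCrew's actual `extract_json_safe`, just the general technique of scanning for the first decodable JSON value and falling back instead of raising:

```python
import json

# Minimal sketch: find the first decodable JSON object/array in free-form
# text, returning a fallback instead of raising when none is found.

def extract_json_sketch(text: str, fallback=None):
    decoder = json.JSONDecoder()
    for i, ch in enumerate(text):
        if ch in "{[":
            try:
                obj, _end = decoder.raw_decode(text, i)
                return obj
            except json.JSONDecodeError:
                continue
    return fallback

print(extract_json_sketch('Sure! Here you go: {"topic": "AI", "words": 400}'))
print(extract_json_sketch("no json here", fallback={}))
```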
Example
See examples/researcher_writer/ for a complete working pipeline with two agents (Researcher + Writer) in both manual graph and supervisor patterns.
```bash
# Manual graph (BaseAgent subclasses)
python -m examples.researcher_writer.pipeline

# Supervisor pattern
python -m examples.researcher_writer.pipeline --supervisor
```
License
ArcCrew is licensed under the MIT License.
Project details
Release: arccrew 0.6.1, published to PyPI via Trusted Publishing (uploaded with twine/6.1.0 on CPython/3.13.12).

Source distribution: arccrew-0.6.1.tar.gz (138.2 kB)

| Algorithm | Hash digest |
|---|---|
| SHA256 | edce656b2117b8bbb00a3be22453fb207002211898729abb7ddedc1696d6ddb9 |
| MD5 | 321d2870ecfdfd7b6f37b56cd5597a18 |
| BLAKE2b-256 | 920930b3e176efd7aea8fe90e5d3dd11fc0a82c29a03e99f401bb65348a480cb |

Built distribution: arccrew-0.6.1-py3-none-any.whl (147.7 kB, Python 3)

| Algorithm | Hash digest |
|---|---|
| SHA256 | db1c67c3e92739c778008e03b6bbc3d18154835f5f2626f33f17df0d2df7c8b1 |
| MD5 | 2be8ea0756e2a38049e75501b77dd9a8 |
| BLAKE2b-256 | 40db610812853fb4d6950612d42d6044b508c830e5fe92e04084e69b9f1ca856 |

Provenance
PyPI publish attestations (statement type https://in-toto.io/Statement/v1, predicate type https://docs.pypi.org/attestations/publish/v1) were made for both files by the GitHub Actions workflow publish.yml@90fd4aa434ad42e4a1256179099ab6e893138147 on amonrreal/arccrew (owner: https://github.com/amonrreal, branch/tag: refs/tags/v0.6.1, access: private, github-hosted runner, token issuer: https://token.actions.githubusercontent.com, trigger event: push). Sigstore transparency entries: 1261497631 (sdist), 1261497700 (wheel).