Framework for building multi-agent LangGraph pipelines with Claude Code skills


ArcCrew


Multi-agent LangGraph pipelines — scaffold, build, and ship in minutes.

ArcCrew is a Python framework for building production-ready multi-agent pipelines on LangGraph. It ships with a CLI, a 3-layer prompt system, a FastAPI server, an MCP server, and AI coding skills that let you generate entire pipelines from a description.


Install

pip install arccrew

Quick start

# 1. Scaffold a new project
arccrew init my-project
cd my-project

# 2. Set your API key
cp .env.example .env
# edit .env → set ANTHROPIC_API_KEY

# 3. Verify everything is configured
arccrew check

# 4. Open in Claude Code and run:
# /build-agents  ← describe your pipeline, get all files generated

Or build manually:

# agents/researcher.py
from arccrew import BaseAgent, track_timing
from arccrew.tools import get_research_tools
from langgraph.types import Command
from pathlib import Path

class ResearcherAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="researcher", prompts_dir=Path("prompts"))

    @property
    def system_prompt(self) -> str:
        return self.get_prompt_manager().assemble_prompt("researcher")

    @track_timing
    async def execute(self, state: dict) -> Command:
        task = state["tasks"][state["current_task_index"]]["description"]
        result = await self.run_react(task=task, tools=get_research_tools())
        return Command(goto="writer", update={"context": self.extract_json(result)})

# pipeline.py
from arccrew import create_pipeline, PipelineState
from arccrew.api.deps import pipeline_registry
from arccrew.mcp_server import register_pipeline
from agents.researcher import ResearcherAgent

def create_my_pipeline():
    researcher = ResearcherAgent()
    return create_pipeline(
        state_class=PipelineState,
        nodes={"researcher": lambda s: researcher.execute(s)},
        flow=["researcher"],
    )

pipeline_registry.register("my_pipeline", create_my_pipeline)
register_pipeline("my_pipeline", create_my_pipeline)

Then start the servers:

arccrew serve       # REST API on :8000
arccrew serve-mcp   # MCP server for Claude Desktop and other MCP clients

Once the server is running, open http://localhost:8000/playground to explore your pipelines interactively, or call them via API:

# Simple text input
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": "write a blog post about AI"}'

# Structured input (dict — auto-serialized, keys available as metadata)
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": {"topic": "AI", "tone": "casual", "words": 400}}'

# Poll for results
curl http://localhost:8000/api/runs/{run_id}

# Stream events as the pipeline runs (SSE)
curl -N http://localhost:8000/api/runs/{run_id}/stream
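If you prefer calling the API from Python, a minimal client sketch might look like the following. The endpoint paths and the 202 + run_id flow come from the examples above; the response field names (`run_id`, `status`) and the terminal status values are assumptions about the payload shape, not a documented contract:

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:8000"

def build_run_payload(pipeline: str, input_data) -> bytes:
    """Serialize the request body; input may be a plain string or a dict."""
    return json.dumps({"pipeline": pipeline, "input": input_data}).encode()

def submit_and_poll(pipeline: str, input_data, interval: float = 2.0) -> dict:
    """POST a run, then poll GET /api/runs/{run_id} until it finishes."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/runs",
        data=build_run_payload(pipeline, input_data),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        run_id = json.load(resp)["run_id"]  # assumed field name
    while True:
        with urllib.request.urlopen(f"{BASE_URL}/api/runs/{run_id}") as resp:
            run = json.load(resp)
        if run.get("status") in ("completed", "failed"):  # assumed statuses
            return run
        time.sleep(interval)
```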

Features

  • 3-layer prompt system — library base + your project globals + per-agent role
  • AI coding skills — pre-installed in every scaffolded project, work natively in Claude Code
  • Built-in tools — web search, file management, shell execution, MCP server adapters, ready to plug into any agent
  • FastAPI server — non-blocking REST API (POST /api/runs → 202 + run_id, poll with GET /api/runs/{id}), SSE streaming, WebSocket, interactive playground at /playground
  • MCP dual role — expose your pipelines as MCP tools AND connect external MCP servers (GitHub, Notion, Slack…) as agent tools
  • Multi-provider — Claude by default, any LangChain-supported provider via env var
  • Supervisor pattern — LLM-driven routing as an alternative to manual graph wiring
  • Retry / verification loops — built-in worker → verifier → retry pattern
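The worker → verifier → retry pattern in the last bullet boils down to a routing decision after each verification step. A plain-Python sketch of that decision (node names and state keys are illustrative, not arccrew's API):

```python
MAX_RETRIES = 3

def verifier_route(state: dict) -> str:
    """Pick the next node after verification: retry the worker or stop.

    `verified` and `retries` are illustrative state keys.
    """
    if state.get("verified"):
        return "done"
    if state.get("retries", 0) >= MAX_RETRIES:
        return "done"   # retry budget exhausted; stop anyway
    return "worker"     # send the task back for another attempt
```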

Prompt layers

Every agent's system prompt is assembled in this order:

| Layer  | File               | Who controls it                                   |
| ------ | ------------------ | ------------------------------------------------- |
| Base   | bundled in arccrew | library — universal agent rules                   |
| Global | prompts/global.md  | you — project-wide rules (tone, domain, language) |
| Agent  | prompts/{agent}.md | you — role, tools, output schema                  |
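As an illustration, the per-agent layer for the researcher might look like this. The file contents below are invented; only the path prompts/researcher.md comes from the table:

```markdown
<!-- prompts/researcher.md -->
## Role
You are a research agent. Gather facts on the given topic using web_search.

## Output schema
Return a JSON object: {"findings": [...], "sources": [...]}
```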

Built-in tools

Every agent has access to these tools out of the box:

from arccrew.tools import get_research_tools, create_workspace_tools, get_mcp_tools

# Research tools — web_search (DuckDuckGo)
tools = get_research_tools()

# Workspace tools — write_file, read_file, list_files, run_shell
from pathlib import Path
tools = create_workspace_tools(Path("workspace"))

# MCP tools — connect any MCP server (GitHub, Notion, Slack, custom…)
# Requires: pip install "arccrew[mcp]"
import os
tools = await get_mcp_tools({
    "github": {
        "transport": "sse",
        "url": "https://api.githubcopilot.com/mcp/",
        "headers": {"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN')}"},
    }
})

Combine freely in any agent:

result = await self.run_react(
    task=task,
    tools=(
        get_research_tools()
        + create_workspace_tools(Path("workspace"))
        + await get_mcp_tools({"github": GITHUB_MCP})
    )
)

Adding your own tools

Create a file in tools/ named after your domain:

# tools/calendar_tools.py
from langchain_core.tools import tool

@tool
async def get_availability(date: str) -> str:
    """Check calendar availability for a given date (YYYY-MM-DD).

    Use for: tasks that require checking free/busy slots.
    Do NOT use for: general research (use web_search instead).

    Args:
        date: Date to check in YYYY-MM-DD format.

    Returns:
        Available time slots as a formatted string.
    """
    try:
        # your implementation here
        return f"Available slots for {date}: 9am, 2pm, 4pm"
    except Exception as e:
        return f"ERROR: {e}"

def get_calendar_tools() -> list:
    return [get_availability]

Combine with arccrew built-ins in any agent:

from arccrew.tools import get_research_tools
from tools.calendar_tools import get_calendar_tools

result = await self.run_react(
    task=task,
    tools=get_research_tools() + get_calendar_tools(),
)

Use /add-tool in Claude Code to generate a new tool from a description.


Environment variables

All configuration lives in .env. Copy .env.example after scaffolding:

# LLM provider (pick one)
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here

# Model (default for all agents)
AGENT_MODEL=anthropic/claude-haiku-4-5-20251001

# Per-agent overrides
RESEARCHER_MODEL=anthropic/claude-sonnet-4-6
RESEARCHER_MAX_ROUNDS=10
WRITER_MAX_ROUNDS=5

# Workspace (where agents write files)
WORKSPACE_DIR=workspace

# API server
API_HOST=0.0.0.0
API_PORT=8000
API_AUTH_ENABLED=false   # set true + API_SECRET_KEY for production

# Observability (optional)
LANGSMITH_API_KEY=your_key_here
LANGSMITH_PROJECT=my-project
LANGSMITH_TRACING=true
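The per-agent overrides above imply a simple precedence: an agent-specific variable wins over AGENT_MODEL, which wins over the library default. A sketch of how such a lookup could work (illustrative fallback logic, not arccrew's internal code):

```python
import os

def resolve_model(agent_name: str,
                  default: str = "anthropic/claude-haiku-4-5-20251001") -> str:
    """Return {AGENT}_MODEL if set, else AGENT_MODEL, else the default."""
    return (
        os.getenv(f"{agent_name.upper()}_MODEL")
        or os.getenv("AGENT_MODEL")
        or default
    )
```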

Skills

Every project created with arccrew init gets skills pre-installed in .claude/commands/ and detailed references in skills/. They work natively in Claude Code as slash commands.

After upgrading arccrew, run arccrew sync-skills to get new and updated skills without touching your project files.

Core

Skill What it does
/build-agents Generate a full pipeline from a description
/add-agent Add a single agent to an existing pipeline
/add-tool Add a tool to an agent
/add-state-field Add a custom state field with the right reducer
/add-prompt Add or update an agent prompt

Patterns

Skill What it does
/add-retry-loop Add retry + verification loop
/add-review-gate Add a human-in-the-loop review gate
/add-supervisor Add LLM-driven supervisor orchestration

Infrastructure

Skill What it does
/add-api-endpoint Add a REST endpoint to the FastAPI server
/add-mcp-pipeline Register a pipeline as an MCP tool

Configuration

Skill What it does
/configure-claude Configure Claude as LLM provider
/configure-openai Configure OpenAI as LLM provider
/configure-gemini Configure Gemini as LLM provider
/switch-provider Switch between LLM providers
/configure-mcp Connect external MCP servers as agent tools

Quality

Skill What it does
/enable-langsmith Set up LangSmith tracing
/enable-otel Set up OpenTelemetry (Grafana, Datadog, Jaeger…)
/debug-pipeline Diagnose pipeline errors
/write-tests Generate tests for agents and tools

CLI

arccrew init <name>                    # scaffold a new project
arccrew check                          # verify config and dependencies
arccrew sync-skills                    # update skills after upgrading
arccrew run "describe your task"       # run a pipeline from the terminal
arccrew run "task" --pipeline my_name  # run a specific pipeline
arccrew visualize                      # print Mermaid diagram of your graph
arccrew visualize -o graph.png         # save diagram as PNG (requires pyppeteer)
arccrew visualize -o graph.mmd         # save raw Mermaid code
arccrew serve                          # start FastAPI server
arccrew serve-mcp                      # start MCP server (stdio)

Local development

To work on arccrew itself and test changes immediately:

git clone https://github.com/amonrreal/arccrew
cd arccrew

# Install in editable mode — changes take effect without reinstalling
pip install -e ".[dev]"
cp .env.example .env
# Set ANTHROPIC_API_KEY (or another provider) in .env

# Verify the CLI uses your local version
arccrew --help

# Run the built-in example
python -m examples.researcher_writer.pipeline
python -m examples.researcher_writer.pipeline --supervisor

# Scaffold a test project and try the new commands
cd /tmp
arccrew init test-project
cd test-project
cp .env.example .env
# edit .env → set API key

arccrew check
# → verifies config; expects a pipeline.py in the current directory ✓

# Run tests
cd /path/to/arccrew
pytest
pytest tests/test_base_agent.py -v
pytest -k "test_name"

MCP server

Expose your pipelines as tools in any MCP-compatible client (Claude Desktop, Claude Code):

arccrew serve-mcp   # local stdio transport
arccrew serve       # also serves /mcp for remote HTTP transport

Remote connection (after deploying your project):

{
  "mcpServers": {
    "my-project": {
      "url": "https://your-deployment-url/mcp"
    }
  }
}
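For local use, the stdio transport is configured with a command instead of a URL. A typical MCP client entry might look like this (the exact config format depends on the client; the `command`/`args` shape shown is an assumption based on common MCP client configs):

```json
{
  "mcpServers": {
    "my-project": {
      "command": "arccrew",
      "args": ["serve-mcp"]
    }
  }
}
```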

Utility helpers

from arccrew.utils.helpers import truncate_text, extract_json_safe, slugify

# Truncate long LLM output before storing
short = truncate_text(long_response, max_chars=4000)

# Safely extract JSON from any LLM response
data = extract_json_safe(response, fallback={})

# Generate URL-safe slugs
slug = slugify("My Agent Result — 2026")  # "my-agent-result-2026"

Example

See examples/researcher_writer/ for a complete working pipeline with two agents (Researcher + Writer) in both manual graph and supervisor patterns.

# Manual graph (BaseAgent subclasses)
python -m examples.researcher_writer.pipeline

# Supervisor pattern
python -m examples.researcher_writer.pipeline --supervisor

License

ArcCrew is licensed under the MIT License.
