
ArcCrew


Multi-agent LangGraph pipelines — scaffold, build, and ship in minutes.

ArcCrew is a Python framework for building production-ready multi-agent pipelines on LangGraph. It ships with a CLI, a 3-layer prompt system, a FastAPI server, an MCP server, and AI coding skills that let you generate entire pipelines from a description.


Install

pip install arccrew

Quick start

# 1. Scaffold a new project
arccrew init my-project
cd my-project

# 2. Set your API key
cp .env.example .env
# edit .env → set ANTHROPIC_API_KEY

# 3. Verify everything is configured
arccrew check

# 4. Open in Claude Code and run:
# /build-agents  ← describe your pipeline, get all files generated

Or build manually:

# agents/researcher.py
from arccrew import BaseAgent, track_timing
from arccrew.tools import get_research_tools
from langgraph.types import Command
from pathlib import Path

class ResearcherAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="researcher", prompts_dir=Path("prompts"))

    @property
    def system_prompt(self) -> str:
        return self.get_prompt_manager().assemble_prompt("researcher")

    @track_timing
    async def execute(self, state: dict) -> Command:
        task = state["tasks"][state["current_task_index"]]["description"]
        result = await self.run_react(task=task, tools=get_research_tools())
        return Command(goto="writer", update={"context": self.extract_json(result)})

# pipeline.py
from arccrew import create_pipeline, PipelineState
from arccrew.api.deps import pipeline_registry
from arccrew.mcp_server import register_pipeline
from agents.researcher import ResearcherAgent

def create_my_pipeline():
    researcher = ResearcherAgent()
    return create_pipeline(
        state_class=PipelineState,
        nodes={"researcher": lambda s: researcher.execute(s)},
        flow=["researcher"],
    )

pipeline_registry.register("my_pipeline", create_my_pipeline)
register_pipeline("my_pipeline", create_my_pipeline)

Then serve it:

arccrew serve       # REST API on :8000
arccrew serve-mcp   # MCP server for Claude Desktop and other MCP clients

Features

  • 3-layer prompt system — library base + your project globals + per-agent role
  • 18 AI coding skills — pre-installed in every scaffolded project, work natively in Claude Code
  • Built-in tools — web search, file management, shell execution, ready to plug into any agent
  • FastAPI server — REST + SSE streaming + WebSocket out of the box
  • MCP server — expose your pipelines as tools in any MCP-compatible client
  • Multi-provider — Claude by default, any LangChain-supported provider via env var
  • Supervisor pattern — LLM-driven routing as an alternative to manual graph wiring
  • Retry / verification loops — built-in worker → verifier → retry pattern
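
The worker → verifier → retry pattern in the last bullet can be sketched framework-free. This is an illustration of the control flow only, not ArcCrew's API; `run_with_verification`, `worker`, and `verifier` are hypothetical names:

```python
from typing import Callable

def run_with_verification(
    worker: Callable[[str], str],
    verifier: Callable[[str], bool],
    task: str,
    max_rounds: int = 3,
) -> str:
    """Run worker, check its output with verifier, retry on rejection."""
    result = ""
    for _ in range(max_rounds):
        result = worker(task)      # produce a draft
        if verifier(result):       # verifier approves, we are done
            return result
    return result                  # best effort after max_rounds

# Toy usage: the worker only succeeds on its second attempt
drafts = iter(["too short", "a sufficiently detailed answer"])
out = run_with_verification(lambda t: next(drafts), lambda r: len(r) > 20, "write")
```

In ArcCrew this loop lives inside the graph itself, with the worker and verifier as separate agent nodes.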

Prompt layers

Every agent's system prompt is assembled in this order:

Layer    File                  Who controls it
Base     bundled in arccrew    library — universal agent rules
Global   prompts/global.md     you — project-wide rules (tone, domain, language)
Agent    prompts/{agent}.md    you — role, tools, output schema
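
Conceptually the assembly is a concatenation of the three layers in that order. The sketch below is illustrative only; ArcCrew's prompt manager may differ in details, and `BASE_PROMPT` stands in for the layer bundled with the library:

```python
from pathlib import Path

BASE_PROMPT = "You are a helpful, tool-using agent."  # placeholder for the bundled base layer

def assemble_prompt(agent: str, prompts_dir: Path) -> str:
    """Stack base -> global -> agent layers, skipping missing files."""
    layers = [BASE_PROMPT]
    for name in ("global", agent):
        path = prompts_dir / f"{name}.md"
        if path.exists():
            layers.append(path.read_text())
    return "\n\n".join(layers)
```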

Built-in tools

Every agent has access to these tools out of the box:

from arccrew.tools import get_research_tools, create_workspace_tools

# Research tools — for agents that need to look things up
tools = get_research_tools()
# Includes: web_search (DuckDuckGo)

# Workspace tools — for agents that read/write files or run commands
from pathlib import Path
tools = create_workspace_tools(Path("workspace"))
# Includes: write_file, read_file, list_files, run_shell

Use them in any agent:

result = await self.run_react(task=task, tools=get_research_tools())
result = await self.run_react(task=task, tools=create_workspace_tools(Path("workspace")))

# Combine both
result = await self.run_react(
    task=task,
    tools=get_research_tools() + create_workspace_tools(Path("workspace"))
)

Adding your own tools

Create a file in tools/ named after your domain:

# tools/calendar_tools.py
from langchain_core.tools import tool

@tool
async def get_availability(date: str) -> str:
    """Check calendar availability for a given date (YYYY-MM-DD).

    Use for: tasks that require checking free/busy slots.
    Do NOT use for: general research (use web_search instead).

    Args:
        date: Date to check in YYYY-MM-DD format.

    Returns:
        Available time slots as a formatted string.
    """
    try:
        # your implementation here
        return f"Available slots for {date}: 9am, 2pm, 4pm"
    except Exception as e:
        return f"ERROR: {e}"

def get_calendar_tools() -> list:
    return [get_availability]

Combine with arccrew built-ins in any agent:

from arccrew.tools import get_research_tools
from tools.calendar_tools import get_calendar_tools

result = await self.run_react(
    task=task,
    tools=get_research_tools() + get_calendar_tools(),
)

Use /add-tool in Claude Code to generate a new tool from a description.


Environment variables

All configuration lives in .env. Copy .env.example after scaffolding:

# LLM provider (pick one)
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here

# Model (default for all agents)
AGENT_MODEL=anthropic/claude-haiku-4-5-20251001

# Per-agent overrides
RESEARCHER_MODEL=anthropic/claude-sonnet-4-6
RESEARCHER_MAX_ROUNDS=10
WRITER_MAX_ROUNDS=5

# Workspace (where agents write files)
WORKSPACE_DIR=workspace

# API server
API_HOST=0.0.0.0
API_PORT=8000
API_AUTH_ENABLED=false   # set true + API_SECRET_KEY for production

# Observability (optional)
LANGSMITH_API_KEY=your_key_here
LANGSMITH_PROJECT=my-project
LANGSMITH_TRACING=true
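
The per-agent overrides follow a fall-back pattern you can reproduce with plain `os.environ`. The exact lookup ArcCrew performs is an assumption here; `resolve_model` is a hypothetical helper illustrating the precedence `{AGENT}_MODEL` → `AGENT_MODEL` → built-in default:

```python
import os

def resolve_model(agent: str, default: str = "anthropic/claude-haiku-4-5-20251001") -> str:
    """Prefer the per-agent override, then the global setting, then the default."""
    return (
        os.environ.get(f"{agent.upper()}_MODEL")
        or os.environ.get("AGENT_MODEL")
        or default
    )

os.environ["AGENT_MODEL"] = "anthropic/claude-haiku-4-5-20251001"
os.environ["RESEARCHER_MODEL"] = "anthropic/claude-sonnet-4-6"
```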

Skills

Every project created with arccrew init gets 18 skills pre-installed in .claude/commands/ and detailed references in skills/. They work natively in Claude Code as slash commands.

After upgrading arccrew, run arccrew sync-skills to get new and updated skills without touching your project files.

Core

Skill What it does
/build-agents Generate a full pipeline from a description
/add-agent Add a single agent to an existing pipeline
/add-tool Add a tool to an agent
/add-state-field Add a custom state field with the right reducer
/add-prompt Add or update an agent prompt

Patterns

Skill What it does
/add-retry-loop Add retry + verification loop
/add-review-gate Add a human-in-the-loop review gate
/add-supervisor Add LLM-driven supervisor orchestration

Infrastructure

Skill What it does
/add-api-endpoint Add a REST endpoint to the FastAPI server
/add-mcp-pipeline Register a pipeline as an MCP tool

Configuration

Skill What it does
/configure-claude Configure Claude as LLM provider
/configure-openai Configure OpenAI as LLM provider
/configure-gemini Configure Gemini as LLM provider
/switch-provider Switch between LLM providers

Quality

Skill What it does
/enable-langsmith Set up LangSmith tracing
/enable-otel Set up OpenTelemetry (Grafana, Datadog, Jaeger…)
/debug-pipeline Diagnose pipeline errors
/write-tests Generate tests for agents and tools

CLI

arccrew init <name>      # scaffold a new project
arccrew check            # verify config and dependencies
arccrew sync-skills      # update skills after upgrading
arccrew serve            # start FastAPI server
arccrew serve-mcp        # start MCP server (stdio)

MCP server

Expose your pipelines as tools in any MCP-compatible client (Claude Desktop, Claude Code):

arccrew serve-mcp   # local stdio transport
arccrew serve       # also serves /mcp for remote HTTP transport

Remote connection (after deploying your project):

{
  "mcpServers": {
    "my-project": {
      "url": "https://your-deployment-url/mcp"
    }
  }
}
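
For the local stdio transport, MCP clients are typically configured with a command instead of a URL. The entry name below is illustrative:

```json
{
  "mcpServers": {
    "my-project": {
      "command": "arccrew",
      "args": ["serve-mcp"]
    }
  }
}
```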

Utility helpers

from arccrew.utils.helpers import truncate_text, extract_json_safe, slugify

# Truncate long LLM output before storing
short = truncate_text(long_response, max_chars=4000)

# Safely extract JSON from any LLM response
data = extract_json_safe(response, fallback={})

# Generate URL-safe slugs
slug = slugify("My Agent Result — 2026")  # "my-agent-result-2026"
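
extract_json_safe matters because LLMs often wrap JSON in prose or code fences. A framework-free version of the same idea (not ArcCrew's implementation; `extract_json_loose` is a hypothetical name) looks like:

```python
import json
import re

def extract_json_loose(text: str, fallback=None):
    """Pull the first JSON object out of an LLM response, fences and prose included."""
    match = re.search(r"\{.*\}", text, re.DOTALL)  # grab the outermost {...} span
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return fallback

response = 'Sure! Here is the data:\n```json\n{"topic": "agents", "score": 3}\n```'
data = extract_json_loose(response, fallback={})
```

The fallback means downstream agents always receive a dict, never an exception, which keeps the pipeline moving on malformed output.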

Example

See examples/researcher_writer/ for a complete working pipeline with two agents (Researcher + Writer) in both manual graph and supervisor patterns.

# Manual graph (BaseAgent subclasses)
python -m examples.researcher_writer.pipeline

# Supervisor pattern
python -m examples.researcher_writer.pipeline --supervisor

License

ArcCrew is licensed under the MIT License.
