
🦞 ClawAgents

A lean, full-stack agentic AI framework in ~2,500 LOC



ClawAgents is a production-ready agentic framework that gives LLMs the ability to read, write, and execute code, with built-in planning, memory, sandboxing, and a gateway server. It supports OpenAI GPT-5, Google Gemini, and Anthropic Claude out of the box, with a pluggable provider architecture for any LLM.

Built by extracting and unifying the best architectural patterns from OpenClaw (~5,800 files) and DeepAgents (~1,400 LOC core), ClawAgents delivers the same power at a fraction of the complexity.

Installation

pip install clawagents              # Core (OpenAI only)
pip install clawagents[gemini]      # + Google Gemini support
pip install clawagents[anthropic]   # + Anthropic Claude support
pip install clawagents[all]         # All providers + tiktoken

Version 6.2.1: latest stable release (April 2026). Hardens web_fetch against redirect-based SSRF, fixes local pytest source resolution, and adds parity smoke coverage for the TypeScript sibling. See Changelog.


30-Second Quick Start

The fastest way to get going. It scaffolds a .env, a run_agent.py starter script, and an AGENTS.md memory file:

pip install clawagents
cd ~/my-project         # any project directory
clawagents --init       # creates .env, run_agent.py, AGENTS.md

Then edit .env with your API key and run:

python run_agent.py

That's it. The generated run_agent.py includes commented-out examples for every provider (OpenAI, Gemini, Azure, Ollama, vLLM).

Where does .env go?

ClawAgents loads .env from the directory you run the command from (your current working directory). Different projects can have different configurations.

~/my-project/
├── .env              ← ClawAgents reads this when you run from ~/my-project/
├── run_agent.py
├── AGENTS.md
└── src/

Four ways to configure (in priority order):

  1. create_claw_agent() parameters: highest priority, overrides everything
  2. Shell environment variables: export OPENAI_API_KEY=sk-... in ~/.zshrc (works globally)
  3. CLAWAGENTS_ENV_FILE: set this env var to point to an explicit .env file path (useful for CI/Docker/multi-project; see the example after this list)
  4. .env file: project-level config, loaded from cwd/.env or cwd/../.env
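
Option 3 is handy in CI or Docker, where the working directory may not carry its own .env. The path below is illustrative:

CLAWAGENTS_ENV_FILE=/etc/clawagents/ci.env clawagents --task "run the test suite"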

A ready-to-use template is included in the repo:

cp .env.example .env   # then fill in your API key

Or run clawagents --init to generate one interactively.

CLI One-Liner

clawagents --task "List all Python files and summarize the project"

Minimal Python Code

import asyncio
from clawagents import create_claw_agent

async def main():
    agent = create_claw_agent("gpt-5-mini")  # or "gemini-3-flash", "llama3.1", etc.
    result = await agent.invoke("List all Python files in src/")
    print(result.result)

asyncio.run(main())

Examples

See the examples/ directory for ready-to-run scripts:

File Provider
01_openai.py OpenAI (GPT-5, GPT-4o)
02_gemini.py Google Gemini
03_azure.py Azure OpenAI
04_local_ollama.py Ollama (local)
05_local_vllm.py vLLM (local)
06_bedrock.py AWS Bedrock (via gateway)
07_with_custom_tools.py Custom tools
08_compare_samples.py Multi-sample comparison

Configuration

1. Configure your environment

Create a .env file (or run clawagents --init to generate one):

PROVIDER=gemini                    # or "openai"
GEMINI_API_KEY=AIza...             # Your Gemini API key
GEMINI_MODEL=gemini-3-flash-preview
STREAMING=1
CONTEXT_WINDOW=1000000
MAX_TOKENS=8192
TEMPERATURE=0                      # Model-specific overrides apply (see below)

# Optional: RL-inspired agent improvements
CLAW_TRAJECTORY=1                  # Enable trajectory logging + scoring
CLAW_RETHINK=1                     # Enable consecutive-failure detection
CLAW_LEARN=1                       # Enable PTRL (lessons from past runs)

OpenAI configuration

PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-5-nano
STREAMING=1
CONTEXT_WINDOW=1000000
MAX_TOKENS=8192
TEMPERATURE=0                      # 0 for deterministic output
CLAW_TRAJECTORY=1
CLAW_RETHINK=1
CLAW_LEARN=1

2. One-line agent

from clawagents import create_claw_agent

agent = create_claw_agent("gemini-3-flash")
result = await agent.invoke("List all Python files in src/")
print(result.result)

3. With custom instructions

agent = create_claw_agent(
    "gpt-5",
    instruction="You are a senior code reviewer. Be thorough and concise."
)
result = await agent.invoke("Review this codebase and suggest improvements")

4. With trajectory logging & rethink

agent = create_claw_agent(
    "gpt-5-mini",
    trajectory=True,   # logs every turn + scores the run
    rethink=True,       # auto-injects "rethink" after 3 consecutive failures
)
result = await agent.invoke("Refactor the auth module and add tests")
# Run summary written to .clawagents/trajectories/runs.jsonl

5. With PTRL (Prompt-Time Reinforcement Learning)

agent = create_claw_agent(
    "gpt-5-mini",
    learn=True,    # enables all 3 PTRL layers (implies trajectory=True)
    rethink=True,  # enhanced rethink uses past lessons
)
result = await agent.invoke("Build the data pipeline")
# After the run: lessons extracted and saved to .clawagents/lessons.md
# Next run: lessons injected into system prompt automatically

6. With Advisor Model (smart model guides cheap model)

# GPT-5.4-nano executes, GPT-5.4 advises 2-3 times per task
agent = create_claw_agent(
    "gpt-5.4-nano",
    advisor_model="gpt-5.4",
)

# Cross-provider: Haiku executes, GPT-5.4 advises
agent = create_claw_agent(
    "claude-haiku-4-5",
    advisor_model="gpt-5.4",
    advisor_api_key="sk-...",
)

The advisor is consulted at three points: (1) after initial orientation, before committing to an approach, (2) when stuck (consecutive failures trigger rethink), and (3) before declaring the task complete. Set ADVISOR_MODEL in .env or pass advisor_model in code.
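
The same pairing can be configured entirely from .env. ADVISOR_MAX_CALLS appears in the v6.1.0 changelog; its exact semantics (a per-task cap, judging by the name) are an assumption here:

ADVISOR_MODEL=gpt-5.4
ADVISOR_API_KEY=sk-...       # only if the advisor needs a different provider's key
ADVISOR_MAX_CALLS=3          # assumed: caps advisor consultations per task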

7. Multi-Sample Comparison (GRPO-inspired)

agent = create_claw_agent("gpt-5-mini", rethink=True)
# Run the task 3 times, pick the best based on objective scoring
result = await agent.compare("Fix the bug in app.py", n_samples=3)
print(result["best_result"])   # best answer
print(result["best_score"])    # objective score
print(result["all_scores"])    # all samples with scores

8. Azure OpenAI

agent = create_claw_agent(
    "gpt-4o",                    # your Azure deployment name
    api_key="your-azure-key",
    base_url="https://myresource.openai.azure.com/",
    api_version="2024-12-01-preview",
    learn=True,
)
result = await agent.invoke("Analyze the codebase")

Or via .env:

PROVIDER=openai
OPENAI_API_KEY=your-azure-key
OPENAI_MODEL=gpt-4o
OPENAI_BASE_URL=https://myresource.openai.azure.com/
OPENAI_API_VERSION=2024-12-01-preview

9. AWS Bedrock (via OpenAI-compatible gateway)

Use Bedrock Access Gateway or LiteLLM proxy to expose Bedrock models as an OpenAI-compatible API:

agent = create_claw_agent(
    "anthropic.claude-3-sonnet-20240229-v1:0",
    base_url="http://localhost:8080/v1",
    api_key="bedrock",           # gateway handles AWS auth
)

Or via .env:

OPENAI_API_KEY=bedrock
OPENAI_MODEL=anthropic.claude-3-sonnet-20240229-v1:0
OPENAI_BASE_URL=http://localhost:8080/v1

10. Local Models (Ollama / vLLM / LM Studio)

Any OpenAI-compatible local server works out of the box:

# Ollama (default port 11434)
agent = create_claw_agent("llama3.1", base_url="http://localhost:11434/v1")

# vLLM
agent = create_claw_agent("Qwen/Qwen3-8B", base_url="http://localhost:8000/v1")

# LM Studio
agent = create_claw_agent("local-model", base_url="http://localhost:1234/v1")

Or via .env:

# No API key needed for local models; just omit OPENAI_API_KEY
OPENAI_MODEL=llama3.1
OPENAI_BASE_URL=http://localhost:11434/v1

Tip: For local models that emit <think>...</think> tokens (Qwen3, DeepSeek), thinking content is automatically detected, stripped from output, and preserved in trajectory records (Feature H).

11. CLI

# Scaffold a project (generates .env, run_agent.py, AGENTS.md)
clawagents --init

# Check your configuration
clawagents --doctor

# Run a task directly
clawagents --task "Find all TODO comments in the codebase"

# Inspect past run trajectories
clawagents --trajectory        # last run
clawagents --trajectory 5      # last 5 runs

# Start the gateway server
clawagents --serve --port 3000

# Show all options
clawagents --help

Typical First-Time Flow

pip install clawagents           # 1. Install
clawagents --init                # 2. Scaffold .env, run_agent.py, AGENTS.md
# edit .env with your API key    # 3. Configure
clawagents --doctor              # 4. Verify setup
clawagents --task "hello world"  # 5. Run your first task
python run_agent.py              # 6. Or use the generated script

CLI Reference

Command Description
clawagents --init Scaffold a starter project: .env (config template), run_agent.py (starter script with 5 provider options), AGENTS.md (memory file). Skips existing files.
clawagents --doctor Check configuration health: .env discovery, API keys, active model, LLM settings, PTRL flags, local endpoint reachability, trajectory history, AGENTS.md presence.
clawagents --task "..." Run a single task. Prints a startup banner (provider=X model=Y env=Z ptrl=...), executes the agent, prints the result to stdout.
clawagents --trajectory [N] Inspect the last N run summaries (default: 1). Shows run ID, model, task, duration, turns, tool calls, score, quality, failure breakdown, verified score, and judge verdict. Requires CLAW_TRAJECTORY=1.
clawagents --serve [--port N] Start the HTTP gateway server (default port 3000). Endpoints: POST /chat, POST /chat/stream (SSE), GET /queue, GET /health.
clawagents --sessions List saved sessions (requires CLAW_FEATURE_SESSION_PERSISTENCE=1). Shows session ID, turn count, status, and task.
clawagents --resume [ID|latest] Resume a saved session. Loads messages from JSONL and continues the conversation. Defaults to latest.
clawagents --help Show all options with examples.
clawagents --advisor MODEL Pair a stronger model for strategic guidance (e.g. --advisor gpt-5.4).

๐Ÿ† Performance: ClawAgents vs Traditional Frameworks

ClawAgents outperforms traditional multi-layer agentic frameworks through architectural simplicity. Here's how it stacks up against DeepAgents (LangGraph/LangChain-based) in head-to-head benchmarks, run on v5.5.

Benchmark Results (February 2026)

TypeScript: 5 tasks × 2 models × 2 frameworks (20/20 ✅)

Framework Gemini-2.5-flash GPT-5-mini
ClawAgents v5.5 2.3s avg · 1.4 tools 13.6s avg · 1.4 tools
DeepAgents 2.5s avg · 1.8 tools 15.7s avg · 2.4 tools

Per-Task Breakdown

Task ClawAgents (Gemini) DeepAgents (Gemini) ClawAgents (GPT-5) DeepAgents (GPT-5)
File Listing 3.7s, 1 tool 1.9s, 1 tool 8.9s, 1 tool 8.4s, 1 tool
Read & Analyze 1.6s, 1 tool 3.6s, 3 tools 5.4s, 1 tool 13.0s, 2 tools
Write File 2.1s, 2 tools 2.6s, 2 tools 5.2s, 2 tools 7.5s, 2 tools
Multi-Step 3.4s, 3 tools 3.7s, 3 tools 46.2s, 3 tools 46.9s, 7 tools
Reasoning 0.7s, 0 tools 0.9s, 0 tools 2.3s, 0 tools 2.8s, 0 tools

Python: 18/20 completed (DeepAgents hung on GPT-5 multi_step)

Task ClawAgents (Gemini) DeepAgents (Gemini) ClawAgents (GPT-5) DeepAgents (GPT-5)
File Listing 2.8s, 1 tool 1.0s, 0 tools* 9.9s, 1 tool 3.4s, 1 tool
Read & Analyze 2.0s, 1 tool 9.8s, 4 tools 5.5s, 1 tool 8.4s, 3 tools
Write File 2.0s, 2 tools 1.0s, 0 tools* 5.0s, 2 tools 9.3s, 3 tools
Multi-Step 4.1s, 3 tools 0.9s, 0 tools* 16.0s, 3 tools ❌ hung >5min
Reasoning 0.7s, 0 tools 1.0s, 0 tools – –

* DeepAgents 0-tool results mean the model answered without using filesystem tools: faster, but lower-quality (unverified answers). ClawAgents consistently uses tools to verify answers.

Why ClawAgents Wins

Traditional Stack (DeepAgents):           ClawAgents:
┌─────────────────────────┐               ┌──────────────────┐
│  Your Code              │               │  Your Code       │
├─────────────────────────┤               ├──────────────────┤
│  LangGraph              │               │  ClawAgents      │
├─────────────────────────┤               │  (direct SDK)    │
│  LangChain              │               └────────┬─────────┘
├─────────────────────────┤                        │
│  ChatOpenAI / ChatGemini│                        ▼
├─────────────────────────┤               ┌──────────────────┐
│  Responses API          │               │  Responses API   │
└─────────────────────────┘               └──────────────────┘
        4 layers                                1 layer
Advantage Impact
Direct SDK calls (1 layer vs 4) Lower latency, fewer failure points
Working directory awareness Tools operate from CWD; DeepAgents has no CWD concept
Soft + hard loop detection Catches repetitive tool calls at 3 repeats, hard-stops at 6
Efficiency rules in system prompt ~30% reduction in redundant tool calls
Fewer tool calls overall 1.4 avg vs 1.8–2.4 (20–40% more efficient)
No OpenAI lock-in Native Gemini + OpenAI support with FallbackProvider chain

Feature Matrix

Feature ClawAgents v6.2 DeepAgents OpenClaw
Core
ReAct loop ✅ ✅ ✅
Tool loop detection (soft + hard + ping-pong) ✅ ❌ ✅
Circuit breaker (30 no-progress calls) ✅ ❌ ❌
Efficiency rules (system prompt) ✅ ❌ ❌
Adaptive token estimation (tiktoken) ✅ ❌ ❌
Model-aware context budgeting ✅ ❌ ❌
Fraction-based summarization triggers ✅ ✅ ❌
Tools
Pluggable sandbox backend ✅ ✅ ✅
In-memory VFS (testing) ✅ ❌ ❌
Cross-provider conformance tests ✅ ✅ ❌
Lazy tool registry (deferred imports) ✅ ❌ ❌
Tool result caching (LRU) ✅ ❌ ❌
JSON Schema param validation + coercion ✅ ❌ ❌
ComposeTool (deterministic pipelines) ✅ ❌ ❌
think tool (structured reasoning) ✅ ❌ ❌
LangChain tool adapter ✅ N/A ❌
Agents & Orchestration
Sub-agent delegation ✅ ✅ ✅
Subagent state isolation ✅ ✅ ❌
Coordinator/swarm mode ✅ ❌ ✅
Barrier-based request scheduling ✅ ❌ ❌
Planning / TodoList ✅ ✅ ❌
Providers & Resilience
Three-tier provider fallback + quarantine ✅ ❌ ❌
Native + text tool call repair ✅ ✅ ❌
Streaming with stall detection ✅ ❌ ✅
Truncated JSON repair + retry ✅ ❌ ❌
Model-specific temperature override ✅ ❌ ❌
Gemini 3 thought_signature support ✅ ❌ ❌
Thinking token preservation (<think>) ✅ ❌ ❌
Model control token stripping ✅ ❌ ✅
Memory & Context
Persistent memory (AGENTS.md) ✅ ✅ ✅
Auto-summarization + history offloading ✅ ✅ ✅
Pre-compact transcript archival ✅ ❌ ❌
Atomic file writes (crash-safe) ✅ ❌ ❌
Session persistence + resume ✅ ❌ ❌
Session heartbeat + auto-cleanup ✅ ❌ ❌
Background memory extraction ✅ ❌ ❌
Security & Hooks
Rich hook result model (block/redirect/inject) ✅ ✅ ✅
Credential proxy for sandboxed agents ✅ ❌ ✅
External shell hooks (pre/post tool + LLM) ✅ ❌ ✅
Declarative permission rules ✅ ❌ ❌
Tool access control (block/allow) ✅ ❌ ❌
Human-in-the-loop ✅ ✅ ✅
Skills
SKILL.md with constraint documents ✅ ✅ ✅
Skill eligibility gating (OS/bins/env) ✅ ✅ ❌
RL & Self-Improvement
Prompt-Time RL (PTRL): learn from past runs ✅ ❌ ❌
Trajectory logging + run scoring ✅ ❌ ❌
Consecutive-failure rethink ✅ ❌ ❌
Adaptive rethink threshold ✅ ❌ ❌
Deterministic verification (exit codes, tests) ✅ ❌ ❌
GRPO-inspired multi-sample comparison ✅ ❌ ❌
Task-type-aware verification ✅ ❌ ❌
LLM-as-Judge verification ✅ ❌ ❌
RFT-ready transition export ✅ ❌ ❌
Infrastructure
Gateway HTTP server + SSE ✅ ❌ ✅
WebSocket gateway ✅ ❌ ✅
Multi-channel messaging (Telegram, WhatsApp, Signal) ✅ ❌ ✅
Per-session message serialization ✅ ❌ ✅
Error taxonomy + recovery recipes ✅ ❌ ❌
Prompt cache boundary (Anthropic) ✅ ✅ ❌
Lane-based command queue ✅ ❌ ✅

Architecture

Core Components

clawagents/
├── agent.py               # ClawAgent class, create_claw_agent factory
├── __main__.py            # CLI entrypoint (--init, --doctor, --task, --serve, --trajectory)
├── config/
│   ├── config.py          # EngineConfig, .env discovery, model resolution
│   └── features.py        # 15 feature flags (CLAW_FEATURE_* env vars)
├── providers/
│   ├── llm.py             # LLMProvider ABC + OpenAI/Gemini/Anthropic implementations
│   └── fallback.py        # FallbackProvider: 3-tier failover + quarantine (v6.0)
├── tools/
│   ├── registry.py        # ToolRegistry, LazyTool, parallel execution, LRU cache (v6.0)
│   ├── filesystem.py      # ls, read_file, write_file, edit_file, grep, glob
│   ├── advanced_fs.py     # tree, diff, insert_lines
│   ├── exec.py            # Shell command execution with dangerous command blocking
│   ├── subagent.py        # Sub-agent delegation with state isolation (v6.0)
│   ├── skills.py          # SKILL.md loading with constraint documents (v6.0)
│   ├── think.py           # Structured reasoning (no side effects)
│   ├── web.py             # URL fetching with HTML cleanup
│   ├── todolist.py        # write_todos, update_todo
│   ├── compose.py         # ComposeTool: deterministic multi-tool pipelines
│   ├── interactive.py     # ask_user (stdin-based)
│   ├── cache.py           # ResultCacheManager (SHA-256, TTL-based)
│   ├── validate.py        # JSON Schema param validation + lenient coercion
│   └── permissions.py     # Declarative permission rules (glob-based)
├── graph/
│   ├── agent_loop.py      # Core ReAct loop, HookResult, context management (v6.0)
│   ├── coordinator.py     # Coordinator/swarm orchestration mode
│   └── forked_agent.py    # Background forked agent pattern
├── sandbox/
│   ├── backend.py         # SandboxBackend protocol (15+ methods)
│   ├── local.py           # LocalBackend (pathlib + asyncio)
│   ├── memory.py          # InMemoryBackend (VFS for testing)
│   └── credential_proxy.py # Credential proxy for sandboxed agents (v6.0)
├── trajectory/            # RL-inspired run analysis
│   ├── recorder.py        # TrajectoryRecorder, scoring, quality grading
│   ├── lessons.py         # PTRL: post-run self-analysis + lesson injection
│   ├── verifier.py        # Deterministic verification, task-type detection
│   ├── compare.py         # GRPO-inspired multi-sample comparison
│   ├── judge.py           # LLM-as-Judge verification
│   └── background_memory.py # Continuous memory extraction
├── session/
│   ├── persistence.py     # Append-only JSONL session events
│   └── heartbeat.py       # Session heartbeat + auto-cleanup (v6.0)
├── memory/                # AGENTS.md discovery + LLM compaction
├── channels/              # Multi-channel messaging (Telegram, WhatsApp, Signal)
├── hooks/                 # External shell hook system
├── errors/                # Error taxonomy + recovery recipes
├── gateway/               # HTTP + WebSocket gateway server
├── process/               # Lane-based command queue with barriers (v6.0)
├── utils/                 # Atomic file writes (v6.0)
└── logging/               # Structured diagnostic logging

Built-in Tools

Every agent includes these; no setup needed:

Tool Description
ls List directory with size + modified time
read_file Read file with line numbers + pagination
write_file Write/create file (auto-creates directories)
edit_file Replace text with pattern matching
grep Search: single file or recursive with glob filter
glob Find files by pattern (**/*.py)
execute Shell command execution
tree Recursive directory tree with smart ignoring
diff Unified diff between two files
insert_lines Precise line-level insertion
think Structured reasoning without side effects
web_fetch URL fetching with HTML stripping (50KB cap)
write_todos Plan tasks as a checklist
update_todo Mark plan items complete
task Delegate to a sub-agent with isolated context
ask_user Interactive stdin-based user input
use_skill Load a skill's instructions (when skills exist)

Tool Examples

📂 Filesystem: ls, read_file, write_file, edit_file

The agent calls tools by emitting JSON blocks. Here's what happens under the hood when you ask the agent to work with files:

# The agent autonomously emits tool calls like:

# List a directory
{"tool": "ls", "args": {"path": "src/"}}
# → Returns:  drwxr-xr-x  4.0 KB  2026-02-24  components/
#             -rw-r--r--  1.2 KB  2026-02-24  main.py

# Read a file with pagination
{"tool": "read_file", "args": {"path": "src/main.py", "offset": 0, "limit": 50}}
# → Returns:  1 | import asyncio
#             2 | from clawagents import create_claw_agent
#             ...

# Write a new file (parent directories auto-created)
{"tool": "write_file", "args": {"path": "src/utils/helpers.py", "content": "def greet(name):\n    return f'Hello, {name}!'"}}
# → Returns:  ✅ Wrote 45 bytes to src/utils/helpers.py

# Edit an existing file by pattern match
{"tool": "edit_file", "args": {
    "path": "src/main.py",
    "old": "print('hello')",
    "new": "print('Hello, World!')"
}}
# → Returns:  ✅ 1 replacement made in src/main.py

🔍 Search: grep, glob

# Recursive grep across all Python files
{"tool": "grep", "args": {"pattern": "TODO", "path": "src/", "include": "*.py"}}
# → Returns:  src/agent.py:42:  # TODO: add retry logic
#             src/tools/web.py:15:  # TODO: handle redirects

# Single-file search
{"tool": "grep", "args": {"pattern": "class.*Tool", "path": "src/tools/registry.py"}}
# → Returns:  15: class ToolResult:
#             24: class Tool(Protocol):

# Find files by pattern
{"tool": "glob", "args": {"pattern": "**/*.md", "path": "."}}
# → Returns:  ./README.md (15.3 KB)
#             ./docs/ARCHITECTURE.md (4.1 KB)
#             ./AGENTS.md (892 B)

⚡ Shell Execution

# Run any shell command
{"tool": "execute", "args": {"command": "python -m pytest tests/ -v"}}
# → Returns full stdout/stderr with exit code

# With custom timeout (in milliseconds)
{"tool": "execute", "args": {"command": "pip install requests", "timeout": 60000}}

# Dangerous commands are auto-blocked
{"tool": "execute", "args": {"command": "rm -rf /"}}
# → Error: Blocked potentially destructive command

🧠 Think: structured reasoning

# The agent can reason without side effects
{"tool": "think", "args": {
    "thought": "The user wants me to refactor the database layer. Let me plan: 1) Read the current schema, 2) Identify coupled components, 3) Extract a repository pattern, 4) Update tests."
}}
# → [Thought recorded]: no files touched, no commands run

This reduces unnecessary tool calls by giving the agent a structured space to plan.

📋 Planning: write_todos, update_todo

# Create a structured plan
{"tool": "write_todos", "args": {
    "todos": ["Read the existing codebase", "Fix the auth bug", "Add unit tests", "Update docs"]
}}
# → ## Progress: 0/4 complete
#   0. [ ] Read the existing codebase
#   1. [ ] Fix the auth bug
#   2. [ ] Add unit tests
#   3. [ ] Update docs

# Mark steps complete as you go
{"tool": "update_todo", "args": {"index": 0}}
# → ## Progress: 1/4 complete
#   0. [x] Read the existing codebase
#   1. [ ] Fix the auth bug
#   ...

🤖 Sub-agent delegation

# Delegate to a fresh sub-agent with isolated context
{"tool": "task", "args": {
    "description": "Analyze all Python files in src/ and create a summary of the module structure",
    "max_iterations": 10
}}
# → [Sub-agent completed: 6 tool calls, 4 iterations]
#   The src/ directory contains 3 modules: ...

# With named specialized sub-agents (configured at creation)
{"tool": "task", "args": {
    "description": "Review this pull request for security issues",
    "agent": "security-reviewer"
}}

Registering named sub-agents:

from clawagents import create_claw_agent
from clawagents.tools.subagent import SubAgentSpec

agent = create_claw_agent(
    "gemini-3-flash",
    subagents=[
        SubAgentSpec(
            name="researcher",
            description="Deep research on a topic",
            system_prompt="You are a thorough researcher. Always cite sources.",
            max_iterations=15,
        ),
        SubAgentSpec(
            name="coder",
            description="Write and test code",
            system_prompt="You are a senior engineer. Write clean, tested code.",
            max_iterations=10,
        ),
    ],
)
๐ŸŒ Web Fetch
# Fetch and read a web page (HTML stripped automatically)
{"tool": "web_fetch", "args": {"url": "https://docs.python.org/3/library/asyncio.html"}}
# → [200] https://docs.python.org/3/library/asyncio.html
#   asyncio - Asynchronous I/O ...

# Fetch a JSON API
{"tool": "web_fetch", "args": {"url": "https://api.github.com/repos/python/cpython", "timeout": 10}}
# → Returns raw JSON response

Custom Tools

Create your own tools by implementing the Tool protocol:

from clawagents import create_claw_agent
from clawagents.tools.registry import Tool, ToolResult

class DatabaseQueryTool:
    name = "query_db"
    description = "Run a read-only SQL query against the application database."
    parameters = {
        "sql": {"type": "string", "description": "The SQL SELECT query", "required": True},
        "limit": {"type": "number", "description": "Max rows to return. Default: 100"},
    }

    async def execute(self, args):
        sql = args.get("sql", "")
        limit = int(args.get("limit", 100))
        # ... your database logic here ...
        rows = await run_query(sql, limit=limit)
        return ToolResult(success=True, output=format_table(rows))

# Register custom tools alongside built-ins
agent = create_claw_agent("gpt-5", tools=[DatabaseQueryTool()])

You can also wrap LangChain tools directly:

from langchain_community.tools import WikipediaQueryRun

agent = create_claw_agent("gpt-5", tools=[WikipediaQueryRun()])
# LangChain tools are automatically adapted via LangChainToolAdapter

Skills System

Skills are reusable instruction sets that teach the agent domain-specific knowledge without polluting the system prompt. They use a progressive disclosure pattern: the agent loads skill instructions on demand via the use_skill tool.

Skill Directory Structure

your-project/
├── skills/                  # Auto-discovered (or .skills/, skill/, .skill/, Skills/)
│   ├── code_review/
│   │   └── SKILL.md         ← Skill defined as a folder + SKILL.md
│   ├── sql_expert.md        ← Skill defined as a single .md file
│   └── deploy_checklist.md
├── AGENTS.md                # Project memory (auto-injected)
└── src/
    └── ...

Writing a Skill

Every skill is a Markdown file with optional YAML frontmatter:

Example 1: skills/code_review/SKILL.md

---
name: code_review
description: "Perform thorough code reviews following team standards"
allowed-tools: read_file grep glob think
---

# Code Review Skill

When reviewing code, follow these steps:

## 1. Structure Check
- Verify the file follows our module pattern (one class per file)
- Check imports are grouped: stdlib → third-party → local
- Ensure `__init__.py` exports are up to date

## 2. Logic Review
- Look for unhandled edge cases (empty inputs, None values)
- Verify error messages are actionable
- Check that async functions are properly awaited

## 3. Security
- No hardcoded secrets or API keys
- SQL queries use parameterized statements
- User input is sanitized before use

## 4. Output Format
Provide your review as:
- ✅ **Approved**: no issues found
- ⚠️ **Changes requested**: list specific issues with file:line references
- 🚫 **Blocked**: critical issues that must be fixed

Example 2: skills/sql_expert.md (single-file skill)

---
name: sql_expert
description: "Write optimized SQL queries for PostgreSQL"
allowed-tools: execute read_file think
---

# SQL Expert

You are a PostgreSQL expert. When writing queries:

## Rules
1. Always use explicit `JOIN` syntax (never implicit joins in WHERE)
2. Use CTEs (`WITH` clauses) for complex multi-step queries
3. Add `EXPLAIN ANALYZE` when the user asks about performance
4. Use parameterized queries; never interpolate user values
5. Default to `LIMIT 100` unless the user specifies otherwise

## Patterns

### Pagination
Use keyset pagination for large tables:
```sql
SELECT * FROM events
WHERE id > :last_seen_id
ORDER BY id
LIMIT 50;
```

### Aggregation

Always include the raw count alongside percentages:

```sql
SELECT
    status,
    COUNT(*) AS n,
    ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 1) AS pct
FROM orders
GROUP BY status
ORDER BY n DESC;
```

Example 3: skills/deploy_checklist.md

---
name: deploy_checklist
description: "Step-by-step production deployment checklist"
---

# Deployment Checklist

Before deploying to production, complete every step:

- [ ] All tests pass: `pytest tests/ -v`
- [ ] No lint errors: `ruff check src/`
- [ ] Version bumped in `pyproject.toml`
- [ ] CHANGELOG.md updated
- [ ] Docker image builds: `docker build -t app:latest .`
- [ ] Smoke test on staging environment
- [ ] Database migrations reviewed and tested
- [ ] Rollback plan documented

How Skills Work at Runtime

# Skills are auto-discovered from ./skills/ directory
agent = create_claw_agent("gemini-3-flash")

# Or specify custom skill directories
agent = create_claw_agent("gpt-5", skills=["./my-skills", "./shared-skills"])

When skills are available, the agent gets two additional tools:

# 1. List available skills
{"tool": "list_skills", "args": {}}
# → Available skills (3):
#   - **code_review**: Perform thorough code reviews following team standards
#     → Allowed tools: read_file, grep, glob, think
#   - **sql_expert**: Write optimized SQL queries for PostgreSQL
#     → Allowed tools: execute, read_file, think
#   - **deploy_checklist**: Step-by-step production deployment checklist

# 2. Load a specific skill's instructions
{"tool": "use_skill", "args": {"name": "sql_expert"}}
# → Returns the full skill content, injected into the agent's context

The agent decides on its own when to use a skill. If you ask it to "write a query to find all overdue orders," and a sql_expert skill exists, it will load the skill first, then write the query following those rules.


API Reference

create_claw_agent(model, instruction, ...)

All parameters are optional; zero-config usage (create_claw_agent()) works if you have a .env with at least one API key.

Model & Provider

Param Type Default Required? Description
model str | LLMProvider | None None No Model name (e.g. "gpt-5-mini", "gemini-3-flash", "llama3.1"), a pre-built LLMProvider instance, or None to auto-detect from env
api_key str | None None No API key. Auto-routed to OpenAI or Gemini based on model name. Falls back to OPENAI_API_KEY / GEMINI_API_KEY env vars. For local models: omit entirely (a placeholder is used automatically)
base_url str | None None No Custom endpoint URL for OpenAI-compatible APIs. Set this for Azure OpenAI, AWS Bedrock (via gateway), Ollama, vLLM, LM Studio, or any OpenAI-compatible server. Falls back to OPENAI_BASE_URL env var. Omit to use api.openai.com
api_version str | None None No API version string. Only needed for Azure OpenAI (e.g. "2024-12-01-preview"). Falls back to OPENAI_API_VERSION env var. Ignored for all other providers

Agent Behavior

Param Type Default Required? Description
instruction str | None None No System prompt: what the agent should do and how it should behave
tools list | None None No Additional tools to register. Built-in tools (filesystem, exec, grep, etc.) are always included
skills str | list | None auto-discover No Skill directories to load. Default: checks ./skills, ./.skills. Bundled skills (ByteRover, OpenViking) are always included when eligible.
memory str | list | None auto-discover No Memory files to inject into system prompt. Default: checks ./AGENTS.md, ./CLAWAGENTS.md
sandbox SandboxBackend LocalBackend() No Pluggable sandbox backend for file/shell operations. Use InMemoryBackend for testing
streaming bool True No Enable streaming responses
use_native_tools bool True No Use provider native function calling. Set False for text-based JSON tool calls
on_event callable | None None No Callback for agent events (tool calls, errors, context messages, etc.)
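
A minimal observer sketch for on_event; the payload shape is an assumption (the table above promises only that tool calls, errors, and context messages are surfaced):

from clawagents import create_claw_agent

def log_event(event):
    # hypothetical payload: the docs don't pin down the event structure,
    # so this just prints whatever arrives
    print(f"[agent event] {event!r}")

agent = create_claw_agent("gpt-5-mini", on_event=log_event)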

LLM Tuning

Param Type Default Required? Description
context_window int | None env CONTEXT_WINDOW / 1000000 No Token budget. When messages exceed this, older turns are compacted
max_tokens int | None env MAX_TOKENS / 8192 No Max output tokens per LLM response. Sent as max_completion_tokens (OpenAI) or max_output_tokens (Gemini)
temperature float | None env TEMPERATURE / 0.0 No LLM sampling temperature. Automatically overridden for reasoning models (o1/o3/o4-mini, gpt-5/gpt-5-mini/gpt-5-turbo → 1.0). Non-reasoning models (gpt-5-nano, gpt-5-micro, gpt-4o) respect the configured value
max_iterations int | None env MAX_ITERATIONS / 200 No Max tool rounds before the agent stops and returns

PTRL & Trajectory

Param Type Default Required? Description
trajectory bool | None env CLAW_TRAJECTORY / False No Enable trajectory logging. Records every turn as NDJSON to .clawagents/trajectories/ and scores each run
rethink bool | None env CLAW_RETHINK / False No Enable consecutive-failure detection. Injects a "rethink" prompt with adaptive threshold after repeated tool failures
learn bool | None env CLAW_LEARN / False No Enable Prompt-Time Reinforcement Learning. Includes: post-run self-analysis, pre-run lesson injection, LLM-as-Judge verification (Feature G), and thinking token preservation (Feature H). Implies trajectory=True
preview_chars int | None env CLAW_PREVIEW_CHARS / 120 No Max chars for tool-output previews in trajectory logs
response_chars int | None env CLAW_RESPONSE_CHARS / 500 No Max chars for LLM response text in trajectory records

Priority: Explicit parameter > environment variable > default value. You never need to set both.

Hooks & Access Control

agent = create_claw_agent("gemini-3-flash", instruction="Code reviewer")

# Block dangerous tools at runtime
agent.block_tools("execute", "write_file")

# Or whitelist only safe tools
agent.allow_only_tools("read_file", "ls", "grep", "glob")

# Inject context into every LLM call
agent.inject_context("Always respond in Spanish")

# Limit tool output size
agent.truncate_output(3000)

Advanced โ€” raw hooks:

agent.before_llm = lambda messages: messages        # modify messages before LLM
agent.before_tool = lambda name, args: True          # return False to block
agent.after_tool = lambda name, args, result: result # modify tool results
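
Since v6.0, before_tool may also return a HookResult to block with a reason or redirect args. A sketch; the field names below are assumptions based on the changelog's description, not confirmed API:

from clawagents import HookResult

def guard_tool(name, args):
    # assumed fields: the v6.0 changelog says hooks can "block with reason"
    # and "redirect args", but does not document the exact attributes
    if name == "execute":
        return HookResult(blocked=True, reason="shell disabled in this deployment")
    return True  # plain bool still works (backward compatible)

agent.before_tool = guard_tool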

Instance Methods

Method Description
await agent.invoke(task, max_iterations=None) Run the agent on a task. Returns AgentState with .result, .status, .iterations, .tool_calls
await agent.compare(task, n_samples=3, max_iterations=None) Run the task N times and return the best result based on objective scoring (GRPO-inspired). Returns {"best_result", "best_score", "best_index", "all_scores"}
agent.block_tools(*names) Block specific tools at runtime
agent.allow_only_tools(*names) Whitelist-only mode; all other tools blocked
agent.inject_context(text) Inject extra context into every LLM call
agent.truncate_output(max_chars) Limit tool output size

Auto-Discovery

The agent factory automatically discovers project files:

What Default locations checked
Memory ./AGENTS.md, ./CLAWAGENTS.md
Skills ./skills, ./.skills, ./skill, ./.skill, ./Skills. Bundled skills are auto-included based on eligibility (see below).

Bundled Skills

ClawAgents ships with two complementary bundled skills that work together:

Skill Purpose Prerequisite Auto-enabled?
ByteRover Write decisions, patterns, and rules to local Markdown files Node/npx (brv runs via npx byterover-cli) Always
OpenViking Read context from repos, docs, and large knowledge bases with tiered L0/L1/L2 loading pip install openviking + running openviking-server Only when ov CLI is on PATH

How they complement each other:

  • ByteRover is a fast, serverless notebook for the agent. Use brv curate to persist decisions ("We chose Postgres for ACID compliance") and brv query to recall them. No infrastructure needed โ€” context is stored as Markdown in .brv/context-tree/.
  • OpenViking is a structured context database. Use ov add-resource to ingest entire repos or doc sites, then ov find for semantic search across all indexed content. Results are organized in a virtual filesystem (viking://) with three tiers: L0 (abstract, ~100 tokens), L1 (overview, ~2k tokens), L2 (full content) โ€” the agent loads only what it needs, saving tokens.

Typical workflow: OpenViking retrieves context → agent works on the task → ByteRover curates the decisions made.

OpenViking prerequisites:

  1. Install: pip install openviking --upgrade
  2. Configure: create ~/.openviking/ov.conf with embedding model and VLM settings (see OpenViking docs)
  3. Start server: openviking-server
  4. The ov CLI must be on your PATH; the skill auto-enables when detected

Override with explicit paths:

agent = create_claw_agent(
    "gpt-5",
    memory="./docs/AGENTS.md",
    skills=["./my-skills", "./shared-skills"]
)

Memory & Context Management

Project Memory

Loads AGENTS.md files and injects content into every LLM call. Use for project-level context and conventions.
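
An illustrative AGENTS.md (the file is free-form; these entries are hypothetical):

# AGENTS.md

- Run tests with `pytest tests/ -v`; the suite must pass before any commit.
- Prefer pathlib over os.path in new code.
- Never commit directly to main; open a PR.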

Auto-Compaction

When the conversation exceeds 75% of CONTEXT_WINDOW:

  1. Full history offloaded to .clawagents/history/compacted_*.json
  2. Older messages summarized into [Compacted History]
  3. Last 20 messages kept intact

This provides unlimited conversation length with full audit trail preservation.


Gateway Server

Launch an HTTP server with one line:

from clawagents.gateway import start_gateway

start_gateway(port=3000)            # binds to 127.0.0.1 by default (loopback only)
start_gateway(port=3000, host="0.0.0.0")  # explicit LAN exposure: REQUIRES auth

Bind & auth

The gateway binds to 127.0.0.1 (loopback) by default in v6.2+. To expose it on the LAN, set GATEWAY_HOST=0.0.0.0 (or pass host=), and also set GATEWAY_API_KEY=<secret> to require Bearer auth. Starting on a non-loopback address without an API key prints a loud warning at startup; anyone on the network can otherwise hit /chat, /chat/stream, and /ws.

Endpoints

Endpoint Method Description
/chat POST Synchronous agent invocation
/chat/stream POST SSE streaming (events: queued, started, agent, done, error)
/queue GET Queue status for all lanes
/health GET Health check
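
A sketch of a synchronous call; the JSON body shape is an assumption (the docs pin down the endpoints, not the payload schema). When GATEWAY_API_KEY is set, also pass -H "Authorization: Bearer $GATEWAY_API_KEY":

curl -X POST http://127.0.0.1:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"task": "List all Python files in src/"}'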

Lane-Based Concurrency

4 lanes with configurable max_concurrent per lane:

  • main โ€” primary user requests
  • cron โ€” scheduled tasks
  • subagent โ€” sub-agent delegation
  • nested โ€” nested sub-agent calls

Trust Boundaries & Hardening

A few surfaces are deliberately powerful. They exist for trusted operators, and you should treat them as such when running ClawAgents in environments with untrusted prompts or LAN exposure:

  • exec_shell tool โ€” runs arbitrary commands inside the configured sandbox. Pair with the LocalBackend(cwd=...) constraint and ideally a containerized runtime; the tool's blocklist is a guardrail, not a security boundary.
  • External hooks (CLAW_FEATURE_EXTERNAL_HOOKS=1, CLAW_HOOK_*) execute shell commands defined in your env or .clawagents/hooks.json. Anyone who controls those configs has code execution. Treat hooks as trusted-only.
  • web_fetch tool โ€” refuses loopback / RFC1918 / link-local / multicast IPs by default to block SSRF. Set CLAWAGENTS_WEB_ALLOW_PRIVATE=1 only in trusted dev environments.
  • Gateway โ€” defaults to loopback (127.0.0.1) bind. Set GATEWAY_API_KEY if you bind to 0.0.0.0.

Sandbox Backends

ClawAgents uses a pluggable sandbox protocol for all file and shell operations:

from clawagents.sandbox import InMemoryBackend, LocalBackend

# Production: real filesystem
agent = create_claw_agent("gpt-5", sandbox=LocalBackend())

# Testing: pure in-memory VFS
mem = InMemoryBackend()
mem.seed({"src/main.py": "print('hello')", "README.md": "# My Project"})
agent = create_claw_agent("gpt-5", sandbox=mem)
snapshot = mem.snapshot()  # deterministic state capture

Environment Variables

All environment variables are optional. They serve as defaults when the corresponding create_claw_agent() parameter is not provided. Explicit parameters always take priority.

General

Variable Default Required? Description
CLAWAGENTS_ENV_FILE (unset) No Explicit path to a .env file. Overrides default cwd/.env discovery. Useful for CI, Docker, or multi-project setups

Provider & Model: set at least one API key (or OPENAI_BASE_URL for local models)

Variable Default Required? Description
PROVIDER auto-detect No Hint: "openai" or "gemini". Auto-detected from which API key is set
OPENAI_API_KEY (unset) Yes (for OpenAI/Azure) OpenAI or Azure API key. Not needed for local models: when OPENAI_BASE_URL is set, a placeholder is used automatically
OPENAI_MODEL gpt-5-nano No Model name, Azure deployment name, or local model ID (e.g. llama3.1)
OPENAI_BASE_URL (unset) No Custom endpoint for OpenAI-compatible APIs: Azure, Bedrock gateway, Ollama, vLLM, LM Studio. Omit to use api.openai.com
OPENAI_API_VERSION (unset) No Azure only. API version string (e.g. 2024-12-01-preview). Ignored by all other providers
GEMINI_API_KEY (unset) Yes (for Gemini) Google Gemini API key
GEMINI_MODEL gemini-3-flash-preview No Gemini model name

LLM Tuning

Variable Default Required? Description
STREAMING 1 No 1 = streaming enabled, 0 = disabled
CONTEXT_WINDOW 1000000 No Token budget. Older messages are compacted when exceeded
MAX_TOKENS 8192 No Max output tokens per response (max_completion_tokens for OpenAI, max_output_tokens for Gemini)
TEMPERATURE 0.0 No Sampling temperature. Auto-overridden for reasoning models (o-series + gpt-5/gpt-5-mini/gpt-5-turbo → 1.0). Non-reasoning models (gpt-5-nano, gpt-5-micro, gpt-4o) use the configured value
MAX_ITERATIONS 200 No Max tool rounds before the agent stops. Override per-run: agent.invoke(task, max_iterations=N)

PTRL & Trajectory Flags: all off by default, opt-in with 1/true/yes

Variable Default Required? Description
CLAW_TRAJECTORY 0 No Enable trajectory logging. Records every turn + scores each run to .clawagents/trajectories/
CLAW_RETHINK 0 No Enable consecutive-failure detection + adaptive rethink injection
CLAW_LEARN 0 No Enable full PTRL: lesson extraction, injection, LLM-as-Judge, and thinking token preservation. Implies CLAW_TRAJECTORY=1
CLAW_PREVIEW_CHARS 120 No Max chars for tool-output previews in trajectory logs
CLAW_RESPONSE_CHARS 500 No Max chars for LLM response text in trajectory records

Claude Code Features: mostly off by default, opt-in with 1/true/yes

Variable Default Required? Description
CLAW_FEATURE_MICRO_COMPACT 1 No Aggressively clear old tool result contents to save context
CLAW_FEATURE_FILE_SNAPSHOTS 1 No Safely copy files to .clawagents/snapshots/ before writing
CLAW_FEATURE_CACHE_TRACKING 0 No Extract and log detailed Anthropic/OpenAI prompt cache stats
CLAW_FEATURE_TYPED_MEMORY 0 No Parse YAML frontmatter in AGENTS.md to classify memory types
CLAW_FEATURE_WAL 0 No Persistent Write-Ahead Logging to .clawagents/wal.jsonl (crash recovery)
CLAW_FEATURE_PERMISSION_RULES 0 No Enforce declarative glob-based Allow/Deny execution bounds
CLAW_FEATURE_BACKGROUND_MEMORY 0 No Background thread extracting agent state/metadata implicitly
CLAW_FEATURE_FORKED_AGENTS 0 No Enable the run_forked_agent sandboxed sub-agent API
CLAW_FEATURE_COORDINATOR 0 No Enable the run_coordinator swarm routing orchestration mode

v5.28.0 Features: inspired by claw-code-main (Rust reference)

Variable Default Required? Description
CLAW_FEATURE_CACHE_BOUNDARY 1 No Split system prompt at __CACHE_BOUNDARY__ for Anthropic prompt caching. Static prefix cached, dynamic suffix fresh each turn (see the sketch after this table).
CLAW_FEATURE_SESSION_PERSISTENCE 0 No Save sessions as append-only JSONL to .clawagents/sessions/. Enables --sessions and --resume.
CLAW_FEATURE_ERROR_TAXONOMY 1 No Classify LLM/tool errors into 7 discrete classes (context_window, provider_auth, provider_rate_limit, etc.) with recovery hints.
CLAW_FEATURE_EXTERNAL_HOOKS 0 No Run shell hooks before/after tool calls and LLM calls. Config via .clawagents/hooks.json or CLAW_HOOK_* env vars.
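
A sketch of the cache boundary marker in use; that a user-supplied instruction participates in the static/dynamic split is an assumption here:

from clawagents import create_claw_agent

agent = create_claw_agent(
    "claude-opus-4-6",
    instruction=(
        "You are a code reviewer. Follow the team style guide.\n"  # static prefix, cacheable
        "__CACHE_BOUNDARY__\n"
        "Today's focus: the payments module."  # dynamic suffix, fresh each turn
    ),
)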

External Hook Env Vars (requires CLAW_FEATURE_EXTERNAL_HOOKS=1)

Variable Description
CLAW_HOOK_PRE_TOOL_USE Shell command run before each tool. Receives JSON on stdin, can block or modify args.
CLAW_HOOK_POST_TOOL_USE Shell command run after each tool. Can modify results.
CLAW_HOOK_PRE_LLM Shell command run before each LLM call. Can inject extra messages.
CLAW_HOOK_POST_LLM Shell command run after each LLM response. Fire-and-forget logging.
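
For example, a fire-and-forget response logger; that POST_LLM receives its payload on stdin (as documented for PRE_TOOL_USE) is an assumption:

export CLAW_FEATURE_EXTERNAL_HOOKS=1
export CLAW_HOOK_POST_LLM='cat >> /tmp/claw-llm-responses.jsonl'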

Testing

# Install with dev dependencies
pip install -e ".[dev]"

# Run all tests
python -m pytest tests/ -v

# Run benchmarks (requires API keys)
python -m pytest tests/ -v -m benchmark

Changelog

v6.3.0: Sandbox & Security Hardening, Strict Type Checking

Security/correctness release. Eleven bugs fixed across both the Python and TypeScript ports, plus a full mypy cleanup. All tests green: 334 passed, mypy clean (0 errors, exit 0).

Security fixes:

  • Sandbox escape via symlink (TS) โ€” LocalBackend.safePath was lexical-only (path.resolve), so an agent that ran ln -s /etc evil could read /etc/* through the symlink. Now uses realpathSync for both cwd and resolved paths so symlinks are followed before the containment check. Python was already safe via Path.resolve().
  • SSRF gap (TS) โ€” web_fetch's IPv6 link-local check only matched fe8X, missing fe9X/feaX/febX. Now matches the full fe80::/10 range (/^fe[89ab]/i). Python uses ipaddress.is_link_local, no change needed.
  • > /dev/null blocked legitimate use (both) โ€” BLOCKED_PATTERNS had "> /dev/null" (typo for "> /dev/sd"), which blocked the common shell idiom cmd > /dev/null. Removed.
  • rm / regex parity (TS) โ€” DANGEROUS_RE was missing the * quantifier on the flag group, so rm / (no flags) slipped past while Python's regex blocked it. Aligned.
  • wget http / curl http parity (TS) โ€” added to TS BLOCKED_PATTERNS to match Python. Agents should use the web_fetch tool (with SSRF guards) for HTTP, not raw shell utilities.

Correctness fixes:

  • Multimodal system message crashed context shedding (Py) โ€” _preflight_context_check called .replace() and string-slicing on system messages without checking if content was a list[dict] (multimodal). Now guards each tier with isinstance(content, str) and emits a warn event if the system message is multimodal.
  • Arbitrary role from pre_llm hook (Py) โ€” external hooks could pass any string as role, blowing up Pydantic validation in LLMMessage. Now coerces unknown roles to "user" and emits a warn.
  • Parallel native tool-call indexing (Py) โ€” when before_tool rejected a call OR returned updated_args, native_tool_call_objects was indexed by approved-list index (off-by-one) and the identity check tc is approved_calls[i] failed (because updated_args constructs a new ParsedToolCall). Tool-call IDs sent back to the LLM were wrong, causing native function-calling failures. Now tracks (orig_idx, call) pairs through the approval loop.
  • Subagent env-mutation race (Py) โ€” concurrent subagent runs with credential_proxy enabled raced on os.environ. The second run captured the first's overrides as its "original" env, then stamped them back into place after the first run had already stopped its proxy. Wrapped the env-mutate / run / env-restore window in an asyncio.Lock. No-proxy path is unaffected.
  • classify_error rejected BaseException (Py) โ€” asyncio.CancelledError and similar inherit from BaseException, not Exception. Widened classify_error, _extract_status, and ErrorDescriptor.original to accept BaseException.
  • Gemini provider None parts iteration (Py) โ€” streaming chunks could surface None for candidate.content.parts after a hasattr check that says only the attribute exists. Switched to getattr(getattr(_cand, "content", None), "parts", None) and explicit truthiness check.

Type checking:

  • Full mypy cleanup: 46 errors โ†’ 0. Real bugs fixed (None-iter, AsyncOpenAI/AsyncAzureOpenAI mismatch, missing telegram updater check, kwargs widening). False positives addressed by renaming reused variables, adding explicit dict[str, Any] annotations on union-typed locals, and parameters: Dict[str, Dict[str, Any]] annotations on tool implementations to satisfy the Tool protocol.
  • Added [tool.mypy] block to pyproject.toml with warn_unused_ignores = true and ignore_missing_imports = true. Run python -m mypy โ€” clean run shows Success: no issues found in 72 source files. Mypy now exits non-zero on errors so CI can gate on it.

Regression coverage added:

  • tests/test_exec_safety.py โ€” denylist behavior (legitimate idioms allowed, destructive patterns blocked)
  • tests/test_agent_loop_bugs.py โ€” multimodal shedding paths + role coercion
  • tests/test_parallel_native_indexing.py โ€” both rejection-skip and updated-args indexing paths
  • tests/test_subagent_env_race.py โ€” concurrent credential-proxy runs don't corrupt env

v6.2.1: Release Hardening, Redirect-Safe web_fetch, and Parity Smokes

Patch release focused on making the v6.2 line safer to install, test, and operate.

  • Redirect-aware SSRF protection โ€” web_fetch disables automatic redirects and manually revalidates every hop before network I/O. Public-to-private redirects to loopback, RFC1918, link-local, reserved, multicast, or cloud metadata IPs are refused by default.
  • Hermetic SSRF regression tests โ€” added tests/test_web_fetch_ssrf.py covering public-to-private redirects, redirect loops, direct private IP refusal, and legitimate public-to-public redirects.
  • Local-source pytest resolution โ€” pyproject.toml now sets pythonpath = ["src"] and testpaths = ["tests"], so local test runs cannot accidentally import an older installed wheel from site-packages.
  • Cross-package parity smoke โ€” added scripts/smoke_gemma4.py, mirroring the TypeScript smoke script and printing provider, base URL, and stored model for Ollama/Gemma4, gpt-5.4, gemini-3.1-pro, and claude-opus-4-6.
  • Release verification โ€” python -m pytest reports 319 passed, 2 skipped; the SSRF-specific suite reports 5 passed.

v6.2.0: OpenAI-Agents Parity, Ollama/Gemma4 First-Class Routing, 63 Model Profiles

A substantial additive release. Everything is backward compatible: existing create_claw_agent() calls, env vars, and tool registrations work unchanged.

1. Ten OpenAI-Agents-SDK parity surfaces (all additive, all new modules)

Surface Module What it adds
Run Context clawagents.run_context RunContext carries per-run state, approvals, and arbitrary user data through hooks and tools.
Usage Tracking clawagents.usage Usage + RequestUsage aggregate token/latency stats across turns, providers, and sub-agents.
Lifecycle Hooks clawagents.lifecycle RunHooks / AgentHooks with typed LLMStart/LLMEnd/ToolStart/ToolEnd/AgentStart/AgentEnd/RunStart/RunEnd/Handoff payloads. composite_hooks chains multiple observers without interference.
Guardrails clawagents.guardrails input_guardrail / output_guardrail decorators, GuardrailTripwireTriggered, behavior modes (raise / log / filter).
Stream Events clawagents.stream_events First-class TurnStartedEvent, AssistantDeltaEvent, ToolCallPlannedEvent, ApprovalRequiredEvent, UsageEvent, GuardrailTrippedEvent, FinalOutputEvent, ErrorStreamEvent. Consumable via on_stream_event callback.
Retry Policy clawagents.retry RetryPolicy dataclass + DEFAULT_RETRY_POLICY. Exponential backoff with jitter, per-error-class overrides.
Function Tools clawagents.function_tool @function_tool decorator auto-derives JSON Schema from Python type hints. Zero boilerplate (see the sketch after this table).
Session Backends clawagents.session Unified Session protocol with InMemorySession, JsonlFileSession, SQLiteSession. Drop-in persistence.
Structured Outputs output_type= arg on create_claw_agent / agent.invoke Return typed objects via Pydantic model, dataclass, dict, list, or str. Coerced after run completes; failures emit a warn stream event.
Tool Approval approval_handler= arg + ApprovalRequiredEvent HITL gate: an async callable receives {tool, args} and returns True / False / a redirect dict. Integrates with ApprovalRequiredEvent for streaming UIs.
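
A sketch of the function_tool surface from the table above; that a decorated function can be passed straight to tools=, and how the docstring becomes the description, are assumptions beyond what the table states:

from clawagents import create_claw_agent, function_tool

@function_tool
async def count_matching_lines(path: str, pattern: str = "") -> str:
    """Count lines in a file, optionally only those containing `pattern`."""
    with open(path) as f:
        matches = [line for line in f if pattern in line]
    return f"{len(matches)} matching lines in {path}"

# schema (path: string, pattern: string with default "") is derived from the type hints
agent = create_claw_agent("gpt-5-mini", tools=[count_matching_lines])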

2. Ollama & Gemma 4 first-class routing

create_provider() now auto-routes 24 Ollama-family prefixes to http://localhost:11434/v1 with no config needed. Use either the bare tag (gemma4:e4b) or the explicit routing form (ollama/gemma4:e4b).

Family Examples Routed to
Gemma 4 gemma4, gemma4:e2b, gemma4:e4b, gemma4:26b, gemma4:31b Ollama @ :11434/v1
Gemma 3 / 3n / 2 gemma3, gemma3n:e4b, gemma2, gemma Ollama @ :11434/v1
Llama / Qwen / Mistral / Phi / Deepseek / Codellama llama3, qwen2, mistral, mixtral, phi4, deepseek-r1, codellama, … Ollama @ :11434/v1
Explicit routing ollama/<any-tag> Ollama @ :11434/v1 (prefix stripped)

Override with OPENAI_BASE_URL if you run Ollama on a different host/port. API key is auto-set to the placeholder "ollama".
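
Concretely (assuming the tag has been pulled locally with ollama pull gemma4:e4b):

from clawagents import create_claw_agent

# both forms route to http://localhost:11434/v1 with the placeholder key "ollama"
agent = create_claw_agent("gemma4:e4b")          # bare tag
agent = create_claw_agent("ollama/gemma4:e4b")   # explicit form; prefix stripped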

3. 63 model profiles + model-aware context budget

The _MODEL_PROFILES table now covers frontier models (GPT-5.4 → 400K, Gemini 3.1 → 1M, Claude 4.6 Opus), Ollama (Gemma4 e2b/e4b → 128K, 26b/31b → 256K), and a long tail of OSS variants. _resolve_context_budget() walks insertion order for deterministic prefix matching (most-specific first).

4. Cross-package parity: the TypeScript sibling clawagents (see x1jiang/clawagents) has the identical 24-entry Ollama prefix list, a 63-entry model profile table with the same (window, ratio) values, and the same create_provider routing logic. Parity can be exercised manually with the matching smoke scripts in each repo (clawagents_py/scripts/smoke_gemma4.py and clawagents/scripts/smoke-gemma4.ts); both print the same provider, base URL, and stored model for gemma4:*, ollama/..., gpt-5.4, gemini-3.1-pro, and claude-opus-4-6. The GitHub Actions workflow added in v6.2.1 runs pytest, python -m build, and twine check on every push.

5. Quality / debug pass

  • Async agent loop hardening โ€” new turn-started events, tighter cancellation semantics, cleaner state hand-off to sub-agents.
  • Added tests/test_openai_agents_surfaces.py โ€” full coverage for RunContext, Usage, Hooks, Guardrails, StreamEvents, Retry, FunctionTool, Session backends.
  • Test suite: 314 passed, 2 skipped.

New public exports (from clawagents): RunContext, ApprovalRecord, Usage, RequestUsage, RunHooks, AgentHooks, composite_hooks, InputGuardrail, OutputGuardrail, input_guardrail, output_guardrail, GuardrailBehavior, GuardrailResult, GuardrailTripwireTriggered, StreamEvent (+ 10 concrete event types), stream_event_from_kind, RetryPolicy, DEFAULT_RETRY_POLICY, function_tool, InMemorySession, JsonlFileSession, SQLiteSession.

v6.1.1: Credential Isolation & Lazy Tool Provisioning

Feature Description
Credential Isolation execute tool strips sensitive env vars (OPENAI_API_KEY, GEMINI_API_KEY, ANTHROPIC_API_KEY, etc.) from subprocess environment. Claude-generated code can no longer read API keys via env or os.environ.
Lazy Tool Provisioning Sandbox-backed tools (filesystem, exec, advanced-fs, web) defer module import to first execute() call. Schema is available immediately for the LLM. Reduces startup overhead.

v6.1.0: Advisor Model (Smart Model Guides Cheap Model)

Pair a stronger "advisor" model with a cheaper "executor" model. The executor runs every turn; the advisor is consulted 2-3 times per task for strategic guidance. Cross-provider is supported: any model can advise any other model.

Feature Description
Advisor Model New advisor_model config field. Set it and the agent gets smarter; leave it unset and nothing changes. Fully backward compatible.
Three Trigger Points (1) After initial orientation, before planning. (2) When stuck (consecutive failures). (3) Before declaring done.
Cross-Provider Mix providers freely: gpt-5.4-nano executor + claude-opus-4-6 advisor, or any combination.
CLI Flag --advisor MODEL flag for one-line usage.
Env Config ADVISOR_MODEL, ADVISOR_API_KEY, ADVISOR_MAX_CALLS env vars.
from clawagents import create_claw_agent

agent = create_claw_agent(
    "gpt-5.4-nano",            # cheap executor runs every turn
    advisor_model="gpt-5.4",   # stronger advisor consulted at the trigger points
)
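Or from the CLI, using the documented --advisor flag:

clawagents --task "Refactor the parser and run the tests" --advisor gpt-5.4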

v6.0.0 — Production Hardening: 17 Improvements

High Priority

| Feature | Description |
|---|---|
| Native Tool Call Patching (H1) | _patch_dangling_tool_calls now handles native function calling (tool_calls_meta), not just text-mode JSON. Injects synthetic cancelled responses for orphaned tool_call IDs, preventing 400 API errors in HITL scenarios. |
| Three-Tier Provider Fallback (H2) | New FallbackProvider wraps any LLM with a primary → named fallback → global fallback chain. Quarantines providers after consecutive failures; a periodic health check restores them. Config via the fallback_models param or the CLAWAGENTS_FALLBACK_MODELS env var (sketch below). |
| Credential Proxy (H3) | New CredentialProxy — a local HTTP proxy that injects API keys into outbound requests so sandboxed sub-agents never see raw credentials. Opt-in via CLAW_FEATURE_CREDENTIAL_PROXY=1. |
| Rich Hook Result Model (H4) | BeforeToolHook now accepts a HookResult return value (backward-compatible with bool). Hooks can block with a reason, redirect args, or inject messages. New HookResult dataclass exported from the public API. |
| Fraction-Based Summarization (H5) | The soft-trim threshold now derives from the per-model budget_ratio instead of a hardcoded 0.60. GPT=0.60, Gemini=0.675, Claude=0.6375. Auto-adapts to any model's context window. |
| Lazy Static Tool Registry (H7) | New LazyTool class + ToolRegistry.register_lazy(). Tools are imported only on the first execute() call. Fast startup with large tool sets. |
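A minimal sketch of wiring the H2 fallback chain at construction time (the fallback_models parameter is documented above; the list-of-model-strings shape is an assumption):

from clawagents import create_claw_agent

# Equivalent env-var form: CLAWAGENTS_FALLBACK_MODELS
agent = create_claw_agent(
    "gpt-5.4-nano",
    fallback_models=["gpt-5-mini", "gemini-3-flash"],  # assumed shape
)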

Medium Priority

| Feature | Description |
|---|---|
| Subagent State Isolation (M1) | EXCLUDED_STATE_KEYS prevents parent state (messages, todos, trajectory, lessons, session) from leaking into child sub-agents. |
| SKILL.md Constraint Documents (M4) | Skills now support forbidden-actions, workspace-layout, success-criteria, and workflow-steps in YAML frontmatter. Structured constraints for sandboxed code execution. |
| Pre-Compact Transcript Archival (M5) | Before context compaction, the full transcript is archived to .clawagents/transcripts/. Opt-in via CLAW_FEATURE_TRANSCRIPT_ARCHIVAL=1. |
| Atomic File Writes (M7) | The trajectory recorder and session persistence now use a temp-then-rename pattern via atomic_write_text(), preventing corruption on crash (illustrative sketch below). |
| Barrier-Based Scheduling (M8) | The command queue now supports barrier entries; destructive ops wait for active tasks to complete before executing. |
| Session Heartbeat (M9) | New SessionHeartbeat class auto-releases stale sessions after a timeout. Resource management for multi-user deployments. |
| Cross-Provider Test Suite (M10) | 14 conformance tests (7 per backend) ensuring LocalBackend and InMemoryBackend both satisfy the SandboxBackend protocol. |

New files: providers/fallback.py, sandbox/credential_proxy.py, utils/atomic_write.py, session/heartbeat.py, tests/test_cross_provider.py

New feature flags: transcript_archival (off), credential_proxy (off)

New exports: HookResult, FallbackProvider, CredentialProxy, SessionHeartbeat, LazyTool, atomic_write_text, atomic_write_bytes
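For reference, the temp-then-rename pattern behind M7 looks roughly like this (an illustrative sketch, not the exported atomic_write_text itself):

import os
import tempfile

def atomic_write(path: str, text: str) -> None:
    # Write to a temp file in the same directory, then atomically rename.
    # Readers never observe a half-written file, even on crash.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic on POSIX
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise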

v5.28.0 — Error Taxonomy, Prompt Caching, Session Persistence & External Hooks

Four production-grade features ported from the claw-code-main Rust reference implementation:

| Feature | Description |
|---|---|
| Prompt Cache Boundary | Inserts a __CACHE_BOUNDARY__ marker into the system prompt. The Anthropic provider splits it into static (cached via cache_control: ephemeral) and dynamic blocks, reducing input token costs on multi-turn sessions. ON by default. |
| Error Taxonomy & Recovery | Classifies all LLM/tool errors into 7 discrete classes (context_window, provider_auth, provider_rate_limit, provider_retry_exhausted, provider_internal, provider_transport, runtime_io). Each class has retryable, recovery_hint, and an optional failover_model. Structured error events emitted via onEvent. ON by default. |
| Session Persistence | Saves agent sessions as append-only JSONL to .clawagents/sessions/. Events: system_prompt, turn_started, assistant_message, tool_result, usage, turn_completed. New CLI: --sessions (list) and --resume [ID\|latest] (continue). Opt-in. |
| External Hook System | Shell commands that run before/after tool execution and LLM calls. Config via .clawagents/hooks.json or CLAW_HOOK_* env vars. Hooks receive JSON on stdin and return JSON on stdout; pre_tool_use can block or modify args. 10s timeout, fail-open. Opt-in (sketch below). |
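A minimal external hook might look like this (a sketch: the stdin/stdout JSON contract is documented, but the field names used here are assumptions, not the published schema):

#!/usr/bin/env python3
# Hypothetical pre_tool_use hook: read the event from stdin, answer on stdout.
import json
import sys

event = json.load(sys.stdin)
# Field names "tool", "args", "allow", "reason" are illustrative assumptions.
if event.get("tool") == "execute" and "rm -rf" in str(event.get("args", "")):
    json.dump({"allow": False, "reason": "destructive command blocked"}, sys.stdout)
else:
    json.dump({"allow": True}, sys.stdout)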

Also:

  • Anthropic cache token extraction — cache_creation_tokens and cache_read_tokens now populated from both streaming and non-streaming Anthropic responses.
  • AgentState.session_file — New field tracks the session JSONL path when persistence is enabled.
  • New public exports — ErrorClass, ErrorDescriptor, classify_error, get_recovery_recipe, SessionWriter, SessionReader, list_sessions, HooksConfig, ExternalHookRunner, load_hooks_config.

v5.27.3 — Gemini Signature Regression Coverage

  • Gemini signature regression test — Added targeted tests for _serialize_gemini_parts to ensure thought_signature is propagated to sibling parallel function_call parts.
  • Parallel integration test reliability — Fixed an integration test fixture validation mismatch so large-output parallel execution is validated correctly.

v5.27.2 — Gemini 3 Thought Signature Fix

  • Gemini 3 Propagation — Propagated thought_signature to all parallel function_call parts in the response, preventing 400 INVALID_ARGUMENT during multi-tool execution.

v5.27.1 — Timeout Bugfix

  • Fixed NameError — Added a timeout_s parameter to ClawAgent.invoke, fixing a NameError raised when no global timeout was provided.

v5.27.0 — Claude Code Architectural Patterns

Ported 10 production-grade architectural patterns from Anthropic's Claude Code directly into ClawAgents. These features are controllable via environment variables or constructor injection:

| Feature | Description |
|---|---|
| Micro-Compact Memory | Aggressively clears giant tool results to save context. |
| File History Snapshots | Safely backs up files to .clawagents/snapshots/ before writing. |
| Prompt Cache Tracking | Real-time stats on Anthropic/OpenAI prompt cache hits. |
| Typed Memory Taxonomy | Auto-parses project, user, and feedback memories via frontmatter. |
| Write-Ahead Logging (WAL) | Crash-resilient interaction logging. |
| Granular Permission Rules | Define glob-based Allow/Deny execution policies. |
| Background Memory Extraction | Periodically scans conversations and extracts metadata. |
| Orchestration | Access to run_forked_agent and run_coordinator (swarm routing). |

v5.26.0 — Bundled OpenViking Skill, Updated ByteRover Skill

| Feature | Description |
|---|---|
| OpenViking skill | Bundled skills/openviking/SKILL.md teaches the agent to use the ov CLI for tiered context retrieval (L0/L1/L2). Auto-enabled when ov is on PATH. |
| ByteRover skill updated | Refreshed to match byterover-cli v1.8.0 — added --headless and --folder, removed obsolete commands. |
| Generic bundled skill loader | The skill loader now scans the entire bundled skills/ directory instead of hardcoding individual skills. |

v5.25.0 — Gemini Streaming Fix

| Feature | Description |
|---|---|
| Fix Gemini SDK warning | Eliminated the "non-text parts in the response" warning by iterating candidates[].content.parts[] instead of accessing the .text property on streaming chunks containing function calls. |
| Consistent text extraction | The streaming path now uses the same parts-based extraction as the non-streaming _request_once, filtering out thought parts. |

v5.24.0 — Zero-Config Channel Auto-Detection

| Feature | Description |
|---|---|
| Auto-detect channels from env vars | clawagents --serve now reads TELEGRAM_BOT_TOKEN, WHATSAPP_AUTH_DIR, and SIGNAL_ACCOUNT from .env and auto-starts the ChannelRouter — zero code required. |
| --doctor channel status | clawagents --doctor reports which messaging channels are configured. |
| .env.example updated | All channel env vars documented with inline comments. |
| --init scaffold | clawagents --init generates .env with channel variables pre-commented. |

v5.23.0 — WebSocket Gateway, Multi-Channel Messaging (Telegram, WhatsApp, Signal)

Full multi-platform messaging support inspired by OpenClaw's channel architecture:

| Feature | Description |
|---|---|
| WebSocket gateway | FastAPI native WebSocket endpoint at /ws alongside the existing HTTP. Methods: chat.send (streaming events), chat.history, chat.inject, ping. Auth via ?token= query param. |
| Channel adapter interface | ChannelAdapter protocol + ChannelMessage dataclass — a standard contract for any messaging platform. |
| Telegram adapter | Uses python-telegram-bot. Config: {"bot_token": "..."} |
| WhatsApp adapter | Baileys subprocess (Node.js) or WhatsApp Business API. Config: {"mode": "baileys", "auth_dir": ".whatsapp-auth"} |
| Signal adapter | Uses signal-cli subprocess with JSON-RPC. Config: {"account": "+1234567890"} |
| Channel router | ChannelRouter dispatches inbound messages to agents and routes replies back. Per-session serialization via KeyedAsyncQueue, optional debouncer, hooks. |
import asyncio

from clawagents import create_claw_agent, ChannelRouter
from clawagents.channels.telegram import TelegramAdapter
from clawagents.channels.whatsapp import WhatsAppAdapter

async def main():
    # One agent factory shared across channels; the router serializes per session.
    router = ChannelRouter(lambda: create_claw_agent("gpt-5-mini"))
    router.register(TelegramAdapter())
    router.register(WhatsAppAdapter())
    await router.start_all({
        "telegram": {"bot_token": "123456:ABC..."},
        "whatsapp": {"mode": "baileys", "auth_dir": ".whatsapp-auth"},
    })

asyncio.run(main())

v5.22.0 — Tool Result Caching, Parameter Validation & ComposeTool

3 features inspired by ToolUniverse's tool management patterns:

| Feature | Description |
|---|---|
| Tool result caching | An LRU in-memory cache (ResultCacheManager) avoids redundant tool calls. Tools opt in with cacheable = True. Per-tool TTL overrides via result_cache.set_tool_ttl(). Built-in cacheable tools: read_file, grep, web_fetch. Default: 256 entries, 60s TTL. |
| Parameter validation + coercion | validate_tool_args() checks required params and type-matches before execution. Lenient coercion handles common LLM quirks: "42" → 42, "true" → True, JSON strings → objects/arrays (illustrative sketch below). Enabled by default on ToolRegistry. |
| ComposeTool | create_compose_tool() chains multiple tools in a deterministic pipeline without an LLM in the loop. Lighter than sub-agents for predictable workflows. Steps receive previous results and a call_tool helper; failures short-circuit with clear error messages. |
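The coercion behavior can be illustrated with a standalone sketch (not the library's validate_tool_args, just the idea):

import json

def coerce(value, expected_type):
    # Illustrative lenient coercion for common LLM quirks.
    if isinstance(value, expected_type):
        return value
    if expected_type is int and isinstance(value, str) and value.lstrip("-").isdigit():
        return int(value)                      # "42" -> 42
    if expected_type is bool and isinstance(value, str):
        return value.lower() == "true"         # "true" -> True
    if expected_type in (dict, list) and isinstance(value, str):
        return json.loads(value)               # JSON string -> object/array
    raise TypeError(f"cannot coerce {value!r} to {expected_type.__name__}")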

v5.21.0 — Context Engine, Loop Detection & Compaction Overhaul

8 improvements inspired by the latest OpenClaw architecture:

| Feature | Description |
|---|---|
| Chunked compaction with retry | Compaction now splits old messages into ~30K-token chunks, summarizes each separately with up to 3 retries (exponential backoff), and explicitly preserves file paths, function names, error messages, and commands verbatim. |
| Better loop detection | Result hashing detects "different args, same result" stalls; ping-pong detection catches A→B→A→B oscillation; a global circuit breaker hard-stops at 30 no-progress calls (illustrative sketch below). |
| Context pruning (soft-trim) | New _soft_trim_messages runs at 60% context usage (before the 75% compaction trigger). Trims old tool results over 1000 chars, removes duplicates, and stubs stale image data. |
| Skill eligibility gating | Skills can declare requires: in YAML frontmatter (os, bins, env). Ineligible skills are filtered at load time. |
| Skill prompt budget | Max 20 skills / 4000 chars injected into the system prompt. The full list stays accessible via list_skills. |
| Control token sanitization | Strips leaked model control tokens (<\|assistant\|>, <\|endoftext\|>, full-width variants) from final output. |
| Head+tail truncation | Eviction fallback and content preview now use head+tail truncation (preserving error messages at the end). Also fixes a bug where content with few lines but a huge character count bypassed preview truncation. |
| Pluggable context engine | New ContextEngine ABC with after_turn, compact, bootstrap, and cleanup lifecycle hooks. DefaultContextEngine is a no-op pass-through. Registry: register_context_engine() / resolve_context_engine(). |
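The result-hashing stall detector can be sketched as follows (illustrative only; ClawAgents' internals may differ):

import hashlib

class StallDetector:
    """Flag 'different args, same result' loops by hashing tool results."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.last_hash = None
        self.repeats = 0

    def observe(self, tool_result: str) -> bool:
        h = hashlib.sha256(tool_result.encode()).hexdigest()
        self.repeats = self.repeats + 1 if h == self.last_hash else 0
        self.last_hash = h
        return self.repeats >= self.threshold  # True -> time to inject a rethink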

v5.20.4 — Gemini MALFORMED_FUNCTION_CALL Retry

| Feature | Description |
|---|---|
| Gemini malformed FC retry | When Gemini returns finish_reason=MALFORMED_FUNCTION_CALL with 0 parts (common with complex parallel tool calls), the provider now automatically retries with tool_config.mode=ANY instead of stopping the agent. |
| Streaming + non-streaming | The fix applies to both the streaming (_stream_with_retry) and non-streaming (_request_once) code paths. |
| Recursion guard | A _malformed_retry flag prevents infinite retry loops if mode=ANY also fails. |

v5.20.3 — GPT-5 Temperature Corrections

| Feature | Description |
|---|---|
| GPT-5-nano temperature | Live API tests confirmed gpt-5-nano requires temperature=1 (not 0). Fixed in _FIXED_TEMPERATURE_MODELS. |

v5.20.0 — Temperature & Compaction Fixes

| Feature | Description |
|---|---|
| Temperature fix | GPT-5 models are no longer forced to temperature=1.0; only o-series models (o1, o3, o4-mini) retain the fixed override. This restores deterministic behavior when TEMPERATURE=0 is set. |
| Compaction overhaul | Context compaction no longer causes the agent to "forget" what it was doing. Five improvements: (1) RECENT_MESSAGES_TO_KEEP increased from 6 to 20, (2) tool call/result pairs are never split, (3) the summary prompt now includes the original task plus structured preservation instructions, (4) the compacted summary is inserted as role="user" with a [System — Compacted History] prefix instead of role="assistant", (5) the text log for summarization includes structured [TOOL CALLS] and [TOOL RESULT] markers. |
| Debug cleanup | All development instrumentation removed from production code. |

v5.19.0 — Anthropic Provider, Security, Architecture Overhaul

| Feature | Description |
|---|---|
| Anthropic/Claude provider | First-class support for Claude models via ANTHROPIC_API_KEY. Install with pip install clawagents[anthropic]. |
| Optional Gemini | google-genai is now an optional dependency. Install with pip install clawagents[gemini] or pip install clawagents[all]. |
| py.typed + __version__ | PEP 561 type stub marker and clawagents.__version__ export for downstream tools. |
| Lazy config loading | No more module-level side effects — .env discovery happens on the first load_config() call. |
| Lazy Path.cwd() | All module-level Path.cwd() calls replaced with lazy functions — safe to import from any directory. |
| Gateway authentication | The GATEWAY_API_KEY env var enables Bearer token auth on POST endpoints. |
| CORS support | The gateway now supports GATEWAY_CORS_ORIGINS for cross-origin requests. |
| Improved blocked patterns | Expanded dangerous-command detection with regex matching. |
| API key masking | clawagents --doctor now masks keys (shows ********...last4). |
| Azure detection | New OPENAI_API_TYPE=azure env var for explicit Azure OpenAI configuration. |
| Global timeout | --timeout N CLI flag and CLAW_TIMEOUT env var for agent run time limits. |
| --verbose / --quiet | CLI flags for controlling output verbosity. |
| --prune-trajectories N | Delete trajectory files older than N days. |
| Lesson export/import | export_lessons() / import_lessons() for sharing lessons between projects. |
| Trajectory pruning | prune_trajectories(max_age_days) utility function. |
| pydantic-settings | Now properly listed as a dependency (it was missing). |
| pyproject.toml metadata | Added license, authors, classifiers, URLs, and optional dependency groups. |
| New tests | Tests for _repair_json, the trajectory recorder, and the config module. |

v5.18.0 — Doctor, Trajectory Inspector & Config Improvements

| Feature | Description |
|---|---|
| clawagents --doctor | New diagnostic command checks .env discovery, API keys, active model, LLM settings, PTRL flags, local endpoint reachability, trajectory history, and AGENTS.md presence. |
| clawagents --trajectory [N] | Inspect the last N run summaries: score, quality, failures, judge verdict, duration — human-readable trajectory output. |
| Startup banner | Every --task and --serve now prints provider=X model=Y env=Z ptrl=... for instant visibility into the active config. |
| CLAWAGENTS_ENV_FILE | New env var to explicitly point to a .env file path. Priority: CLAWAGENTS_ENV_FILE > cwd/.env > cwd/../.env. Useful for CI, Docker, and multi-project setups. |
| Publish hygiene | GitHub releases no longer include .clawagents/, .pytest_cache/, logs, or other runtime artifacts. |
| Config/docs consistency tests | 6 pytest tests verify that every EngineConfig field appears in .env.example and README.md. |
| --port in TypeScript | The gateway server port is now configurable via --port N in the TypeScript CLI. |

v5.17.0 — Quick Start Scaffold & Examples

| Feature | Description |
|---|---|
| clawagents --init | New CLI command scaffolds a starter project in the current directory: generates .env (with all providers commented out), run_agent.py (a ready-to-run starter script with 5 provider options), and AGENTS.md (memory template). |
| clawagents --help | Shows usage with examples and quick start instructions. |
| clawagents --task | Run a single task from the command line. |
| clawagents --serve | Start the HTTP gateway server from the CLI. |
| Examples directory | 8 ready-to-run example scripts: OpenAI, Gemini, Azure, Ollama, vLLM, Bedrock, custom tools, and multi-sample comparison. |
| README overhaul | New "30-Second Quick Start" section, examples table, clearer onboarding flow. |

v5.16.0 — LLM-as-Judge & Thinking Token Preservation

| Feature | Description |
|---|---|
| G. LLM-as-Judge verification | After each run (when learn=True), a separate, focused LLM call evaluates whether the task was actually accomplished. Returns a 0-3 score with justification — more reliable than heuristic scoring. Results stored as judge_score and judge_justification on RunSummary. |
| H. Thinking token preservation | Models like Qwen3 and DeepSeek that emit <think>...</think> blocks are now fully supported. Thinking content is extracted before tool-call parsing, preserved on messages and trajectory records, and stripped from visible output. Available via the strip_thinking_tokens() utility (usage below). |
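Typical usage of the utility (a sketch; the top-level import path is an assumption):

from clawagents import strip_thinking_tokens  # import path assumed

raw = "<think>plan the answer...</think>Here are the files: a.py, b.py"
print(strip_thinking_tokens(raw))  # -> "Here are the files: a.py, b.py"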

v5.15.0 — Deterministic Verification & GRPO-Inspired Comparison

| Feature | Description |
|---|---|
| A. Deterministic rewards | Tool execution results (exit codes, test pass/fail counts) are now used as objective ground truth for scoring, replacing pure LLM self-assessment. Each turn and run summary includes deterministic_score and verified_score fields. |
| B. Multi-sample comparison | The new agent.compare(task, n_samples=3) method runs the same task N times and picks the best result using objective scoring — inspired by SkyRL's Group Relative Policy Optimization (GRPO). See the example after this table. |
| C. Task-type-aware verification | Auto-detects the task type (coding/file/search/refactor/general) and applies type-specific verifiers. Coding tasks use test results; file tasks check write success; refactoring checks edits + tests. |
| D. Progressive context caching | The system prompt token count is computed once and cached, avoiding redundant re-counting on every turn. Logged at startup for budget visibility. |
| E. RFT-ready transitions | Each trajectory now exports {run_id}_rft.json with (observation, action, reward, done) tuples per step — structured for future Rejection Fine-Tuning pipelines. |
| F. Adaptive rethink threshold | The rethink trigger threshold now adjusts dynamically: complex tasks (coding/refactor) get more patience (threshold=5), simple tasks (search/file) trigger sooner (threshold=3), and late in a run the threshold drops to the minimum (2). |
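The documented compare() call in action (a sketch; this assumes compare is awaitable like invoke and returns the same result object shape):

import asyncio
from clawagents import create_claw_agent

async def main():
    agent = create_claw_agent("gpt-5-mini")
    # Runs the task 3 times and keeps the best-scoring attempt.
    best = await agent.compare("Fix the failing test in tests/test_io.py", n_samples=3)
    print(best.result)  # .result attribute assumed, as on invoke()

asyncio.run(main())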

v5.14.0 — SkyRL-Inspired PTRL Improvements

| Feature | Description |
|---|---|
| 🚦 Quality gate for lesson extraction | Lessons are only extracted from runs with mixed outcomes (both successes and failures). Zero-variance runs (all-success or all-failure with no contrast) are skipped — inspired by SkyRL's GRPO dynamic sampling. |
| ⏰ Lesson staleness decay | Each lesson block is now timestamped + model-tagged (@timestamp [model]). load_lessons(max_age_s=N) filters out stale lessons, preventing prompt pollution from outdated advice. |
| 🔤 Format vs. logic failure classification | Every failed tool call is classified as "format" (bad JSON, wrong params) or "logic" (valid call, wrong approach). Rethink messages now include format-specific or strategy-specific guidance. |
| 📊 Per-step reward attribution | Each TurnRecord now includes observation_context (what the agent saw before deciding), productivity_score (-1.0 to 1.0), and failure_type per tool call. RunSummary adds format_failures, logic_failures, has_mixed_outcomes, and finish_reason. |
| 🧠 Enhanced self-analysis prompt | Post-run LLM analysis now receives a failure-type breakdown and productivity scores for targeted lesson extraction. |

v5.13.0 — Prompt-Time Reinforcement Learning (PTRL)

| Feature | Description |
|---|---|
| 🧠 PTRL: Post-run self-analysis | After each run, the LLM reviews its own trajectory and extracts 2-5 actionable lessons, saved to .clawagents/lessons.md. |
| 📖 PTRL: Pre-run lesson injection | On subsequent runs, stored lessons are injected into the system prompt so the agent avoids past mistakes. |
| 🔄 PTRL: Enhanced mid-run rethink | When consecutive failures trigger a rethink, relevant past lessons are included in the rethink message. |
| 🎛️ learn flag / CLAW_LEARN env | Opt-in via learn=True or CLAW_LEARN=1. Automatically enables trajectory logging (example after this table). |
| 📏 Default context_window → 1,000,000 | Increased from 128,000 to support modern large-context models. |
| 🔧 macOS sandbox symlink fix | LocalBackend now resolves symlinks at init (fixes /var → /private/var on macOS). |
| ✅ All 150 tests passing | Fixed 48 pre-existing test failures (sandbox path traversal, LLMMessage subscript, mock assertions). |
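Opting in from code might look like this (a sketch; it assumes learn=True is accepted by create_claw_agent, matching the flag documented above):

import asyncio
from clawagents import create_claw_agent

async def main():
    agent = create_claw_agent("gpt-5-mini", learn=True)  # or set CLAW_LEARN=1
    result = await agent.invoke("Add type hints to utils.py")
    print(result.result)
    # Extracted lessons land in .clawagents/lessons.md and are injected
    # into the system prompt on the next run.

asyncio.run(main())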

v5.12.1 — Streamlit / Jupyter Compatibility

| Feature | Description |
|---|---|
| 🔧 Signal handler fix | add_signal_handler now catches RuntimeError in addition to NotImplementedError/OSError, fixing crashes in Streamlit, Jupyter, and other non-main-thread environments. |

v5.12.0 — Gemini 3 Thought Signature Support

| Feature | Description |
|---|---|
| 🧠 thought_signature preservation | Gemini 3 thinking models (e.g. gemini-3-flash-preview) require thought and thought_signature fields to be echoed back during multi-turn function calling. ClawAgents now captures the full response parts and replays them verbatim, preventing 400 errors. |
| 🔄 gemini_parts field | A new optional field on LLMMessage and LLMResponse carries raw Gemini response parts through the conversation history. Used automatically — no user action required. |

v5.11.0 — Configurable Limits

| Feature | Description |
|---|---|
| 🔢 max_iterations | Now settable at construction or via the MAX_ITERATIONS env var (default 200; was hardcoded in the caller). |
| 📏 preview_chars | Tool-output preview length, configurable via the CLAW_PREVIEW_CHARS env var (default 120). |
| 📄 response_chars | Response text length in trajectory records, via the CLAW_RESPONSE_CHARS env var (default 500). |
| ⚙️ Priority | Explicit param > env var > default, for all three. |

v5.10.0 — Discrete Reward Bands & Weighted Scoring

| Feature | Description |
|---|---|
| 🎯 Discrete reward bands | Run scores mapped to -1 … +3 bands (inspired by CUDA-Agent PPO reward shaping). |
| ⚖️ Weighted execution scoring | execute, shell, and run_code weighted 2× higher than generic tools. |
| 🏷️ Run quality grading | Each run classified as clean, noisy, or failed for trajectory filtering. |
| 🛡️ Gameable tool exclusion | think, todolist, use_skill, etc. excluded from scoring to prevent reward hacking. |

v5.9.0 — Trajectory Logging & Rethink

| Feature | Description |
|---|---|
| 📊 Trajectory logging | Structured recording of every turn, tool call, and outcome to runs.jsonl. |
| 🔄 Consecutive-failure rethink | After 3 consecutive meaningful failures, injects a system "rethink" prompt. |
| 🎛️ Opt-in flags | trajectory=True / CLAW_TRAJECTORY=1 and rethink=True / CLAW_RETHINK=1. |

v5.8.0 — JSON Resilience

| Feature | Description |
|---|---|
| 🔧 JSON repair | The _repair_json() utility fixes JSON truncated by hitting max_completion_tokens. |
| 🔍 Truncated JSON retry | Detects incomplete JSON tool calls and prompts the LLM to resend. |

v5.7.0 — Model-Specific Temperature

| Feature | Description |
|---|---|
| 🌡️ Fixed-temperature models | Reasoning models (o-series, gpt-5, gpt-5-mini, gpt-5-turbo) auto-override to temperature=1.0. Non-reasoning models (gpt-5-nano, gpt-5-micro, gpt-4o) respect the configured temperature. |
| 🌡️ Configurable temperature | TEMPERATURE env var plus a temperature parameter on create_claw_agent. |

v5.6.0 — LLM Parameter Fixes

| Feature | Description |
|---|---|
| 🔑 max_completion_tokens | OpenAI calls now use max_completion_tokens (replacing the deprecated max_tokens). |
| 🔑 max_output_tokens | Gemini calls now pass max_output_tokens correctly. |
| ⚙️ Config priority | Explicit param > .env > default — no more shadowing of env values. |

v5.5.0 — Foundation

| Feature | Description |
|---|---|
| 🔌 Pluggable Sandbox | SandboxBackend protocol with LocalBackend + InMemoryBackend. |
| 🌐 Gateway Server | FastAPI server with SSE streaming and a 4-lane queue. |
| 🗂️ Advanced FS Tools | tree, diff, insert_lines. |
| 🧠 Think Tool | Structured reasoning without side effects. |
| 🌐 Web Fetch | URL fetching with HTML cleanup. |
| 💬 Ask User | Interactive stdin-based input. |
| 📜 History Offloading | Full audit trail preserved after compaction. |
| 🔒 Tool Access Control | block_tools() / allow_only_tools() at runtime. |
| 💉 Context Injection | inject_context() hook for every LLM call. |
| ✂️ Output Truncation | truncate_output() to cap tool output size. |

Trajectory Logging & RL-Inspired Scoring

ClawAgents includes an optional trajectory system inspired by reinforcement learning techniques from CUDA-Agent and OpenClaw-RL. Enable it with trajectory=True or CLAW_TRAJECTORY=1.

What gets logged

Every agent run records:

  • Turn-level data: tool calls, arguments, success/failure, output previews
  • Weighted turn scores: execution tools (shell, code runners) weighted 2× higher than generic tools
  • Run summary: total turns, tool calls, successes/failures, elapsed time

Discrete reward bands

Each run receives a score from -1 to +3:

| Score | Meaning |
|---|---|
| +3 | All tools succeeded, task completed cleanly |
| +2 | Minor hiccups but overall success |
| +1 | Partial success with some failures |
| 0 | Inconclusive — mixed results |
| -1 | Majority of tool calls failed |

Quality grading

Runs are classified for downstream filtering:

| Quality | Criteria |
|---|---|
| clean | Score ≥ 2 and ≤ 2 mid-run failures |
| noisy | Score ≥ 0 but too many mid-run failures |
| failed | Score < 0 |

Anti-gaming protections

Tools like think, todolist, use_skill, list_skills, and update_todo are excluded from scoring โ€” they can't inflate success rates.

Consecutive-failure rethink

With rethink=True or CLAW_RETHINK=1, the agent monitors tool outcomes in real-time. After 3 consecutive meaningful failures, it injects a system message:

"You have had 3 consecutive tool failures. Stop and rethink your approach before continuing."

This simple mechanism prevents the agent from spiraling into repeated failed attempts.

Output

Run summaries are appended to .clawagents/trajectories/runs.jsonl:

{
  "run_id": "a1b2c3d4",
  "model": "gpt-5-mini",
  "total_turns": 8,
  "tool_calls": 12,
  "successes": 10,
  "failures": 2,
  "run_score": 2,
  "quality": "clean",
  "elapsed_ms": 45230,
  "turns": [...]
}
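Because the file is plain JSONL, run summaries are easy to post-process; for example, a short sketch that keeps only clean runs:

import json
from pathlib import Path

lines = Path(".clawagents/trajectories/runs.jsonl").read_text().splitlines()
runs = [json.loads(line) for line in lines if line.strip()]
clean = [r for r in runs if r.get("quality") == "clean"]
print(f"{len(clean)}/{len(runs)} clean runs")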

Roadmap

  • Docker sandbox backend (protocol ready)
  • Semantic browser automation (accessibility tree)
  • Prompt caching (Anthropic-style)
  • Persistent memory learning from trajectory data (advanced — RFT-style rule extraction)
  • Post-run self-analysis + lesson extraction ✅ (v5.13 — PTRL)
  • Pre-run lesson injection ✅ (v5.13 — PTRL)
  • Enhanced mid-run rethink with past lessons ✅ (v5.13 — PTRL)
  • Trajectory logging + discrete reward bands ✅ (v5.9–5.10)
  • Consecutive-failure rethink injection ✅ (v5.9)
  • Weighted execution scoring + quality grading ✅ (v5.10)
  • JSON repair + truncated JSON retry ✅ (v5.8)
  • Model-specific temperature override ✅ (v5.7)
  • Configurable temperature / max_completion_tokens ✅ (v5.6)
  • Pluggable sandbox backend ✅ (v5.5)
  • Lane-based queue serialization ✅ (v5.5)
  • Skill progressive disclosure ✅ (v5.5)
  • Gateway HTTP server ✅ (v5.5)

License

MIT


Built with 🦞 by the ClawAgents team
