
🛡️ ToolGuard

Reliability testing for AI agent tool chains.

Catch cascading failures before production. Make agent tool calling as predictable as unit tests made software reliable.



🧠 What ToolGuard Actually Solves

Right now, developers hold back from deploying AI agents because the agents are fundamentally unstable. They crash.

There are two layers to AI:

  1. Layer 1: Intelligence (evals, reasoning, accurate answers)
  2. Layer 2: Execution (tool calls, chaining, JSON payloads, APIs)

ToolGuard does not test Layer 1. We do not care if your AI is "smart" or makes good decisions. That is what eval frameworks are for.

ToolGuard stress-tests Layer 2. We solve the problem of agents crashing at 3 AM because the LLM hallucinated a JSON key, passed a string instead of an int, or an external API timed out.

"We don't make AI smarter. We make AI systems not break."


🚀 Zero Config — Try It in 60 Seconds

pip install py-toolguard
toolguard run my_agent.py

That's it. ToolGuard auto-discovers your tools, fuzzes them with hallucination attacks (nulls, type mismatches, missing fields), and prints a reliability report. Zero config needed.

🚀 Auto-discovered 3 tools from my_agent.py
   • fetch_price (2 params)
   • calculate_position (3 params)
   • generate_alert (2 params)

🧪 Running 42 fuzz tests...

╔══════════════════════════════════════════════════════════════╗
║  Reliability Score: my_agent                                  ║
╠══════════════════════════════════════════════════════════════╣
║  Score:       64.3%                                           ║
║  Risk Level:  🟠 HIGH                                         ║
║  Deploy:      🚫 BLOCK                                        ║
╠══════════════════════════════════════════════════════════════╣
║  ⚠️  Top Risk: Null values propagating through chain          ║
║  ⚠️  Bottleneck Tools:                                        ║
║    → fetch_price       (50% success)                          ║
║    → generate_alert    (42% success)                          ║
╚══════════════════════════════════════════════════════════════╝

💡 fetch_price: Add null check for 'ticker' — LLM hallucinated None
💡 generate_alert: Field 'severity' expects int, got str from upstream tool
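
The 💡 suggestions map to small defensive edits in the tool itself. A hedged sketch of a hardened fetch_price (the get_quote helper and the exact guard are illustrative, not something ToolGuard generates for you):

from toolguard import create_tool

def get_quote(ticker: str) -> float:
    """Placeholder quote source; swap in your real market-data call."""
    return 189.45

@create_tool(schema="auto")
def fetch_price(ticker: str | None) -> dict:
    # Null check suggested by the report: the LLM occasionally hallucinates None for 'ticker'.
    if not ticker:
        return {"ticker": None, "price": None, "error": "missing ticker"}
    return {"ticker": ticker.upper(), "price": get_quote(ticker)}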

Or run the same checks from Python instead of the CLI:

from toolguard import create_tool, test_chain, score_chain

@create_tool(schema="auto")
def parse_csv(raw_csv: str) -> dict:
    lines = raw_csv.strip().split("\n")
    headers = lines[0].split(",")
    records = [dict(zip(headers, line.split(","))) for line in lines[1:]]
    return {"headers": headers, "records": records, "row_count": len(records)}

report = test_chain(
    [parse_csv],
    base_input={"raw_csv": "name,age\nAlice,30\nBob,35"},
    test_cases=["happy_path", "null_handling", "malformed_data", "type_mismatch", "missing_fields"],
)

score = score_chain(report)
print(score.summary())

🤖 How ToolGuard is Different

Most testing tools (LangSmith, Promptfoo) test your agent by sending prompts to a live LLM. That approach is slow, expensive, and non-deterministic.

ToolGuard does NOT use an LLM to run its tests.

When you decorate a function with @create_tool(schema="auto"), ToolGuard reads your Python type hints and automatically generates a Pydantic schema. It then uses that schema to know exactly which fields to break, which types to swap, and which values to null โ€” no manual configuration needed.

It acts like a deterministic fuzzer for AI tool execution, programmatically injecting the exact types of bad data that an LLM would accidentally generate in production:

  1. Missing dictionary keys
  2. Null values propagating down the chain
  3. str instead of int
  4. Massive 10MB payloads to stress your server
  5. Extra/unexpected fields in JSON

ToolGuard doesn't test if your AI is smart. It tests if your Python code is bulletproof enough to survive when your AI does something stupid — running in 1 second and costing $0 in API fees.
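
For intuition, here is a hedged, self-contained sketch of the kind of deterministic mutations such a fuzzer derives from a known-good payload (illustrative of the technique only, not ToolGuard's internal implementation):

import copy

def mutate_payload(base: dict) -> list[dict]:
    """Generate LLM-style bad payloads from a known-good input (illustrative only)."""
    mutations = []
    for key in base:
        nulled = copy.deepcopy(base)
        nulled[key] = None                       # null value propagating down the chain
        mutations.append(nulled)

        missing = copy.deepcopy(base)
        del missing[key]                         # missing dictionary key
        mutations.append(missing)

        swapped = copy.deepcopy(base)
        swapped[key] = "42" if not isinstance(base[key], str) else 42   # type mismatch
        mutations.append(swapped)

    oversized = copy.deepcopy(base)
    oversized["blob"] = "x" * 10_000_000         # massive payload to stress the tool
    mutations.append(oversized)

    extra = copy.deepcopy(base)
    extra["unexpected_field"] = "surprise"       # extra/unexpected field in the JSON
    mutations.append(extra)
    return mutations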


Features

🛡️ Layer-2 Security Firewall (V3.0)

ToolGuard includes an execution-layer security framework that protects production servers from common LLM exploits.

  • Human-in-the-Loop Risk Tiers: Mark destructive tools with @create_tool(risk_tier=2). ToolGuard mathematically intercepts these calls and natively streams terminal approval prompts before execution, gracefully protecting asyncio event loops and headless daemon environments.
  • Recursive Prompt Injection Fuzzing: The test_chain fuzzer automatically injects [SYSTEM OVERRIDE] execution payloads into your pipelines. A bespoke recursive depth-first memory parser scans internal custom object serialization, byte arrays, and .casefold() string mutations to eliminate zero-day blind spots.
  • Golden Traces (DAG Instrumentation): With two lines of code (with TraceTracker() as trace:), ToolGuard natively intercepts Python contextvars to construct a chronologically perfect Directed Acyclic Graph of all tools orchestrated by LangChain, CrewAI, Swarm, and AutoGen.
  • Non-Deterministic Verification: Punishing an AI for self-correcting is an anti-pattern. Developers use trace.assert_sequence(["auth", "refund"]) to mathematically enforce mandatory compliance checkpoints while permitting the LLM complete freedom to autonomously select supplementary network tools.

🔍 Schema Validation

Automatic Pydantic input/output validation from type hints. No manual schemas needed.

@create_tool(schema="auto")
def fetch_price(ticker: str) -> dict:
    ...
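
Under the hood, schema="auto" amounts to deriving a Pydantic model from the signature. Roughly the equivalent of the following (the model name is illustrative; ToolGuard's generated model may differ):

from pydantic import BaseModel, ValidationError

class FetchPriceInput(BaseModel):   # hypothetical name, derived from fetch_price's type hints
    ticker: str

FetchPriceInput.model_validate({"ticker": "AAPL"})     # passes
try:
    FetchPriceInput.model_validate({"ticker": None})   # the hallucinated-None case
except ValidationError as exc:
    print(exc)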

🔗 Chain Testing

Test multi-tool chains against 8 edge-case categories: null handling, type mismatches, missing fields, malformed data, large payloads, and more.

report = test_chain(
    [fetch_price, calculate_position, generate_alert],
    base_input={"ticker": "AAPL"},
    test_cases=["happy_path", "null_handling", "type_mismatch"],
)

⚡ Async Support

Works with both def and async def tools transparently. No special flags needed.

@create_tool(schema="auto")
async def fetch_from_api(url: str) -> dict:
    async with httpx.AsyncClient() as client:
        resp = await client.get(url)
        return resp.json()

# Same API — ToolGuard handles the async automatically
report = test_chain([fetch_from_api, process_data], assert_reliability=0.95)

🦇 Immersive Live Dashboard

When testing locally, you don't have to stare at basic print logs. By passing --dashboard, ToolGuard launches a stunning, high-contrast, dark-mode terminal UI (built on Textual).

toolguard run my_agent.py --dashboard

It streams live, concurrent fuzzing results as they happen, calculates metrics in real time, and tracks exactly which functions crash under payload injection — all inside a dedicated hacker-style "Mission Control" interface.

📊 Reliability Scoring

Quantified trust with risk levels and deployment gates.

score = score_chain(report)
if score.deploy_recommendation.value == "BLOCK":
    sys.exit(1)  # CI/CD gate
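
The same gate drops into a pytest suite. A hedged sketch reusing the API shown above (the imported tool module and test name are illustrative):

from toolguard import test_chain, score_chain
from my_agent import fetch_price, calculate_position, generate_alert  # your own tools

def test_chain_survives_hallucinations():
    report = test_chain(
        [fetch_price, calculate_position, generate_alert],
        base_input={"ticker": "AAPL"},
    )
    score = score_chain(report)
    assert score.deploy_recommendation.value != "BLOCK", score.summary()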

โช Local Crash Replay

When a remote tool crashes in production or tests, ToolGuard automatically dumps the structured JSON payload. You can instantly replay the exact crashing state locally to view the stack trace.

toolguard run my_agent.py --dump-failures
toolguard replay .toolguard/failures/fail_1774068587_0.json

🎯 Edge-Case Test Coverage

ToolGuard gives you pytest-style coverage metrics. Instead of line coverage, it calculates exactly what percentage of the 8 known LLM hallucination categories (nulls, missing fields, type mismatches, etc.) your tests cover, and lists what is untested.

⚡ The Minimal API

For rapid Jupyter notebook testing and quick demos, use the portable one-line Python wrapper.

from toolguard import quick_check

quick_check(my_agent_function, test_cases=["happy_path", "null_handling"])

🔄 Retry & Circuit Breaker

Production-grade resilience patterns built-in.

from toolguard import with_retry, RetryPolicy, CircuitBreaker, with_circuit_breaker

@with_retry(RetryPolicy(max_retries=3, backoff_base=0.5))
def call_api(data: dict) -> dict: ...

breaker = CircuitBreaker(failure_threshold=5, reset_timeout=60)

@with_circuit_breaker(breaker)
def call_flaky_service(data: dict) -> dict: ...

🖥️ CLI

toolguard run my_agent.py                          # Zero-config auto-test
toolguard run my_agent.py --dashboard              # 🦇 Live immersive TUI control center
toolguard test --chain my_chain.yaml               # YAML-based chain test
toolguard test --chain my_chain.yaml --html out.html  # HTML report
toolguard test --chain my_chain.yaml --junit-xml out.xml  # JUnit XML for CI
toolguard badge                                    # Generate reliability badge
toolguard check --tools my_tools.py                # Check compatibility
toolguard observe --tools my_tools.py              # View tool stats
toolguard init --name my_project                   # Scaffold project

🔌 Native Framework Integrations

ToolGuard works with your existing tools. No rewrites needed — just wrap and fuzz.

# 🦜🔗 LangChain
from langchain_core.tools import tool
from toolguard import test_chain
from toolguard.integrations.langchain import guard_langchain_tool

@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for {query}"

guarded = guard_langchain_tool(search)
report = test_chain([guarded], base_input={"query": "hello"})

# 🚀 CrewAI
from crewai.tools import BaseTool
from toolguard.integrations.crewai import guard_crewai_tool

guarded = guard_crewai_tool(my_crew_tool)

# 🦙 LlamaIndex
from llama_index.core.tools import FunctionTool
from toolguard.integrations.llamaindex import guard_llamaindex_tool

llama_tool = FunctionTool.from_defaults(fn=my_function)
guarded = guard_llamaindex_tool(llama_tool)

# 🤖 Microsoft AutoGen
from autogen_core.tools import FunctionTool
from toolguard.integrations.autogen import guard_autogen_tool

autogen_tool = FunctionTool(my_function, name="my_tool", description="...")
guarded = guard_autogen_tool(autogen_tool)

# 🐝 OpenAI Swarm
from swarm import Agent
from toolguard.integrations.swarm import guard_swarm_agent

agent = Agent(name="My Agent", functions=[func_a, func_b])
guarded_tools = guard_swarm_agent(agent)  # Returns list of GuardedTools

# ⚡ FastAPI
from toolguard.integrations.fastapi import as_fastapi_tool

guarded = as_fastapi_tool(my_endpoint_function)

# 🌐 OpenAI Function Calling
from toolguard.integrations.openai_func import from_openai_function

openai_schema = {"type": "function", "function": {"name": "my_func", "parameters": {}}}
guarded = from_openai_function(openai_schema, my_python_backend_function)
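
For a tool that actually takes parameters, the schema follows the standard OpenAI function-calling format. A hedged, fuller example (the function name, description, and backend are illustrative):

openai_schema = {
    "type": "function",
    "function": {
        "name": "fetch_price",
        "description": "Fetch the latest price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}
guarded = from_openai_function(openai_schema, fetch_price)  # fetch_price: your Python implementation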

All 7 integrations are tested with real pip-installed libraries — not mocks, not duck-typed stand-ins.

🧹 100% Authentic Testing

ToolGuard's integration suite runs exclusively against the actual PyPI releases of LangChain, AutoGen, Swarm, FastAPI, and CrewAI. There is no faked compatibility: every adapter is exercised against the live libraries. We deleted all mock-based tests to keep the reliability bar honest.


๐Ÿ—๏ธ CI/CD Integration

GitHub Action

Add to any repo — auto-comments on PRs with reliability scores:

# .github/workflows/toolguard.yml
name: ToolGuard Reliability Check
on: [pull_request]

jobs:
  reliability:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: Harshit-J004/toolguard@main
        with:
          script_path: src/agent.py
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reliability_threshold: "0.95"

PR Comment Example:

🚨 ToolGuard Reliability Check (BLOCKED)

Chain: my_agent
Reliability Score: 64.3% (Threshold: 95%)

Warning: The PR introduces agent fragility. 3 tools will crash if the LLM hallucinates null.

JUnit XML (Jenkins / GitLab CI)

toolguard test --chain config.yaml --junit-xml results.xml

Generates standard <testsuites> XML that Jenkins, GitLab CI, and CircleCI parse natively.

Reliability Badges

toolguard badge

Generates shields.io badge markdown for your README:

ToolGuard Reliability


📡 Observability & Production Alerts

1. Zero-Latency Hallucination Alerts

Catch "LLM drift" in production. When an LLM hallucinates a bad JSON payload, ToolGuard instantly fires a background alert to your team without slowing down the agent:

import toolguard

toolguard.configure_alerts(
    slack_webhook_url="https://hooks.slack.com/...",
    discord_webhook_url="https://discord.com/api/webhooks/...",
    datadog_api_key="my-api-key",
    generic_webhook_url="https://my-dashboard.com/api/ingest"
)

Built with background thread pools so network requests never block the LLM runtime.
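
That non-blocking behavior is the standard fire-and-forget dispatcher pattern. A hedged sketch of the idea (this illustrates the pattern, not ToolGuard's actual alerts module):

from concurrent.futures import ThreadPoolExecutor
import httpx  # httpx already appears in this README's async example; requests works just as well

_executor = ThreadPoolExecutor(max_workers=2)

def fire_alert(webhook_url: str, payload: dict) -> None:
    """Submit the webhook call to a background thread and return immediately."""
    _executor.submit(httpx.post, webhook_url, json=payload, timeout=5.0)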

2. OpenTelemetry Tracing

Tracing works out of the box with Jaeger, Zipkin, Datadog, and more.

from toolguard.core.tracer import init_tracing, trace_tool

init_tracing(service_name="my-agent")

@trace_tool
def my_tool(data: dict) -> dict: ...

Architecture

toolguard/
├── core/
│   ├── validator.py      # @create_tool decorator + GuardedTool (sync + async)
│   ├── chain.py          # Chain testing engine (8 test types, async-aware)
│   ├── schema.py         # Auto Pydantic model generation
│   ├── scoring.py        # Reliability scoring + deploy gates
│   ├── report.py         # Failure analysis + suggestions
│   ├── errors.py         # Exception hierarchy + correlation IDs
│   ├── retry.py          # RetryPolicy + CircuitBreaker
│   ├── tracer.py         # OpenTelemetry integration
│   └── compatibility.py  # Schema conflict detection
├── alerts/
│   ├── manager.py        # Abstract ThreadPool dispatcher
│   ├── slack.py          # Block Kit formatting
│   ├── discord.py        # Embed formatting
│   └── datadog.py        # HTTP Metrics + Events sink
├── cli/
│   └── commands/         # run, test, check, observe, badge, init
├── reporters/
│   ├── console.py        # Rich terminal output
│   ├── html.py           # Standalone HTML reports
│   ├── junit.py          # JUnit XML for Jenkins/GitLab CI
│   └── github.py         # GitHub PR auto-commenter
├── integrations/
│   ├── langchain.py      # LangChain adapter
│   ├── crewai.py         # CrewAI adapter
│   ├── llamaindex.py     # LlamaIndex adapter
│   ├── autogen.py        # Microsoft AutoGen adapter
│   ├── swarm.py          # OpenAI Swarm adapter
│   ├── fastapi.py        # FastAPI middleware
│   └── openai_func.py    # OpenAI function calling export
├── tests/                # 50 tests (sync + async + integration)
├── integration_tests/    # Real-library integration tests
├── fuzz_targets/         # Integration fuzz scripts (LangChain, CrewAI, AutoGen, etc.)
└── examples/
    ├── test_alerts.py              # Phase 4 webhook crash simulation
    ├── weather_chain/              # Working 3-tool example
    └── demo_failing_chain/         # Intentionally buggy (aha moment)

Why ToolGuard?

                    Without ToolGuard           With ToolGuard
Failure detection   Stack trace at 3 AM         Caught before deploy
Root cause          "TypeError in line 47"      "Tool A returned null for 'price'"
Fix guidance        None                        "Add default value OR validate response"
Confidence          "It works on my machine"    "92% reliability, LOW risk"
CI/CD               Manual testing              toolguard run in your pipeline
Cost                $0.10/test (LLM calls)      $0 (deterministic fuzzing)
Speed               30s (API roundtrips)        <1s (local execution)

Tech Stack

Component           Technology                Why
Core Language       Python 3.11 - 3.13        Agent ecosystem standard
Schema Validation   Pydantic v2               3.5× faster than JSON Schema
Async               Native asyncio            Enterprise-grade concurrency
Testing             pytest (50 tests)         CI/CD native
Observability       OpenTelemetry             Vendor-neutral
CLI                 Click + Rich              Beautiful terminal UX
CI/CD               GitHub Actions + JUnit    First-class pipeline support
Distribution        PyPI                      pip install py-toolguard

License

MIT — use it, fork it, ship it.


Built to make AI agents actually work in production.

GitHub · PyPI
