
agentguard

Runtime security for AI agents — policy engine, audit trail, and kill switch.


Overview

AgentGuard gives AI agents production-grade guardrails:

  • 🛡️ Policy evaluation — check every tool call before execution
  • 📋 Audit trail — tamper-evident hash chain of every action
  • 🔴 Kill switch — instantly halt all agents
  • 🔍 Audit verification — cryptographically verify the audit chain
  • 📦 Zero dependencies — pure Python stdlib, works anywhere

Installation

pip install agentguard

Requires Python 3.8+. No external dependencies.


Quick Start

from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_api_key")

# Evaluate an agent action before executing it
decision = guard.evaluate(
    tool="send_email",
    params={"to": "user@example.com", "subject": "Hello"}
)

if decision["result"] == "allow":
    print("Action allowed, risk score:", decision["riskScore"])
    # proceed with tool execution
elif decision["result"] == "block":
    print("Action blocked:", decision["reason"])
elif decision["result"] == "require_approval":
    print("Waiting for human approval...")
elif decision["result"] == "monitor":
    print("Action monitored (allowed but logged):", decision["reason"])

API Reference

AgentGuard(api_key, base_url=...)

Create a client instance.

guard = AgentGuard(
    api_key="ag_your_api_key",
    base_url="https://api.agentguard.tech"  # optional, default shown
)

evaluate(tool, params=None) → dict

Evaluate a tool call against your policy. Call this before every tool execution.

decision = guard.evaluate("read_file", {"path": "/data/report.csv"})
# Returns:
# {
#   "result": "allow",          # allow | block | monitor | require_approval
#   "riskScore": 5,             # 0-1000
#   "reason": "Matched allow-read rule",
#   "durationMs": 1.2,
#   "matchedRuleId": "allow-read"  # optional
# }

Integration pattern:

def safe_tool_call(tool_name, tool_func, **params):
    decision = guard.evaluate(tool_name, params)
    if decision["result"] in ("allow", "monitor"):
        return tool_func(**params)
    elif decision["result"] == "block":
        raise PermissionError(f"Blocked by policy: {decision['reason']}")
    elif decision["result"] == "require_approval":
        raise PermissionError("Awaiting human approval")

get_usage() → dict

Get usage statistics for your tenant.

usage = guard.get_usage()
print(usage)
# {
#   "requestsToday": 142,
#   "requestsThisMonth": 3891,
#   "plan": "pro",
#   "limits": { "requestsPerDay": 10000 }
# }
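The usage payload is enough to throttle agents before the daily limit is hit. A minimal sketch — the helper names and the 90% threshold are assumptions, not part of the library:

```python
def remaining_today(usage: dict) -> int:
    """Requests left today, per the usage payload shown above."""
    limit = usage.get("limits", {}).get("requestsPerDay", 0)
    return max(0, limit - usage.get("requestsToday", 0))

def near_quota(usage: dict, threshold: float = 0.9) -> bool:
    """True when today's usage is at or beyond `threshold` of the daily limit."""
    limit = usage.get("limits", {}).get("requestsPerDay", 0)
    if limit <= 0:
        return False
    return usage.get("requestsToday", 0) >= threshold * limit
```

For example, `if near_quota(guard.get_usage()): pause_agents()` lets you back off gracefully instead of hitting hard limits.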

get_audit(limit=50, offset=0) → dict

Get audit trail events with pagination.

audit = guard.get_audit(limit=100, offset=0)
for event in audit["events"]:
    print(f"{event['timestamp']} | {event['tool']} | {event['decision']}")
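To walk the whole trail rather than one page, loop until a short page comes back. A sketch written against any fetch callable with the same `(limit, offset)` keyword signature — pass `guard.get_audit` in practice; the generator name is an assumption:

```python
def iter_audit_events(fetch, page_size: int = 100):
    """Yield every audit event by paging through fetch(limit=..., offset=...).

    Stops when a page comes back shorter than page_size, i.e. the last page.
    """
    offset = 0
    while True:
        page = fetch(limit=page_size, offset=offset)
        events = page.get("events", [])
        yield from events
        if len(events) < page_size:
            break
        offset += page_size
```

Usage: `for event in iter_audit_events(guard.get_audit): ...`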

kill_switch(active) → dict

Activate or deactivate the global kill switch.

# Emergency halt — stop all agents immediately
guard.kill_switch(True)

# Resume operations
guard.kill_switch(False)
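A common pattern is to trip the kill switch automatically when the agent loop raises something unexpected. A hedged sketch as a context manager — the `kill_switch(True)` call follows the API above; the context-manager name and fail-stop policy are illustrative:

```python
from contextlib import contextmanager

@contextmanager
def halt_on_failure(guard):
    """Activate the global kill switch if the wrapped block raises."""
    try:
        yield
    except Exception:
        guard.kill_switch(True)  # stop all agents before re-raising
        raise
```

Usage: `with halt_on_failure(guard): run_agent_loop()` — a crash in the loop halts every agent, not just the one that failed.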

verify_audit() → dict

Verify the cryptographic integrity of the audit hash chain.

result = guard.verify_audit()
if result["valid"]:
    print("Audit chain is intact")
else:
    print(f"Chain broken at event index: {result['invalidAt']}")
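The "tamper-evident hash chain" idea is that each event commits to the hash of the one before it, so altering any event breaks every later link. A self-contained sketch of that mechanism using SHA-256 — the field names and genesis value here are illustrative, not the service's actual wire format:

```python
import hashlib
import json

def event_hash(prev_hash: str, payload: dict) -> str:
    """Hash an event payload together with the previous event's hash."""
    blob = prev_hash + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def verify_chain(events):
    """Return (True, None) if the chain is intact, else (False, first bad index)."""
    prev = "0" * 64  # genesis hash
    for i, event in enumerate(events):
        if event["hash"] != event_hash(prev, event["payload"]):
            return False, i
        prev = event["hash"]
    return True, None
```

Because each hash folds in its predecessor, editing one payload invalidates that event and every one after it — which is what makes the trail tamper-evident rather than merely append-only.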

Complete Example — LangChain-style Agent

from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_api_key")

def run_tool(name: str, func, **params):
    """Execute a tool with AgentGuard policy enforcement."""
    decision = guard.evaluate(name, params)
    
    result = decision["result"]
    if result == "block":
        raise PermissionError(f"Policy blocked {name}: {decision['reason']}")
    if result == "require_approval":
        raise PermissionError(f"Human approval required for {name}")
    
    # "allow" or "monitor" — proceed
    return func(**params)


# Your tools
def send_email(to: str, subject: str, body: str) -> str:
    # ... send the email
    return f"Email sent to {to}"

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()


# Use with policy enforcement
content = run_tool("read_file", read_file, path="/data/report.csv")
run_tool("send_email", send_email, to="boss@company.com", subject="Report", body=content)

Error Handling

from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_key")

try:
    decision = guard.evaluate("dangerous_tool", {"target": "production_db"})
except RuntimeError as e:
    print(f"API error: {e}")
    # RuntimeError: AgentGuard API error: 401 Unauthorized
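For transient failures, a small retry wrapper helps; if the policy engine stays unreachable, many deployments prefer to fail closed rather than let the tool call through unchecked. A sketch assuming only that `evaluate` raises `RuntimeError` as shown above — the wrapper name, backoff schedule, and synthetic block decision are assumptions:

```python
import time

def evaluate_with_retry(evaluate, tool, params, attempts=3, backoff=0.5):
    """Retry policy evaluation; fail closed (block) if every attempt errors."""
    for attempt in range(attempts):
        try:
            return evaluate(tool, params)
        except RuntimeError:
            if attempt == attempts - 1:
                # fail closed: treat an unreachable policy engine as a block
                return {"result": "block", "reason": "policy engine unreachable"}
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```

Usage: `decision = evaluate_with_retry(guard.evaluate, "send_email", {...})` — callers then handle the decision exactly as in Quick Start.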

License

MIT
