# agentguard-tech

Runtime security for AI agents — policy engine, audit trail, and kill switch.
## Overview

AgentGuard gives AI agents production-grade guardrails:

- 🛡️ Policy evaluation — check every tool call before execution
- 📋 Audit trail — tamper-evident hash chain of every action
- 🔴 Kill switch — instantly halt all agents
- 🔍 Audit verification — cryptographically verify the audit chain
- ⚡ Zero dependencies — pure Python stdlib, works anywhere
## Installation

```bash
pip install agentguard-tech
```

Requires Python 3.8+. No external dependencies.
## Quick Start

```python
from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_api_key")

# Evaluate an agent action before executing it
decision = guard.evaluate(
    tool="send_email",
    params={"to": "user@example.com", "subject": "Hello"},
)

if decision["result"] == "allow":
    print("Action allowed, risk score:", decision["riskScore"])
    # proceed with tool execution
elif decision["result"] == "block":
    print("Action blocked:", decision["reason"])
elif decision["result"] == "require_approval":
    print("Waiting for human approval...")
elif decision["result"] == "monitor":
    print("Action monitored (allowed but logged):", decision["reason"])
```
## API Reference

### AgentGuard(api_key, base_url=...)

Create a client instance.

```python
guard = AgentGuard(
    api_key="ag_your_api_key",
    base_url="https://api.agentguard.tech",  # optional, default shown
)
```
### evaluate(tool, params=None) → dict

Evaluate a tool call against your policy. Call this before every tool execution.

```python
decision = guard.evaluate("read_file", {"path": "/data/report.csv"})
# Returns:
# {
#     "result": "allow",             # allow | block | monitor | require_approval
#     "riskScore": 5,                # 0-1000
#     "reason": "Matched allow-read rule",
#     "durationMs": 1.2,
#     "matchedRuleId": "allow-read"  # optional
# }
```
Integration pattern:

```python
def safe_tool_call(tool_name, tool_func, **params):
    decision = guard.evaluate(tool_name, params)
    if decision["result"] in ("allow", "monitor"):
        return tool_func(**params)
    elif decision["result"] == "block":
        raise PermissionError(f"Blocked by policy: {decision['reason']}")
    elif decision["result"] == "require_approval":
        raise PermissionError("Awaiting human approval")
```
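The same pattern can be packaged as a decorator so each tool declares its own policy check. A minimal sketch — the `guarded` helper and the `StubGuard` below are illustrative, not part of the SDK; in practice you would pass a real `AgentGuard` client:

```python
import functools

def guarded(guard, tool_name):
    """Decorator: evaluate a tool call against policy before running it."""
    def wrap(func):
        @functools.wraps(func)
        def inner(**params):
            decision = guard.evaluate(tool_name, params)
            if decision["result"] in ("allow", "monitor"):
                return func(**params)
            if decision["result"] == "require_approval":
                raise PermissionError(f"Awaiting human approval for {tool_name}")
            raise PermissionError(f"Blocked by policy: {decision['reason']}")
        return inner
    return wrap

# Stub guard for illustration only — use a real AgentGuard client in practice.
class StubGuard:
    def evaluate(self, tool, params=None):
        allowed = tool == "read_file"
        return {"result": "allow" if allowed else "block",
                "riskScore": 5, "reason": "stub policy"}

guard = StubGuard()

@guarded(guard, "read_file")
def read_file(path):
    return f"contents of {path}"
```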
### get_usage() → dict

Get usage statistics for your tenant.

```python
usage = guard.get_usage()
print(usage)
# {
#     "requestsToday": 142,
#     "requestsThisMonth": 3891,
#     "plan": "pro",
#     "limits": {"requestsPerDay": 10000}
# }
```
### get_audit(limit=50, offset=0) → dict

Get audit trail events with pagination.

```python
audit = guard.get_audit(limit=100, offset=0)
for event in audit["events"]:
    print(f"{event['timestamp']} | {event['tool']} | {event['decision']}")
```
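To walk the full trail, keep paging with `limit`/`offset` until a short page comes back. A generic pager sketch — the `fetch` stub below stands in for `guard.get_audit` and is illustrative only:

```python
def iter_audit_events(fetch, page_size=100):
    """Yield audit events one by one, paging via limit/offset."""
    offset = 0
    while True:
        page = fetch(limit=page_size, offset=offset)
        events = page["events"]
        yield from events
        if len(events) < page_size:  # short page means no more events
            return
        offset += page_size

# Illustrative stub standing in for guard.get_audit:
def fetch(limit, offset):
    all_events = [{"timestamp": f"t{i}", "tool": "read_file", "decision": "allow"}
                  for i in range(250)]
    return {"events": all_events[offset:offset + limit]}
```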
### kill_switch(active) → dict

Activate or deactivate the global kill switch.

```python
# Emergency halt — stop all agents immediately
guard.kill_switch(True)

# Resume operations
guard.kill_switch(False)
```
### verify_audit() → dict

Verify the cryptographic integrity of the audit hash chain.

```python
result = guard.verify_audit()
if result["valid"]:
    print("Audit chain is intact")
else:
    print(f"Chain broken at event index: {result['invalidAt']}")
```
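Conceptually, a tamper-evident hash chain links each event to its predecessor, so editing any past event invalidates every later hash. AgentGuard's exact canonicalization isn't documented here; this local sketch assumes SHA-256 over the previous hash concatenated with a JSON-serialized event:

```python
import hashlib
import json

def chain_hash(prev_hash, event):
    """Hash an event together with the previous link in the chain."""
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries):
    """Recompute every link; report the first broken index, if any."""
    prev = "0" * 64  # genesis hash
    for i, entry in enumerate(entries):
        if entry["hash"] != chain_hash(prev, entry["event"]):
            return {"valid": False, "invalidAt": i}
        prev = entry["hash"]
    return {"valid": True}
```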
### create_webhook(url, events, secret=None) → dict

Register a webhook endpoint to receive AgentGuard events.

```python
webhook = guard.create_webhook(
    url="https://example.com/hooks/agentguard",
    events=["action.blocked", "killswitch.activated"],
    secret="my-signing-secret",  # optional
)
print("Webhook ID:", webhook["id"])
```
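When a `secret` is set, the receiving endpoint should verify each delivery before trusting it. The scheme below (HMAC-SHA256 over the raw request body, hex-encoded) and the header name in the comment are assumptions for illustration, not documented AgentGuard behavior:

```python
import hashlib
import hmac

def verify_signature(secret, body, signature):
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# In your webhook handler (hypothetical header name):
# sig = request.headers["X-AgentGuard-Signature"]
# if not verify_signature("my-signing-secret", request.body, sig):
#     return 401  # reject unsigned or tampered deliveries
```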
### list_webhooks() → dict

List all webhook subscriptions for your tenant.

```python
result = guard.list_webhooks()
for wh in result["webhooks"]:
    print(wh["id"], wh["url"])
```
### delete_webhook(webhook_id) → dict

Delete a webhook subscription.

```python
guard.delete_webhook("wh_abc123")
```
### create_agent(name, policy_scope=None) → dict

Register a new agent with AgentGuard.

```python
agent = guard.create_agent(
    name="email-agent",
    policy_scope={"allowedTools": ["send_email", "read_inbox"]},  # optional
)
print("Agent ID:", agent["id"])
```
### list_agents() → dict

List all registered agents for your tenant.

```python
result = guard.list_agents()
for a in result["agents"]:
    print(a["id"], a["name"])
```
### delete_agent(agent_id) → dict

Delete a registered agent.

```python
guard.delete_agent("ag_abc123")
```
### list_templates() → dict

List all available policy templates.

```python
result = guard.list_templates()
for t in result["templates"]:
    print(t["name"], t["description"])
```
### get_template(name) → dict

Get a specific policy template by name.

```python
template = guard.get_template("strict")
print(template["rules"])
```
### apply_template(name) → dict

Apply a policy template to your tenant.

```python
guard.apply_template("strict")
```
### set_rate_limit(window_seconds, max_requests, agent_id=None) → dict

Create a rate limit rule.

```python
# Tenant-wide: max 100 requests per 60 seconds
limit = guard.set_rate_limit(window_seconds=60, max_requests=100)

# Scoped to a specific agent
guard.set_rate_limit(window_seconds=60, max_requests=20, agent_id="ag_abc123")
```
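Enforcement happens server-side, but the semantics — at most `max_requests` within any rolling `window_seconds` — can be sketched locally. This sliding-window counter is illustrative only; the injectable `clock` exists so the behavior is deterministic to test:

```python
from collections import deque
import time

class SlidingWindowLimit:
    """Local sketch of 'max_requests per window_seconds' semantics."""

    def __init__(self, window_seconds, max_requests, clock=time.monotonic):
        self.window = window_seconds
        self.max = max_requests
        self.clock = clock  # callable returning the current time in seconds
        self.hits = deque()

    def allow(self):
        now = self.clock()
        # Drop hits that have aged out of the rolling window
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.max:
            self.hits.append(now)
            return True
        return False
```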
### list_rate_limits() → dict

List all rate limit rules for your tenant.

```python
result = guard.list_rate_limits()
for rl in result["rateLimits"]:
    print(rl["id"], rl["windowSeconds"], rl["maxRequests"])
```
### delete_rate_limit(limit_id) → dict

Delete a rate limit rule.

```python
guard.delete_rate_limit("rl_abc123")
```
### get_cost_summary(agent_id=None, from_date=None, to_date=None, group_by=None) → dict

Get a cost summary for your tenant with optional filters.

```python
# Overall summary
summary = guard.get_cost_summary()

# Filtered by agent and date range
summary = guard.get_cost_summary(
    agent_id="ag_abc123",
    from_date="2024-01-01",
    to_date="2024-01-31",
    group_by="day",
)
print(summary)
```
### get_agent_costs() → dict

Get per-agent cost breakdown for your tenant.

```python
costs = guard.get_agent_costs()
for entry in costs["agents"]:
    print(entry["agentId"], entry["totalCost"])
```
### get_dashboard_stats() → dict

Get high-level dashboard statistics.

```python
stats = guard.get_dashboard_stats()
print(stats["requestsToday"], stats["blocksToday"])
```
### get_dashboard_feed(since=None) → dict

Get the live activity feed for the dashboard.

```python
# All recent events
feed = guard.get_dashboard_feed()

# Only events after a specific timestamp
feed = guard.get_dashboard_feed(since="2024-06-01T00:00:00Z")
for event in feed["events"]:
    print(event["timestamp"], event["type"])
```
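A poller can advance `since` to the newest timestamp it has seen, so each call returns only unseen events. A sketch of that cursor pattern — the `get_feed` stub below stands in for `guard.get_dashboard_feed` and is illustrative only:

```python
def poll_new_events(get_feed, since=None):
    """Fetch the feed once; return (new_events, next_since_cursor)."""
    feed = get_feed(since=since) if since else get_feed()
    events = feed["events"]
    if events:
        # ISO-8601 UTC timestamps sort lexicographically
        since = max(e["timestamp"] for e in events)
    return events, since

# Illustrative stub standing in for guard.get_dashboard_feed:
FEED = [{"timestamp": "2024-06-01T00:00:01Z", "type": "action.allowed"},
        {"timestamp": "2024-06-01T00:00:05Z", "type": "action.blocked"}]

def get_feed(since=None):
    return {"events": [e for e in FEED if since is None or e["timestamp"] > since]}
```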
### get_agent_activity() → dict

Get per-agent activity summary.

```python
activity = guard.get_agent_activity()
for a in activity["agents"]:
    print(a["agentId"], a["requests"], a["blocks"])
```
## Complete Example — LangChain-style Agent

```python
from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_api_key")

def run_tool(name: str, func, **params):
    """Execute a tool with AgentGuard policy enforcement."""
    decision = guard.evaluate(name, params)
    result = decision["result"]
    if result == "block":
        raise PermissionError(f"Policy blocked {name}: {decision['reason']}")
    if result == "require_approval":
        raise PermissionError(f"Human approval required for {name}")
    # "allow" or "monitor" — proceed
    return func(**params)

# Your tools
def send_email(to: str, subject: str, body: str) -> str:
    # ... send the email
    return f"Email sent to {to}"

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

# Use with policy enforcement
content = run_tool("read_file", read_file, path="/data/report.csv")
run_tool("send_email", send_email, to="boss@company.com", subject="Report", body=content)
```
## Error Handling

```python
from agentguard import AgentGuard

guard = AgentGuard(api_key="ag_your_key")

try:
    decision = guard.evaluate("dangerous_tool", {"target": "production_db"})
except RuntimeError as e:
    print(f"API error: {e}")
    # RuntimeError: AgentGuard API error: 401 Unauthorized
```
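Transient failures (timeouts, 5xx responses) are often worth retrying with backoff, while a `401` is not. This retry helper is an illustrative pattern, not part of the SDK; it assumes, as in the example above, that API errors surface as `RuntimeError`:

```python
import time

def evaluate_with_retry(evaluate, tool, params, attempts=3, base_delay=0.5):
    """Retry a policy check with exponential backoff on API errors."""
    for attempt in range(attempts):
        try:
            return evaluate(tool, params)
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of retries — surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Usage (with a real client): evaluate_with_retry(guard.evaluate, "read_file",
# {"path": "/data/report.csv"})
```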
## Links

- 🌐 agentguard.tech
- 🎮 Live Demo
- 📦 GitHub
- 📘 npm SDK
## License

MIT
## File details

Details for the file `agentguard_tech-0.7.1.tar.gz`.

### File metadata

- Download URL: agentguard_tech-0.7.1.tar.gz
- Upload date:
- Size: 13.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 63aaaf15ee50afddba79493091338544d6d895b86ec73794be6081774c05bcb0 |
| MD5 | 5a335344c5bdaa827e1492ba1a802050 |
| BLAKE2b-256 | 1887b3527ba1d3a0b673b3e51d69af968de628aa16f3e48120455dd8987472e3 |
### Provenance

The following attestation bundles were made for `agentguard_tech-0.7.1.tar.gz`:

Publisher: publish-pypi.yml on AgentGuard-tech/agentguard

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agentguard_tech-0.7.1.tar.gz
- Subject digest: 63aaaf15ee50afddba79493091338544d6d895b86ec73794be6081774c05bcb0
- Sigstore transparency entry: 1021577060
- Sigstore integration time:
- Permalink: AgentGuard-tech/agentguard@22a394e556696d81587f527d452462dee365e250
- Branch / Tag: refs/tags/python-v0.7.1
- Owner: https://github.com/AgentGuard-tech
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@22a394e556696d81587f527d452462dee365e250
- Trigger Event: push
## File details

Details for the file `agentguard_tech-0.7.1-py3-none-any.whl`.

### File metadata

- Download URL: agentguard_tech-0.7.1-py3-none-any.whl
- Upload date:
- Size: 11.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | b0764d94fe872eecae3b8bb3e154d7f1be2fe3ac22b76b43e228a5ae12e5c5e3 |
| MD5 | 3a8a206d322ca9bf5dbf0549deabffc3 |
| BLAKE2b-256 | 1ee29a2f6c1c71dad97b170f2e61040bda973cc21a5b5943a4ccc19588cdf4fc |
### Provenance

The following attestation bundles were made for `agentguard_tech-0.7.1-py3-none-any.whl`:

Publisher: publish-pypi.yml on AgentGuard-tech/agentguard

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agentguard_tech-0.7.1-py3-none-any.whl
- Subject digest: b0764d94fe872eecae3b8bb3e154d7f1be2fe3ac22b76b43e228a5ae12e5c5e3
- Sigstore transparency entry: 1021577144
- Sigstore integration time:
- Permalink: AgentGuard-tech/agentguard@22a394e556696d81587f527d452462dee365e250
- Branch / Tag: refs/tags/python-v0.7.1
- Owner: https://github.com/AgentGuard-tech
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@22a394e556696d81587f527d452462dee365e250
- Trigger Event: push