
AgentGuard SDK

The Security & Cost Control Layer for AI Agents.

AgentGuard adds per-agent credential isolation, OAuth scope enforcement, an agent identity registry, permission drift detection, MCP policy enforcement, compliance reporting, and a full audit trail to any AI agent workflow — with zero cloud dependencies.


Features

Tier      Feature
--------  ----------------------------------------------------------------
Guard     Budget enforcement ($0.50/run) with graceful stop or hard raise
Shield    Per-agent encrypted credential vault (Fernet/AES-128)
Shield    Agent identity registry with metadata
Shield    OAuth scope enforcement with ScopeViolation exceptions
Shield    Permission drift detection (baseline snapshots + diff)
Shield    Full JSON-lines audit trail per agent
Sentinel  MCP tool-call interception with policy evaluation
Sentinel  Declarative policy engine (allow/deny lists, rate limits, predicates)
Sentinel  YAML policy file support
Sentinel  Audit-ready compliance reports (EU AI Act, NIST AI RMF, ISO 42001)

Install

pip install agentguard

Requires Python 3.9+, cryptography, and pyyaml (installed automatically).


Quick Start

1. The guard() wrapper — Guard + Shield + Sentinel

import openai
from agentguard import guard

client = guard(
    openai.OpenAI(),
    budget="$0.50/run",              # Guard tier — hard budget cap
    auth="isolated",                  # Shield tier — use per-agent vault
    agent_id="support-bot",           # Shield / Sentinel tier — identity
    scopes={"openai": ["chat.completions"]},  # Shield tier — allowed scopes
    mcp_policy="policies.yaml",       # Sentinel tier — MCP policy file
    on_violation="block",             # Sentinel tier — block | warn | log
    fallback="gpt-4o-mini",           # Cheaper fallback near budget limit
    on_limit="graceful_stop",         # Return sentinel instead of raising
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

if not response:
    print(f"Budget exceeded — spent ${client.spent:.4f}")
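Conceptually, the Guard tier's budget cap with `on_limit="graceful_stop"` amounts to accumulating per-call cost against a per-run limit and returning a falsy sentinel instead of raising. A minimal sketch of that idea (the class and method names here are hypothetical, not the SDK's internals):

```python
class BudgetTracker:
    """Illustrative per-run budget accounting (not AgentGuard's actual implementation)."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Record a call's cost; return False once the limit would be exceeded."""
        if self.spent + cost_usd > self.limit:
            return False  # graceful stop: the wrapper returns a falsy sentinel
        self.spent += cost_usd
        return True

tracker = BudgetTracker(0.50)
assert tracker.charge(0.30) is True
assert tracker.charge(0.30) is False   # would exceed the $0.50 cap
print(f"${tracker.spent:.2f} spent")   # $0.30 spent
```

With `on_limit` left at its default, the same condition would presumably raise instead of returning the sentinel.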

2. Per-Agent Credential Vault

from agentguard.vault import AgentVault

vault = AgentVault("support-bot")

# Store an API key with allowed scopes
vault.store("openai", "sk-...", scopes=["chat.completions"])

# Retrieve it
cred = vault.get("openai")
print(cred.scopes)      # ["chat.completions"]
# Raw key is available but never logged/printed automatically
api_key = cred.get_key()

# Rotate with automatic audit log entry
vault.rotate("openai", new_key="sk-new-...")

# List services in this vault
print(vault.list_services())  # ["openai"]

Credentials are stored encrypted at ~/.agentguard/vaults/{agent_id}/credentials.enc. The master key lives at ~/.agentguard/master.key (mode 0600, never logged).
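The at-rest format isn't documented beyond "Fernet-encrypted", but the underlying primitive looks roughly like this, using the `cryptography` package directly (`AgentVault` does all of this for you; the JSON shape below is an assumption for illustration):

```python
import json
from cryptography.fernet import Fernet

# In AgentGuard the master key lives at ~/.agentguard/master.key;
# here we generate a throwaway key just for the demo.
master_key = Fernet.generate_key()
f = Fernet(master_key)

creds = {"openai": {"key": "sk-...", "scopes": ["chat.completions"]}}
ciphertext = f.encrypt(json.dumps(creds).encode())  # what credentials.enc might hold

assert json.loads(f.decrypt(ciphertext)) == creds
```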


3. Agent Identity Registry

from agentguard.registry import Registry

registry = Registry()

registry.register(
    agent_id="support-bot",
    owner="team-cs",
    capabilities=["chat", "search"],
    created_by="kevin@company.com",
)

profile = registry.get("support-bot")
print(profile.owner)           # "team-cs"
print(profile.capabilities)   # ["chat", "search"]

# All registered agents
for p in registry.list():
    print(p.agent_id, p.owner)

# Full audit log for an agent
events = registry.audit_log("support-bot")

4. OAuth Scope Enforcement

from agentguard.scopes import ScopeManager, ScopeViolation

scopes = ScopeManager()

# Define allowed scopes
scopes.set("support-bot", {
    "openai": ["chat.completions"],
    "github": ["repos.read"],
})

# Runtime check — OK
scopes.check("support-bot", "openai", "chat.completions")

# Runtime check — raises ScopeViolation
try:
    scopes.check("support-bot", "openai", "fine-tuning")
except ScopeViolation as e:
    print(f"Blocked: {e}")

# Non-raising check
if scopes.is_allowed("support-bot", "github", "repos.write"):
    ...  # won't reach here

5. Permission Drift Detection

from agentguard.drift import DriftDetector

drift = DriftDetector()

# Save baseline
drift.snapshot("support-bot")

# ... later, scopes or capabilities change ...

changes = drift.check("support-bot")
for c in changes:
    print(c.change_type, c.category, c.field)
    # e.g.: added  scope  openai.fine-tuning
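Under the hood, drift detection is essentially a diff between the baseline snapshot and the current state. A minimal sketch of that diff over scope maps (the data shapes here are assumptions, not the SDK's snapshot format):

```python
def diff_scopes(baseline: dict, current: dict) -> list:
    """Return (change_type, service, scope) tuples between two scope maps."""
    changes = []
    for svc in baseline.keys() | current.keys():
        old = set(baseline.get(svc, []))
        new = set(current.get(svc, []))
        changes += [("added", svc, s) for s in sorted(new - old)]
        changes += [("removed", svc, s) for s in sorted(old - new)]
    return changes

baseline = {"openai": ["chat.completions"]}
current = {"openai": ["chat.completions", "fine-tuning"]}
print(diff_scopes(baseline, current))  # [('added', 'openai', 'fine-tuning')]
```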

6. Sentinel — MCP Policy Enforcement

Define policies in Python

from agentguard.policies import Policy, PolicyEngine, RateLimit
from agentguard.mcp_interceptor import MCPInterceptor

engine = PolicyEngine()

# Block PII table queries for all agents
engine.add_policy(Policy(
    name="no-pii-queries",
    description="Agents cannot run queries that select from PII tables",
    agent_ids=["*"],
    tools=["database_query"],
    deny_if=lambda params: any(
        t in params.get("query", "").lower()
        for t in ["users", "credentials", "payments"]
    ),
))

# Read-only filesystem for support-bot
engine.add_policy(Policy(
    name="read-only-filesystem",
    description="Support bot can only read files, not write",
    agent_ids=["support-bot"],
    tools=["file_write", "file_delete"],
    action="deny",
))

# Rate limit: max 100 API calls per minute per agent
engine.add_policy(Policy(
    name="rate-limit-api",
    description="Max 100 API calls per minute per agent",
    agent_ids=["*"],
    tools=["*"],
    rate_limit=RateLimit(max_calls=100, window_seconds=60),
))

interceptor = MCPInterceptor(policies=engine, on_violation="block")

result = interceptor.evaluate(
    agent_id="support-bot",
    tool="database_query",
    params={"query": "SELECT * FROM users"},
    context={"user": "kevin@company.com"},
)
print(result.allowed)    # False
print(result.reason)     # "Policy 'no-pii-queries' denied this action..."
print(result.audit_id)   # "evt_abc123..."

Or load from YAML

# policies.yaml
policies:
  - name: no-pii-queries
    description: "Agents cannot query PII tables"
    agents: ["*"]
    tools: ["database_query"]
    deny_params:
      query:
        contains_any: ["users", "credentials", "payments"]

  - name: read-only-fs
    description: "Support bot is read-only"
    agents: ["support-bot"]
    tools: ["file_write", "file_delete"]
    action: deny

  - name: rate-limit
    description: "100 calls/min per agent"
    agents: ["*"]
    tools: ["*"]
    rate_limit:
      max_calls: 100
      window_seconds: 60
Then load it in Python:

from agentguard.policies import PolicyEngine
from agentguard.mcp_interceptor import MCPInterceptor

engine = PolicyEngine.from_yaml("policies.yaml")
interceptor = MCPInterceptor(policies=engine)

Supported deny_params constraints: contains_any, contains_all, matches (regex), equals, not_equals.
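These constraints form a small predicate language over parameter values. An illustrative evaluator for them (not the SDK's actual code; `check_constraint` is a hypothetical name) could look like:

```python
import re

def check_constraint(value: str, op: str, arg) -> bool:
    """Return True when `value` triggers the deny constraint."""
    value = value.lower()
    if op == "contains_any":
        return any(t in value for t in arg)
    if op == "contains_all":
        return all(t in value for t in arg)
    if op == "matches":
        return re.search(arg, value) is not None
    if op == "equals":
        return value == arg
    if op == "not_equals":
        return value != arg
    raise ValueError(f"unknown constraint: {op}")

assert check_constraint("SELECT * FROM users", "contains_any", ["users", "payments"])
assert not check_constraint("SELECT 1", "contains_any", ["users", "payments"])
```

Note that substring matching like this is case-insensitive here only because of the explicit `.lower()`; whether the real engine normalizes case is not stated in the docs.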


7. Sentinel — Compliance Reports

from agentguard.compliance import ComplianceReport
from agentguard.registry import Registry
from agentguard.policies import PolicyEngine

registry = Registry()
engine = PolicyEngine.from_yaml("policies.yaml")

report = ComplianceReport(registry=registry, policy_engine=engine)

# Per-agent activity summary
summary = report.agent_summary("support-bot", since="2026-03-01")
print(summary["mcp_calls"])     # total MCP tool calls
print(summary["denied_calls"])  # blocked calls

# All denied actions
violations = report.violations(since="2026-03-01")
for v in violations:
    print(v["timestamp"], v["agent_id"], v["tool"], v["policy"])

# Compliance posture mapping (EU AI Act, NIST AI RMF, ISO 42001)
posture = report.compliance_posture()
for fw, controls in posture["frameworks"].items():
    print(f"\n{fw}")
    for ctrl in controls:
        print(f"  [{ctrl['status']}] {ctrl['control']}")

# Export to Markdown
report.export_markdown("compliance-report.md")

# Export to JSON
report.export_json("compliance-report.json")

8. Audit Trail

from agentguard import audit_query
from agentguard.audit import log_event

# Query events for an agent
events = audit_query("support-bot", since="2026-03-01")
for event in events:
    print(event["timestamp"], event["action"], event["outcome"])

# Filter by action or outcome
denied = audit_query("support-bot", action="scope_check", outcome="denied")

# Log a custom event
log_event(
    agent_id="support-bot",
    action="custom_action",
    details={"info": "something happened"},
    outcome="allowed",
)

Audit logs are written to ~/.agentguard/audit/{agent_id}/{date}.jsonl. Sensitive fields (key, secret, token, password) are automatically redacted.
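Redaction of this kind is typically a recursive key scrub before the event is serialized. A sketch of the idea (the field list comes from the docs above; the implementation itself is hypothetical):

```python
SENSITIVE = {"key", "secret", "token", "password"}

def redact(obj):
    """Recursively replace values of sensitive keys with a placeholder."""
    if isinstance(obj, dict):
        return {
            k: "***REDACTED***" if k.lower() in SENSITIVE else redact(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(v) for v in obj]
    return obj

event = {"action": "rotate", "details": {"service": "openai", "key": "sk-new"}}
print(redact(event))
# {'action': 'rotate', 'details': {'service': 'openai', 'key': '***REDACTED***'}}
```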


File Layout

~/.agentguard/
├── master.key                          # Fernet master key (mode 0600)
├── registry.json                       # Agent identity registry
├── scopes.json                         # Scope policies per agent
├── vaults/
│   └── {agent_id}/
│       └── credentials.enc             # Encrypted credential store
├── snapshots/
│   └── {agent_id}.json                 # Drift baseline snapshots
└── audit/
    └── {agent_id}/
        └── 2026-03-27.jsonl            # JSON-lines audit log

src/agentguard/
├── audit.py             # Audit trail (all tiers)
├── compliance.py        # Sentinel — compliance reports
├── drift.py             # Shield — drift detection
├── guard.py             # Guard + Shield + Sentinel wrapper
├── mcp_interceptor.py   # Sentinel — MCP tool-call interceptor
├── policies.py          # Sentinel — policy engine
├── policy_loader.py     # Sentinel — YAML policy loader
├── registry.py          # Shield — agent identity registry
├── scopes.py            # Shield — OAuth scope enforcement
└── vault.py             # Shield — credential vault

Configuration

AGENTGUARD_HOME env var overrides the default ~/.agentguard directory.

AGENTGUARD_HOME=/var/run/agentguard python my_agent.py
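Resolution presumably falls back to the default when the variable is unset, roughly like this (the function name is illustrative, not an SDK API):

```python
import os
from pathlib import Path

def agentguard_home() -> Path:
    """Illustrative resolution of the AgentGuard data directory."""
    return Path(os.environ.get("AGENTGUARD_HOME", str(Path.home() / ".agentguard")))

os.environ["AGENTGUARD_HOME"] = "/var/run/agentguard"
print(agentguard_home())  # /var/run/agentguard
```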

Development

git clone https://github.com/useagentguard/agentguard-sdk
cd agentguard-sdk/sdk
pip install -e ".[dev]"
pytest

License

MIT © AgentGuard

