# AgentGuard

Compliance infrastructure for AI agents: runtime monitoring, audit trails, guardrails, and auto-generated compliance documentation for autonomous AI agents. Three lines of code make your agent compliant with the EU AI Act, the Colorado AI Act, and Texas TRAIGA.

Zero external dependencies. Runs anywhere Python runs.
```python
from agentguard import monitor, ctx

@monitor(regulation="eu-ai-act", audit="sqlite")
def my_agent(task):
    ctx.log_tool("web_search", query=task)
    ctx.log_reasoning("Searching for relevant information")
    return execute(task)
```
That's it. Every call is now monitored, audited, and checked against guardrails.
## Why AgentGuard
AI agents are making autonomous decisions in production — hiring candidates, approving loans, triaging patients. Regulations are catching up:
| Regulation | Enforcement date | Scope |
|---|---|---|
| EU AI Act (high-risk) | August 2, 2026 | Any AI system sold/used in EU |
| Colorado AI Act | February 1, 2026 | High-risk automated decisions |
| Texas TRAIGA | September 1, 2025 | AI systems deployed in Texas |
AgentGuard gives you compliance-by-default: every agent decision gets a complete audit trail, risk classification, and guardrail enforcement — with ~32 microseconds of overhead per call.
## Install

```bash
pip install agentguard
```

Optional extras:

```bash
pip install agentguard[yaml]       # YAML config file support
pip install agentguard[langchain]  # LangChain callback handler
pip install agentguard[dev]        # pytest, coverage
```
## Quick Start

### Basic monitoring

```python
from agentguard import monitor, ctx

@monitor
def research_agent(topic):
    ctx.log_tool("search", query=topic)
    ctx.log_model("gpt-4", tokens=150)
    ctx.log_reasoning("Found 3 relevant sources, synthesizing")
    return {"summary": "...", "sources": [...]}

result = research_agent("quantum computing trends")
```
### With guardrails

```python
@monitor(
    guardrails=["pii", "scope"],
    pii_action="block",                  # block | warn | redact
    allowed_scopes=["research_agent"],   # restrict to allowed functions
    audit="sqlite",                      # stdout | jsonfile | sqlite
    audit_path="audit.db",
)
def research_agent(query):
    ctx.log_tool("search", query=query)
    return search(query)

# This will raise GuardrailViolation:
research_agent("Find info on john@example.com")
```
### Async support

```python
@monitor(audit="jsonfile", audit_path="events.jsonl")
async def async_agent(task):
    ctx.log_tool("api_call", endpoint="/data")
    result = await fetch_data(task)
    return result
```
### Nested agents

Parent-child relationships are tracked automatically:

```python
@monitor(agent_name="orchestrator")
def orchestrator(task):
    ctx.log_reasoning("Delegating to specialist")
    return specialist(task)

@monitor(agent_name="specialist")
def specialist(task):
    ctx.log_tool("process", task=task)
    return f"Processed: {task}"
```
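Automatic parent-child tracking is typically built on a context-local call stack: each monitored function pushes its name on entry and records whatever was on top as its parent. A minimal sketch of the pattern (the `monitored` decorator and `calls` list below are illustrative, not AgentGuard internals):

```python
import contextvars
import functools

_stack = contextvars.ContextVar("agent_stack", default=())
calls = []  # (agent_name, parent_name) pairs, recorded for illustration

def monitored(agent_name):
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            stack = _stack.get()
            parent = stack[-1] if stack else None   # top of stack = caller
            calls.append((agent_name, parent))
            token = _stack.set(stack + (agent_name,))
            try:
                return f(*args, **kwargs)
            finally:
                _stack.reset(token)                 # pop on exit, even on error
        return wrapper
    return decorate

@monitored("orchestrator")
def orchestrator(task):
    return specialist(task)

@monitored("specialist")
def specialist(task):
    return f"Processed: {task}"
```

Calling `orchestrator("x")` records `("orchestrator", None)` followed by `("specialist", "orchestrator")`.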
## Decorator Forms

```python
@monitor                                          # zero-config
@monitor()                                        # explicit empty
@monitor(regulation="eu-ai-act", audit="sqlite")  # fully configured
```
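Accepting both the bare form and the called form is a standard Python decorator pattern; a minimal sketch of how such a dual-form decorator can be written (the `_options` attribute is illustrative, not part of AgentGuard's API):

```python
import functools

def monitor(func=None, **options):
    """Usable bare (@monitor) or with keyword options (@monitor(...))."""
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # The real SDK would run guardrails and auditing here.
            return f(*args, **kwargs)
        wrapper._options = options  # illustrative: stash the config on the wrapper
        return wrapper

    if func is not None:   # bare form: @monitor was applied directly
        return decorate(func)
    return decorate        # called form: @monitor(...) returns the decorator

@monitor
def a(x):
    return x + 1

@monitor(regulation="eu-ai-act")
def b(x):
    return x * 2
```

The bare form receives the function as `func`; the called form receives only keyword options and returns the real decorator.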
## Context API

Inside any `@monitor`-decorated function, use `ctx` to capture structured data:

```python
from agentguard import ctx

ctx.log_tool("tool_name", key="value")  # log tool/action usage
ctx.log_reasoning("explanation")        # log decision reasoning
ctx.log_model("gpt-4", tokens=250)      # log model and token usage
ctx.set("custom_key", "custom_value")   # arbitrary metadata
ctx.get("custom_key")                   # retrieve metadata
ctx.active                              # True inside @monitor
```

All `ctx` methods are no-ops outside `@monitor`, so your code never crashes.
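A fail-open context of this kind is straightforward to sketch: back it with a context-local record and make every method check for one before doing anything (illustrative code, not the SDK's implementation):

```python
import contextvars

_active = contextvars.ContextVar("agentguard_active", default=None)

class _Ctx:
    @property
    def active(self):
        return _active.get() is not None

    def log_tool(self, name, **data):
        record = _active.get()
        if record is None:        # outside @monitor: silently do nothing
            return
        record.append(("tool", name, data))

ctx = _Ctx()

# Outside any monitored call, everything is a safe no-op:
ctx.log_tool("search", query="x")   # does nothing, raises nothing

# Inside a monitored call, the decorator would set a record list:
token = _active.set([])
ctx.log_tool("search", query="x")
events = _active.get()
_active.reset(token)
```

The "check for `None` first" guard is what makes the same code safe to run with or without the decorator.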
## Guardrails

### PII Detection

Regex-based detection of emails, phone numbers, SSNs, credit cards, and IP addresses:

```python
@monitor(guardrails=["pii"], pii_action="block")
def agent(data):
    ...  # raises GuardrailViolation if PII is found in the input

@monitor(guardrails=["pii"], pii_action="warn")
def agent(data):
    ...  # logs a warning but continues

@monitor(guardrails=["pii"], pii_action="redact")
def agent(data):
    ...  # replaces PII with [REDACTED] in the output
```
Pre-execution guardrails can block a call outright. Post-execution guardrails only warn or redact, because by then any side effects have already happened.
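The detect and redact behaviors can be approximated with a few standard-library regexes. The patterns below are simplified illustrations for two of the PII classes, not AgentGuard's actual rules:

```python
import re

# Simplified patterns; production detectors are more thorough.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Return the names of PII classes detected in text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def redact_pii(text):
    """Replace every match with [REDACTED], as a redact action would."""
    for pat in PII_PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text
```

A `block` action would raise on a non-empty `find_pii` result before the function runs; `redact` would pass the result through `redact_pii` afterwards.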
### Scope Enforcement

Restrict which functions can run:

```python
@monitor(guardrails=["scope"], allowed_scopes=["read_data", "search"])
def unauthorized_action():
    ...  # raises GuardrailViolation
```
## Audit Trail

Three built-in backends:

| Backend | Use case | Config |
|---|---|---|
| `stdout` | Development/debugging | `audit="stdout"` |
| `jsonfile` | Simple production logging | `audit="jsonfile", audit_path="events.jsonl"` |
| `sqlite` | Queryable audit trail | `audit="sqlite", audit_path="audit.db"` |
The audit trail runs on a background daemon thread with a bounded queue (10K events). It never blocks your agent and never crashes — even if the backend fails.
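A non-blocking, fail-safe emitter of this shape can be built with a bounded `queue.Queue` and a daemon worker thread. This is a sketch of the general pattern, not the actual backend code:

```python
import queue
import threading

class AuditEmitter:
    def __init__(self, write, maxsize=10_000):
        self._q = queue.Queue(maxsize=maxsize)  # bounded: memory stays capped
        self._write = write                     # backend callable, e.g. file/db write
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()                          # daemon: never blocks interpreter exit

    def emit(self, event):
        """Never blocks and never raises: drop the event if the queue is full."""
        try:
            self._q.put_nowait(event)
        except queue.Full:
            pass

    def _drain(self):
        while True:
            event = self._q.get()
            try:
                self._write(event)   # a failing backend must not kill the worker
            except Exception:
                pass
            finally:
                self._q.task_done()
```

`put_nowait` keeps the hot path free of I/O, and the bare `except` around the backend write is what makes a broken backend degrade to dropped events instead of crashed agents.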
```python
# Query the audit trail programmatically
from agentguard.audit.backends import SQLiteBackend

backend = SQLiteBackend(path="audit.db")
events = backend.query(limit=50, agent_name="my_agent")
```
## Configuration

### Resolution order

`kwargs` > environment variables > config file > defaults
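That precedence amounts to a first-match chain over the four sources. A sketch of the idea (names and defaults here are illustrative; the real resolver also parses types like lists and booleans):

```python
import os

DEFAULTS = {"audit_backend": "stdout", "pii_action": "warn"}

def resolve(key, kwargs, config_file, env_prefix="AGENTGUARD_"):
    """Return the first value found: kwargs > env vars > config file > defaults."""
    if key in kwargs:
        return kwargs[key]
    env_val = os.environ.get(env_prefix + key.upper())
    if env_val is not None:
        return env_val
    if key in config_file:
        return config_file[key]
    return DEFAULTS.get(key)
```

So a decorator kwarg always wins, and an environment variable overrides anything written in `agentguard.json`.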
### Config file

Place `agentguard.json` (or `.yaml`/`.yml`) in your project root; AgentGuard walks up from the current working directory to find it:

```json
{
  "audit_backend": "sqlite",
  "audit_path": "audit.db",
  "guardrails": ["pii", "scope"],
  "pii_action": "warn",
  "allowed_scopes": ["research", "summarize"],
  "regulation": "eu-ai-act"
}
```
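The walk-up search is a common discovery pattern (git and many build tools use the same trick); a simplified sketch of the assumed behavior using `pathlib`:

```python
from pathlib import Path

CONFIG_NAMES = ("agentguard.json", "agentguard.yaml", "agentguard.yml")

def find_config(start=None):
    """Walk from start (default: cwd) up to the filesystem root and return
    the first config file found, or None."""
    here = Path(start or Path.cwd()).resolve()
    for directory in (here, *here.parents):
        for name in CONFIG_NAMES:
            candidate = directory / name
            if candidate.is_file():
                return candidate
    return None
```

Because `Path.parents` ends at the filesystem root, the search terminates even when no config file exists anywhere on the path.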
### Environment variables

```bash
AGENTGUARD_AUDIT_BACKEND=sqlite
AGENTGUARD_AUDIT_PATH=audit.db
AGENTGUARD_PII_ACTION=block
AGENTGUARD_REGULATION=eu-ai-act
AGENTGUARD_ALLOWED_SCOPES=read,write,search
AGENTGUARD_GUARDRAILS=pii,scope
```
## CLI

```bash
# Classify the risk level of a Python file
agentguard classify agent.py
agentguard classify agent.py --json

# Check compliance against a regulation
agentguard check agent.py --regulation eu-ai-act

# Query the audit trail
agentguard trail --backend jsonfile --path events.jsonl --limit 20
agentguard trail --backend sqlite --path audit.db --agent my_agent

# Generate Annex IV documentation
agentguard docs --name "My AI System" --output compliance.md
agentguard docs --name "My AI System" --trail-path audit.db
```
## EU AI Act Compliance

AgentGuard maps your agent against Annex III high-risk categories (biometric, employment, education, law enforcement, etc.) and generates Annex IV technical documentation:

```python
from agentguard.regulations.eu_ai_act import EUAIActRegulation

reg = EUAIActRegulation()

# Classify risk
risk = reg.classify_risk({
    "purpose": "candidate screening",
    "domain": "recruitment",
})
# Returns: "high"

# Full compliance check
result = reg.check_compliance({
    "purpose": "candidate screening",
    "domain": "recruitment",
    "documentation": True,
    "human_oversight": True,
    "audit_trail": True,
    "risk_management": True,
})
print(result.compliant)        # True
print(result.recommendations)  # []
```
## LangChain Integration

```python
from langchain.chains import LLMChain

from agentguard import monitor, ctx
from agentguard.integrations.langchain import AgentGuardCallbackHandler

@monitor(regulation="eu-ai-act", audit="sqlite", audit_path="audit.db")
def langchain_agent(query):
    chain = LLMChain(llm=llm, prompt=prompt)  # llm and prompt defined elsewhere
    return chain.invoke(
        {"query": query},
        config={"callbacks": [AgentGuardCallbackHandler()]},
    )
```
The callback handler automatically captures model name, token usage, and tool calls into the AgentGuard context.
## Design Principles

- **Never crash.** AgentGuard is fail-open: if monitoring fails, your agent runs unmonitored. Audit never blocks. Context methods are no-ops outside `@monitor`.
- **Zero dependencies.** The core SDK uses only the Python standard library. No supply chain risk.
- **Microsecond overhead.** ~32 microseconds per decorated call. Background audit thread. No I/O on the hot path.
- **Thread-safe and async-safe.** Uses `contextvars` for proper isolation across threads and async tasks.
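`contextvars` is what makes per-call state safe under concurrency: each asyncio task runs in its own copy of the context, so concurrent agents never see each other's values. A standalone demonstration (no AgentGuard code involved):

```python
import asyncio
import contextvars

current_agent = contextvars.ContextVar("current_agent", default=None)

async def run_agent(name, results):
    current_agent.set(name)      # visible only within this task's context
    await asyncio.sleep(0)       # yield so the tasks interleave
    results.append((name, current_agent.get()))

async def main():
    results = []
    await asyncio.gather(*(run_agent(f"agent-{i}", results) for i in range(3)))
    return results

results = asyncio.run(main())
```

Even though the three tasks interleave, each one reads back its own name; a plain module-level global would be clobbered by whichever task set it last.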
## Architecture

```
@monitor decorator
|
+-- Pre-guardrails (may block)
|   +-- PII check
|   +-- Scope check
|
+-- Function execution (with DecisionContext active)
|   +-- ctx.log_tool()
|   +-- ctx.log_reasoning()
|   +-- ctx.log_model()
|
+-- Post-guardrails (warn/redact only)
|
+-- AuditTrail.emit() (non-blocking, background thread)
    +-- StdoutBackend / JSONFileBackend / SQLiteBackend
```
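The flow above condenses to one wrapper: run pre-guardrails (which may raise), execute, run post-guardrails as transforms, then hand an event to the audit backend. A schematic version under illustrative names, not the actual decorator:

```python
class GuardrailViolation(Exception):
    pass

def run_pipeline(func, args, pre_checks, post_checks, emit):
    # Pre-guardrails may block the call entirely.
    for check in pre_checks:
        if not check(args):
            raise GuardrailViolation(f"blocked by {check.__name__}")
    result = func(*args)
    # Post-guardrails may only transform (warn/redact), never block.
    for transform in post_checks:
        result = transform(result)
    emit({"func": func.__name__, "result": result})  # non-blocking in the real SDK
    return result
```

The asymmetry in the diagram is visible here: pre-checks sit before `func` and can raise, while post-checks only rewrite the already-computed result.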
## Development

```bash
git clone https://github.com/agentguardinc/agentguard-python.git
cd agentguard-python
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
pytest --cov=agentguard --cov-report=term-missing
```
## License

Apache 2.0. See LICENSE.