DecisionGuard Python SDK
Runtime governance for AI agents. Intercept tool calls before execution and get an ALLOW / BLOCK / CONDITIONAL / ESCALATE verdict from DecisionGuard.
Install
```bash
pip install decisionguard               # core only (bring your own HTTP client)
pip install "decisionguard[httpx]"      # recommended — sync + async
pip install "decisionguard[langchain]"  # LangChain + httpx
pip install "decisionguard[all]"        # everything
```
Quick start
```python
from decisionguard import DecisionGuardClient, DGBlockedError

# Reads DG_API_KEY and DG_BASE_URL from the environment
client = DecisionGuardClient.from_env()

response = client.audit({
    "actor": {"id": "my-agent", "type": "agent", "authority": "supervised"},
    "intent": {
        "requested_goal": "Deploy updated service to production",
        "proposed_action": "helm upgrade my-service --set image.tag=v2.1.0",
    },
    "environment": "production",
    "tool": {
        "name": "helm",
        "operation": "upgrade",
        "resource_name": "my-service",
        "change_type": "infrastructure",
    },
})

print(response["verdict"])  # ALLOW | BLOCK | CONDITIONAL | ESCALATE
print(response["summary"])
```
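What you do with each verdict is up to the caller. A minimal sketch of branching on it, continuing from the response above; the `conditions` field on CONDITIONAL responses is an assumption here, not documented API:

```python
verdict = response["verdict"]
if verdict == "ALLOW":
    print("Executing:", response["summary"])       # proceed with the helm upgrade
elif verdict == "CONDITIONAL":
    # Hypothetical field: conditions the caller must satisfy before proceeding
    print("Proceed only if:", response.get("conditions"))
elif verdict == "ESCALATE":
    print("Awaiting human approval:", response["summary"])
else:  # BLOCK
    print("Refusing to execute:", response["summary"])
```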
Adding facts to an audit
The `facts` field signals what kind of data is in play so DecisionGuard can apply the right sensitivity rules:
```python
response = client.audit({
    "actor": {"id": "my-agent", "type": "agent", "authority": "supervised"},
    "intent": {
        "requested_goal": "Export user records to CSV",
        "proposed_action": "db.export(table='users')",
    },
    "environment": "production",
    "tool": {"name": "db", "operation": "export"},
    "facts": {
        "has_sensitive_data": True,
        "data_classifications": ["PII", "financial"],
        "risk_signals": ["bulk_export", "cross_border_transfer"],
    },
})
```
LangChain
Wrap any `BaseTool` so DecisionGuard is consulted before every call:
```python
from langchain.agents import initialize_agent
from langchain_community.tools import ShellTool
from decisionguard import DGGuardedTool, guard_tools, DecisionGuardClient

client = DecisionGuardClient.from_env()
shell = ShellTool()

# Wrap a single tool
guarded_shell = DGGuardedTool(
    inner_tool=shell,
    dg_client=client,
    actor_id="my-langchain-agent",
    environment="production",
    data_classifications=["infrastructure"],
    risk_signals=["shell_execution"],
)

# Or wrap a whole list at once (tool2, tool3: any other BaseTool instances)
guarded_tools = guard_tools(
    [shell, tool2, tool3],
    dg_client=client,
    actor_id="my-langchain-agent",
    environment="production",
)

# Drop them into your agent exactly like the originals
agent = initialize_agent(tools=guarded_tools, ...)
```
Async (`_arun`) is also supported — requires `httpx`.
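A minimal async sketch, assuming the guarded wrapper forwards LangChain's standard async entry points (`ainvoke` / `_arun`) to the inner `ShellTool`:

```python
import asyncio

async def main():
    # DecisionGuard is consulted over httpx before the inner ShellTool runs
    result = await guarded_shell.ainvoke({"commands": ["ls -la"]})
    print(result)

asyncio.run(main())
```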
CrewAI
```python
from decisionguard import DecisionGuardCrewAuditor, DecisionGuardClient

client = DecisionGuardClient.from_env()
auditor = DecisionGuardCrewAuditor(client, actor_id="crewai-deployer")

# Guard an individual task before crew execution
auditor.audit_task(
    task_description="Merge feature branch to main",
    agent_role="deployer",
    tool_name="git",
    tool_args={"command": "merge feature/new-model"},
    data_classifications=["source_code"],
    risk_signals=["branch_merge"],
)

# Or wrap a plain function
@auditor.guard_tool(tool_name="run_tests", agent_role="qa")
def run_tests(suite: str) -> str:
    ...
```
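Calling the decorated function then audits before the body runs. Whether a refusal surfaces as `DGBlockedError` (as it does with `client.audit`) is an assumption in this sketch:

```python
from decisionguard import DGBlockedError

try:
    print(run_tests("integration"))   # audited before the body executes
except DGBlockedError as e:
    print("Blocked:", e.response["summary"])
```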
AutoGen
```python
from decisionguard import DecisionGuardAuditor, DecisionGuardClient

client = DecisionGuardClient.from_env()
auditor = DecisionGuardAuditor(client, actor_id="autogen-executor")

# Use as a function-call hook inside an AssistantAgent
hook = auditor.create_hook(goal="Execute database maintenance")

# Returns True (allowed) or False (blocked/escalated)
allowed = hook("drop_table", {"table": "sessions"}, sender_name="assistant")

# Or call directly with facts
auditor.audit_function_call(
    function_name="send_report",
    arguments={"to": "finance@co.com"},
    goal="Email quarterly report",
    data_classifications=["financial", "PII"],
    risk_signals=["external_email"],
)
```
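One way to wire the hook in is to gate each entry of an agent's function map before execution. A sketch; the `guarded` wrapper and `execute_sql` are illustrative, not part of the SDK:

```python
def guarded(fn, name):
    """Run fn(**kwargs) only when the DecisionGuard hook returns True."""
    def wrapper(**kwargs):
        if not hook(name, kwargs, sender_name="assistant"):
            return f"{name} blocked by DecisionGuard"
        return fn(**kwargs)
    return wrapper

def execute_sql(query: str) -> str:   # illustrative tool
    return f"ran: {query}"

function_map = {"execute_sql": guarded(execute_sql, "execute_sql")}
```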
OpenAI Agents SDK
```python
from decisionguard import DecisionGuardRail, DecisionGuardClient

client = DecisionGuardClient.from_env()
rail = DecisionGuardRail(
    client,
    actor_id="openai-agent",
    on_block=lambda r: print("Blocked:", r["summary"]),
)

# Call before every tool execution
rail.before_tool_call(
    tool_name="send_email",
    tool_args={"to": "team@company.com", "body": "Deploying now"},
    agent_goal="notify team of deployment",
)
```
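In practice the rail call goes at the top of each tool implementation. A sketch with an illustrative tool; whether a BLOCK also raises, beyond invoking `on_block`, is not shown here:

```python
def send_email(to: str, body: str) -> str:   # illustrative tool
    # Consult DecisionGuard first; on_block fires if the verdict is BLOCK
    rail.before_tool_call(
        tool_name="send_email",
        tool_args={"to": to, "body": body},
        agent_goal="notify team of deployment",
    )
    return f"sent to {to}"   # the actual send goes here
```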
Fact-checking
Verify content for misinformation, logical errors, unsupported claims, and more:
```python
from decisionguard import DecisionGuardClient

client = DecisionGuardClient.from_env()

result = client.fact_check(
    content="The EU AI Act was signed into law in 2023 and applies to all AI systems globally.",
    context="Legal compliance review",
    checks=["misinformation", "errors", "unsupported_claims"],
)

print(result["verdict"])     # PASS | FAIL | WARN | INCOMPLETE
print(result["dg_verdict"])  # ALLOW | BLOCK | REQUIRE_APPROVAL
print(result["summary"])
print(result["review_id"])   # persisted audit trail

for issue in result["issues"]:
    print(f"[{issue['severity'].upper()}] {issue['description']}")
    if issue.get("suggestion"):
        print(f"  → {issue['suggestion']}")
```
Available check types: `misinformation`, `inconsistencies`, `errors`, `incompleteness`, `unsupported_claims`, `logical_errors`, `missing_citations`.
Async:

```python
result = await client.afact_check(
    content="...",
    checks=["misinformation", "errors"],
)
```
Auto-audit (observe-only)
Record every action without blocking. Good for telemetry, shadow mode, and gradual rollouts:
```python
client.auto_audit(
    tool_name="vector_search",
    action_summary="Semantic search over customer embeddings",
    parameters={"query": "refund policy", "top_k": 5},
    environment="production",
    resource="customer-vectors",
)

# Async
await client.aauto_audit(
    tool_name="vector_search",
    action_summary="Semantic search over customer embeddings",
    environment="production",
)
```
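Shadow mode pairs naturally with a decorator that records every call without ever blocking. A sketch; the decorator is illustrative, not part of the SDK:

```python
import functools

def shadow_audit(tool_name):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Record the call; auto_audit only observes, it never blocks
            client.auto_audit(
                tool_name=tool_name,
                action_summary=f"{fn.__name__} called",
                parameters=kwargs,
                environment="production",
            )
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@shadow_audit("vector_search")
def vector_search(query: str, top_k: int = 5):
    ...
```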
List reviews
Retrieve the audit trail for your tenant:
```python
reviews = client.list_reviews(
    limit=20,
    decision="BLOCK",
    environment="production",
    change_type="infrastructure",
)
for r in reviews:
    print(r["id"], r["verdict"], r["summary"])

# Async
reviews = await client.alist_reviews(decision="ESCALATE")
```
Batch audit
Submit up to 50 audits in a single round-trip:
```python
requests = [
    {
        "actor": {"id": "agent-1", "type": "agent", "authority": "supervised"},
        "intent": {"requested_goal": "Read logs", "proposed_action": "tail /var/log/app.log"},
        "environment": "production",
        "tool": {"name": "bash", "operation": "read"},
    },
    {
        "actor": {"id": "agent-2", "type": "agent", "authority": "autonomous"},
        "intent": {"requested_goal": "Send alert", "proposed_action": "slack.post(...)"},
        "environment": "production",
        "tool": {"name": "slack", "operation": "post"},
    },
]

result = client.batch_audit(requests)
for item in result["results"]:
    print(item["index"], item["verdict"])

# Async
result = await client.abatch_audit(requests)
```
Resources
List active resources registered to the tenant:
```python
resources = client.list_resources(resource_type="database")
for r in resources["resources"]:
    print(r["name"], r["resource_type"])

# Async
resources = await client.alist_resources(tag="prod")
```
Identity snapshot
Retrieve the identity context that was recorded at audit time:
```python
identity = client.get_identity(review_id)   # review_id from a prior audit response
print(identity["actor_id"], identity["authority"])

# Async
identity = await client.aget_identity(review_id)
```
Fetch a stored review
Poll for an approval decision after ESCALATE or REQUIRE_APPROVAL:
```python
review = client.get_review(result["review_id"])
print(review["verdict"])
```
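A simple polling loop; that the verdict stays ESCALATE until a human decides, and the ten-second interval, are assumptions:

```python
import time

review = client.get_review(result["review_id"])
while review["verdict"] == "ESCALATE":
    time.sleep(10)
    review = client.get_review(result["review_id"])
print("Final decision:", review["verdict"])
```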
Tracing multi-step workflows
Tie a chain of related audits together with a shared trace ID:
```python
traced = client.with_trace("trace-abc-123")

# All audits made with `traced` share the same trace_id
traced.audit({...})
traced.audit({...})
```
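If the workflow has no natural correlation ID, a generated one works. A sketch; `steps` and its audit request dicts are illustrative:

```python
import uuid

traced = client.with_trace(f"trace-{uuid.uuid4()}")
for step in steps:   # steps: your workflow's audit request dicts (illustrative)
    traced.audit(step)
```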
CI / workflow pipelines
One-liner for shell scripts, GitHub Actions, n8n, etc.:

```bash
dg-audit "Deploy to production" "helm upgrade api" helm upgrade ci-pipeline
```

Or from Python:

```python
from decisionguard import audit_or_fail, DecisionGuardClient

client = DecisionGuardClient.from_env()

# Raises RuntimeError on any non-ALLOW verdict
audit_or_fail(
    client,
    actor_id="ci-pipeline",
    goal="Deploy to production",
    action="helm upgrade api --set image.tag=v3",
    tool_name="helm",
    operation="upgrade",
    environment="production",
)
```
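In a pipeline you usually want the refusal to become a non-zero exit code. A sketch built on the RuntimeError contract above:

```python
import sys

try:
    audit_or_fail(
        client,
        actor_id="ci-pipeline",
        goal="Deploy to production",
        action="helm upgrade api --set image.tag=v3",
        tool_name="helm",
        operation="upgrade",
        environment="production",
    )
except RuntimeError as e:
    print(f"DecisionGuard refused the deploy: {e}", file=sys.stderr)
    sys.exit(1)
```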
Error handling
```python
from decisionguard import DecisionGuardClient, DGBlockedError, DGEscalatedError

client = DecisionGuardClient.from_env()

try:
    client.audit(request)   # request: an audit request dict as in Quick start
except DGBlockedError as e:
    print("Blocked:", e.response["summary"])
except DGEscalatedError as e:
    print("Needs human approval:", e.response["summary"])
    # Pause and wait for review at e.response["links"]["review_url"]
```
Environment variables
| Variable | Required | Description |
|---|---|---|
| `DG_API_KEY` | Yes | Your tenant API key |
| `DG_BASE_URL` | Yes | e.g. `https://decision-guard.com` |
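If you prefer explicit configuration over `from_env()`, the constructor presumably takes the same two values; a sketch assuming `api_key` and `base_url` keyword arguments:

```python
import os
from decisionguard import DecisionGuardClient

# Assumption: keyword arguments mirror DG_API_KEY / DG_BASE_URL
client = DecisionGuardClient(
    api_key=os.environ["DG_API_KEY"],
    base_url="https://decision-guard.com",
)
```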