openai-agents-trust
Trust & governance layer for the OpenAI Agents SDK: policy enforcement, trust-gated handoffs, and tamper-evident Merkle audit trails built on native SDK guardrails and hooks.
Built by AgentMesh — the open-source trust layer for multi-agent systems. Similar integrations merged into Dify (65K ⭐), LlamaIndex (47K ⭐), and Microsoft Agent-Lightning (15K ⭐).
Install
pip install openai-agents-trust
Quick Start
```python
from agents import Agent, Runner
from openai_agents_trust import (
    trust_input_guardrail,
    policy_input_guardrail,
    GovernanceHooks,
    TrustGuardrailConfig,
    PolicyGuardrailConfig,
    TrustScorer,
    GovernancePolicy,
    AuditLog,
)

# Shared governance state
scorer = TrustScorer()
audit = AuditLog()
policy = GovernancePolicy(
    name="production",
    max_tool_calls=20,
    blocked_patterns=[r"DROP TABLE", r"rm -rf", r"eval\("],
    min_trust_score=0.7,
)

# Create guardrails
trust_config = TrustGuardrailConfig(scorer=scorer, min_score=0.7, audit_log=audit)
policy_config = PolicyGuardrailConfig(policy=policy, audit_log=audit)

# Attach to agents
agent = Agent(
    name="researcher",
    instructions="You are a research assistant.",
    input_guardrails=[
        trust_input_guardrail(trust_config),
        policy_input_guardrail(policy_config),
    ],
)

# Run with governance hooks (inside an async function or async REPL)
result = await Runner.run(
    agent,
    input="Analyze this data",
    hooks=GovernanceHooks(policy=policy, scorer=scorer, audit_log=audit),
)

# Verify audit integrity
print(f"Audit entries: {len(audit)}")
print(f"Chain valid: {audit.verify_chain()}")
```
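For intuition, here is a minimal sketch of what a `verify_chain()`-style integrity check does. The `HashChainAudit` class below is illustrative only, not this library's implementation (the real `AuditLog` uses a Merkle construction; this sketch uses a simpler linear hash chain): each entry's digest covers the previous digest, so editing any past record invalidates everything after it.

```python
import hashlib
import json

class HashChainAudit:
    """Illustrative tamper-evident log: each entry hashes the previous one."""

    def __init__(self):
        self.entries = []  # list of (record, digest) pairs

    def append(self, record: dict) -> None:
        prev_hash = self.entries[-1][1] if self.entries else "0" * 64
        payload = json.dumps(record, sort_keys=True) + prev_hash
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((record, digest))

    def verify_chain(self) -> bool:
        # Recompute every digest from scratch; any edited record breaks the chain.
        prev_hash = "0" * 64
        for record, digest in self.entries:
            payload = json.dumps(record, sort_keys=True) + prev_hash
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            prev_hash = digest
        return True

audit = HashChainAudit()
audit.append({"event": "tool_call", "name": "search"})
audit.append({"event": "handoff", "to": "billing"})
print(audit.verify_chain())            # True
audit.entries[0][0]["event"] = "edited"  # tamper with a past record
print(audit.verify_chain())            # False
```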
Trust-Gated Handoffs
```python
from openai_agents_trust import trust_gated_handoff

billing_agent = Agent(name="billing", instructions="Handle billing.")
support_agent = Agent(name="support", instructions="Handle support.")

triage = Agent(
    name="triage",
    handoffs=[
        trust_gated_handoff(billing_agent, scorer=scorer, min_score=0.8),
        trust_gated_handoff(support_agent, scorer=scorer, min_score=0.6),
    ],
)
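The gating idea is simple: a handoff is only offered to the model when the target agent's trust score meets the threshold. Here is a rough sketch of that predicate (the `SimpleScorer` class and `handoff_enabled` helper are hypothetical, not this library's API; the library wires the equivalent check into the SDK's `is_enabled` mechanism):

```python
class SimpleScorer:
    """Hypothetical stand-in for TrustScorer: tracks per-agent trust scores."""

    def __init__(self):
        self.scores = {}  # agent name -> trust score in [0, 1]

    def score(self, agent_name: str) -> float:
        return self.scores.get(agent_name, 0.5)  # unseen agents start neutral

def handoff_enabled(scorer: SimpleScorer, agent_name: str, min_score: float) -> bool:
    # A trust-gated handoff evaluates a predicate like this before the
    # handoff tool is exposed to the model.
    return scorer.score(agent_name) >= min_score

scorer = SimpleScorer()
scorer.scores["billing"] = 0.9
scorer.scores["support"] = 0.4
print(handoff_enabled(scorer, "billing", 0.8))  # True
print(handoff_enabled(scorer, "support", 0.6))  # False
```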
Features
| Feature | SDK Hook | Description |
|---|---|---|
| Trust Guardrail | InputGuardrail | Blocks agents below trust threshold |
| Policy Guardrail | InputGuardrail | Enforces blocked patterns, tool limits |
| Content Guardrail | OutputGuardrail | Validates output against policies |
| Governance Hooks | RunHooksBase | Tracks tools, audits handoffs, scores trust |
| Trust-Gated Handoff | is_enabled | Disables handoffs to untrusted agents |
| Merkle Audit | — | Tamper-evident chain of all decisions |
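The blocked-pattern enforcement in the Policy Guardrail amounts to a regex scan over the input. A minimal sketch of that check (the `check_patterns` helper is illustrative, not the library's actual code):

```python
import re

# Same patterns as the Quick Start policy above.
BLOCKED = [r"DROP TABLE", r"rm -rf", r"eval\("]

def check_patterns(text: str, patterns=BLOCKED):
    """Return the first blocked pattern found in text, or None if clean."""
    for pat in patterns:
        if re.search(pat, text):
            return pat
    return None

print(check_patterns("please summarize this report"))   # None
print(check_patterns("run eval(payload) on the host"))  # eval\(
```

A real guardrail would turn a non-None result into a tripwire (blocking the run) and write the decision to the audit log.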
License
MIT
File details
Details for the file openai_agents_trust-0.1.0.tar.gz.
File metadata
- Download URL: openai_agents_trust-0.1.0.tar.gz
- Upload date:
- Size: 10.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3f67e36cc9f701007c17906ea3e5044cf8d5c341da17495c11d9679cb374dca0 |
| MD5 | c18870d96524c6a85b3c92c60fa93a7c |
| BLAKE2b-256 | a9df15bee1663062886abc8ca332985e4e4dd103b8f5c3b83131bad246644557 |
File details
Details for the file openai_agents_trust-0.1.0-py3-none-any.whl.
File metadata
- Download URL: openai_agents_trust-0.1.0-py3-none-any.whl
- Upload date:
- Size: 11.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3c6b5fb0a49430326260393f7a97a01d00a58454322210aff8639993a1b04e57 |
| MD5 | ef44ae6879f1fcfaf6e253310b301689 |
| BLAKE2b-256 | 42158a68c4bd5e546fd2b28d614a74d36535f559fe5bbbdf25ca50b2f88d2ff0 |