CortexHub Python SDK - Runtime governance layer for AI Agents
Runtime Governance for AI Agents - Policy enforcement, PII/secrets detection, complete audit trails with OpenTelemetry.
Installation
# Core SDK
pip install cortexhub
# With framework support (choose one or more)
pip install cortexhub[langgraph] # LangGraph
pip install cortexhub[crewai] # CrewAI
pip install cortexhub[openai-agents] # OpenAI Agents SDK
pip install cortexhub[claude-agents] # Claude Agent SDK
# All frameworks (for development)
pip install cortexhub[all]
Python support: 3.11–3.12. Python 3.13 is not supported.
Quick Start
from cortexhub import init, Framework
# Initialize CortexHub FIRST, before importing your framework
cortex = init(
agent_id="customer_support_agent",
framework=Framework.LANGGRAPH, # or CREWAI, OPENAI_AGENTS, CLAUDE_AGENTS
enable_mcp=True, # default; disable if you don't use MCP
)
# Now import and use your framework
from langgraph.prebuilt import create_react_agent
# Continue with your LangGraph setup...
Supported Frameworks
| Framework | Enum Value | Install |
|---|---|---|
| LangGraph | Framework.LANGGRAPH | pip install cortexhub[langgraph] |
| CrewAI | Framework.CREWAI | pip install cortexhub[crewai] |
| OpenAI Agents | Framework.OPENAI_AGENTS | pip install cortexhub[openai-agents] |
| Claude Agents | Framework.CLAUDE_AGENTS | pip install cortexhub[claude-agents] |
Tracing Coverage
All frameworks emit run.started and run.completed/run.failed for each run.
Tool spans (tool.invoke) and model spans (llm.call) vary by SDK:
- LangGraph: tool calls via BaseTool.invoke, LLM calls via BaseChatModel.invoke/ainvoke
- CrewAI: tool calls via CrewStructuredTool.invoke/BaseTool.run, LLM calls via LiteLLM and BaseLLM.call/acall
- OpenAI Agents: tool calls via function_tool, LLM calls via OpenAIResponsesModel and OpenAIChatCompletionsModel
- Claude Agents: tool calls via @tool and built-in tool hooks; LLM calls run inside the Claude Code CLI and are not intercepted by the Python SDK
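Conceptually, each adapter wraps the framework's tool entry point so every call is observed before and after it runs. A minimal stdlib-only sketch of that interception pattern (illustrative only — not the SDK's actual implementation, and `govern_tool` is a hypothetical name):

```python
import functools

def govern_tool(fn, events):
    """Wrap a tool callable so each invocation records tool.invoke events.

    Illustrative sketch of adapter-style interception; the real SDK also
    evaluates policies and emits OpenTelemetry spans at these points.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        events.append(("tool.invoke.start", fn.__name__))
        try:
            result = fn(*args, **kwargs)
        except Exception:
            events.append(("tool.invoke.error", fn.__name__))
            raise
        events.append(("tool.invoke.ok", fn.__name__))
        return result
    return wrapper

def search(query: str) -> str:
    return f"results for {query}"

events: list = []
search = govern_tool(search, events)
search("refund policy")
```

Because the wrapper preserves the original signature, the agent code calling the tool is unchanged — the same property the framework adapters rely on.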
Configuration
# Required: API key
export CORTEXHUB_API_KEY=ch_live_...
Features
- Policy Enforcement - Cloud configuration, local evaluation
- Decision Signing - Ed25519 cryptographic signature on every governance decision; independently verifiable by anyone with the public key — no database access required
- PII Detection - 50+ entity types (full coverage on first run)
- Secrets Detection - 30+ secret types
- Configurable Guardrails - Select specific PII/secret types to redact
- Custom Patterns - Add company-specific regex patterns
- OpenTelemetry - Industry-standard observability
- Framework Adapters - Automatic interception for all major frameworks
- MCP Interception - Governs MCP tool calls without framework-specific hooks
- Privacy Mode - Metadata-only by default, safe for production
- Offline Policy Cache - Enforce last synced policies without backend connectivity
Privacy Modes
# Production (default) - only metadata sent
cortex = init(agent_id="...", framework=..., privacy=True)
# Sends: tool names, arg schemas, PII types detected
# Never: raw values, prompts, responses
# Development - full data for testing policies
cortex = init(agent_id="...", framework=..., privacy=False)
# Also sends: raw args, results, prompts (for policy testing)
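In privacy mode, argument values are reduced to a schema before anything leaves your process. A rough sketch of what "arg schemas, never raw values" could look like (hypothetical helper, not the SDK's wire format):

```python
def arg_schema(args: dict) -> dict:
    """Reduce tool arguments to {name: type} metadata; raw values are dropped.

    Illustrative only: shows the privacy-mode idea of sending shape, not content.
    """
    return {name: type(value).__name__ for name, value in args.items()}

# Raw call the agent made (never sent in privacy mode):
raw = {"email": "john@email.com", "limit": 5}

# Metadata actually reported:
schema = arg_schema(raw)
```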
MCP Interception
If your agent uses MCP servers, MCP interception is enabled by default:
import cortexhub
cortex = cortexhub.init(
agent_id="my-agent",
framework=cortexhub.Framework.LANGGRAPH,
enable_mcp=True, # default
)
To enable MCP interception without a framework adapter:
cortex = cortexhub.CortexHub(api_key="...")
cortex.enable_mcp()
Offline Policy Cache
Persist policies locally to keep enforcement running if the backend is unreachable:
export CORTEXHUB_ALLOW_OFFLINE_ENFORCEMENT=true
export CORTEXHUB_POLICY_DIR="$HOME/.cortexhub/policies"
When enabled, the SDK loads the most recent policy bundle from disk if it cannot reach the backend during initialization.
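"Most recent bundle from disk" can be pictured as picking the newest policy file in the cache directory. A stdlib-only sketch under that assumption (the SDK's real bundle format and selection logic may differ):

```python
import json
from pathlib import Path

def load_latest_bundle(policy_dir: str):
    """Return the most recently modified *.json policy bundle, or None.

    Illustrative sketch of the offline-cache fallback described above.
    """
    bundles = sorted(Path(policy_dir).glob("*.json"),
                     key=lambda p: p.stat().st_mtime)
    return json.loads(bundles[-1].read_text()) if bundles else None
```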
Handling Governance Outcomes
Policies are created in the CortexHub dashboard. The SDK fetches and enforces them automatically. Wrap your agent's run call in a try/except to handle each outcome:
import cortexhub
cortex = cortexhub.init("my-agent", cortexhub.Framework.LANGGRAPH)
# Your agent code is unchanged. The SDK intercepts tool calls transparently.
try:
result = workflow.invoke(state, config)
except cortexhub.PolicyViolationError as e:
# A policy explicitly denied a tool call.
print(f"Blocked: {e.reasoning}")
except cortexhub.ApprovalRequiredError as e:
# A tool requires human approval before it runs.
# The SDK polls the control plane and resumes automatically when approved.
result = await cortex.wait_for_approval_and_resume(e, workflow, config)
except cortexhub.ApprovalDeniedError as e:
# A reviewer denied the request.
print(f"Denied: {e.reason}")
except cortexhub.ThrottleError as e:
# A rate-limit policy was triggered.
print(f"Rate limited: {e.reasoning}")
except cortexhub.CircuitBreakError as e:
# A circuit breaker opened (cost spike, anomalous volume, etc.).
print(f"Circuit breaker: {e.reasoning}")
How wait_for_approval_and_resume works
- Polls the CortexHub control plane every few seconds until a decision is made.
- When approved: marks the approval internally and calls workflow.invoke(None, config). For LangGraph, the SDK uses interrupt() to checkpoint at the tool-call node; the graph resumes with the exact same args, so no LLM re-run occurs and the approval is auto-detected. No call to mark_approval_granted() is needed.
- If denied/expired: raises ApprovalDeniedError.
- If the default timeout (300s) is exceeded with no decision: re-raises ApprovalRequiredError with the same approval_id so you can surface it to the user.
# Optional: configure timeout
result = await cortex.wait_for_approval_and_resume(
e, workflow, config,
timeout=120, # seconds to wait (default 300)
poll_interval=3, # seconds between polls (default 3)
)
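The poll/timeout semantics above can be sketched as a generic async loop. This is an illustration with stand-in exception classes (`ApprovalDenied`, `ApprovalTimeout`) and a caller-supplied `check_status` coroutine — it is not the SDK's internal code:

```python
import asyncio
import time

class ApprovalDenied(Exception):
    """Stand-in for cortexhub.ApprovalDeniedError."""

class ApprovalTimeout(Exception):
    """Stand-in for the re-raised ApprovalRequiredError on timeout."""

async def wait_for_decision(check_status, approval_id,
                            timeout=300, poll_interval=3):
    """Poll check_status(approval_id) until approved, denied, or timed out."""
    deadline = time.monotonic() + timeout
    while True:
        status = await check_status(approval_id)
        if status == "approved":
            return approval_id
        if status in ("denied", "expired"):
            raise ApprovalDenied(approval_id)
        if time.monotonic() + poll_interval > deadline:
            raise ApprovalTimeout(approval_id)  # same approval_id, surfaceable
        await asyncio.sleep(poll_interval)
```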
Per-framework patterns
LangGraph — interrupt() preserves state at the exact tool call; invoke(None, config)
resumes with the same args, auto-approved:
# async
except cortexhub.ApprovalRequiredError as e:
result = await cortex.wait_for_approval_and_resume(e, workflow, config)
CrewAI — sync framework; use the blocking wait_for_approval() helper, then retry:
# sync
except cortexhub.ApprovalRequiredError as e:
cortex.wait_for_approval(e) # blocks until approved (or denied/timeout)
result = crew.kickoff(inputs=inputs) # retry — same tool call auto-approved
OpenAI Agents SDK — async; wait for approval, then retry:
# async
except cortexhub.ApprovalRequiredError as e:
await cortex.wait_for_approval_and_resume(e) # no workflow arg — just waits
result = await Runner.run(agent, messages) # retry
Claude Agent SDK — async; same pattern:
# async
except cortexhub.ApprovalRequiredError as e:
await cortex.wait_for_approval_and_resume(e)
async for message in claude_agent_sdk.query(prompt, tools=tools):
... # retry
MCP — async; retry the specific tool call:
# async
except cortexhub.ApprovalRequiredError as e:
await cortex.wait_for_approval_and_resume(e)
result = await session.call_tool(tool_name, arguments) # retry
Why retrying works (for non-LangGraph frameworks)
When the same tool is called again with the same args, the SDK computes the same
context_hash. Because the approval was tracked in _pending_approvals, the SDK
automatically re-checks the backend status on the retry call — if approved, the
tool is allowed without creating a new approval record. No manual
mark_approval_granted() needed.
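The key property is that the hash is a deterministic function of the tool name and canonicalized arguments, so a retry with identical args maps to the same pending approval. A plausible sketch (the SDK's actual hashing scheme is not documented here and may differ):

```python
import hashlib
import json

def context_hash(tool_name: str, arguments: dict) -> str:
    """Deterministic hash of a tool call: same tool + same args => same hash.

    Canonical JSON (sorted keys, fixed separators) makes the digest
    independent of dict insertion order. Illustrative only.
    """
    canonical = json.dumps({"tool": tool_name, "args": arguments},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

A retried call like `context_hash("send_email", {...})` with unchanged args reproduces the original digest, which is what lets the approval be matched without a new record.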
Guardrail Configuration
Guardrails control what happens after detection. On first run, the SDK detects all supported PII types. In the dashboard, you choose which detected types to act on (redact/block/allow) for that agent.
Configure in the dashboard:
- Select types to act on: Choose specific PII types (email, phone, etc.)
- Add custom patterns: Regex for company-specific data (employee IDs, etc.)
- Choose action: Redact, block, or monitor only
The SDK applies your configuration automatically for subsequent runs:
# With guardrail policy active:
# Input prompt: "Contact john@email.com about employee EMP-123456"
# After redaction: "Contact [REDACTED-EMAIL_ADDRESS] about employee [REDACTED-CUSTOM_EMPLOYEE_ID]"
# Only configured types are redacted
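The redaction behavior shown in those comments amounts to applying only the enabled patterns to the text. A minimal regex sketch — the `EMP-\d{6}` employee-ID pattern is a hypothetical custom pattern, and the SDK's real detectors are far more sophisticated than regexes:

```python
import re

# Illustrative patterns only; real PII detection covers 50+ entity types.
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CUSTOM_EMPLOYEE_ID": re.compile(r"\bEMP-\d{6}\b"),  # hypothetical custom pattern
}

def redact(text: str, enabled_types: list) -> str:
    """Replace matches of each enabled type with a [REDACTED-<TYPE>] marker."""
    for name in enabled_types:
        text = PATTERNS[name].sub(f"[REDACTED-{name}]", text)
    return text
```

Only types listed in `enabled_types` are touched, mirroring "only configured types are redacted."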
Important: Initialization Order
Always initialize CortexHub FIRST, before importing your framework:
# ✅ CORRECT
from cortexhub import init, Framework
cortex = init(agent_id="my_agent", framework=Framework.LANGGRAPH)
from langgraph.prebuilt import create_react_agent # Import AFTER init
# ❌ WRONG
from langgraph.prebuilt import create_react_agent # Framework imported first
from cortexhub import init, Framework
cortex = init(...) # Too late!
This ensures:
- CortexHub sets up OpenTelemetry before frameworks that also use it
- Framework decorators/classes are properly wrapped
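One way to catch an ordering mistake early is to check `sys.modules` for framework packages before calling init. This is a hypothetical helper, not part of the SDK (and `agents` / `claude_agent_sdk` are assumed module names for the OpenAI and Claude SDKs):

```python
import sys

# Assumed top-level module names for the supported frameworks.
FRAMEWORK_MODULES = ("langgraph", "crewai", "agents", "claude_agent_sdk")

def frameworks_imported_early() -> list:
    """Return framework modules already imported (i.e., before init).

    Illustrative pre-flight check; an empty list means the ordering is safe.
    """
    return [m for m in FRAMEWORK_MODULES if m in sys.modules]
```

Calling this right before `init(...)` and warning on a non-empty result turns the silent "too late" failure into an actionable message.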
Architecture
Agent Decides → [CortexHub] → Agent Executes
│
┌─────┴─────┐
│ │
Policy Guardrails
Engine (PII/Secrets)
│ │
└─────┬─────┘
│
Decision Signing
(Ed25519, per-span)
Signed in your env
before leaving it
│
OpenTelemetry
(to backend)
Every governance decision is signed inside your environment, before the span reaches CortexHub. The private key never leaves your process. The public key is registered with the backend and available at a public endpoint — so any auditor can independently verify any decision without database access.
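The verification story can be sketched with the `cryptography` package: sign a canonical decision payload with an Ed25519 private key, then verify with only the public key. The payload fields here are made up for illustration — the SDK's actual signed span format is not specified in this README:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Agent side: the private key exists only inside your process.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical decision payload; real spans carry more fields.
decision = json.dumps(
    {"span_id": "abc123", "tool": "send_email", "verdict": "allow"},
    sort_keys=True,
).encode()
signature = private_key.sign(decision)

# Auditor side: only the public key and the span payload are needed.
public_key.verify(signature, decision)  # raises InvalidSignature if tampered
```

Since verification needs no secret material, anyone holding the published public key can audit a decision without database access, as described above.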
Development
cd python
# Install with all frameworks
uv sync --all-extras
# Run tests
uv run pytest
# Lint
uv run ruff check .
License
MIT
File details
Details for the file cortexhub-0.2.10.tar.gz.
- Size: 12.9 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5a828cdbd8d310080ab3725db74fbd80a5ace83a1e3d3157e0fd7e15c5db583d |
| MD5 | 81cb214f6a97977dee517b4173b8a748 |
| BLAKE2b-256 | d006060b5e6cc524d2bc16a2ecfc21103dd096aaf9128a9f67b551e128afb6e1 |
File details
Details for the file cortexhub-0.2.10-py3-none-any.whl.
- Size: 12.9 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.8

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 4937ba4ce9c87be44d716b58ce4c4bbbf2eec50e36c2b395c64ba4d813e63138 |
| MD5 | 7eb5b5d734961512ce288cb927b5ae04 |
| BLAKE2b-256 | ac1414caa3fb7a56dc469bfa685c35efdd4f66aba9c0806dce594c7f39e43bfc |