# CortexHub Python SDK

Runtime governance for AI systems. One decorator. Any tool. Any framework.
```bash
pip install cortexhub
```

Python 3.10–3.12.
## How it works
Add @cx.tool to any function. Every call to that function — from a LangGraph agent, a CrewAI crew, a FastAPI endpoint, a cron job, or plain Python — goes through your policies. Configure policies in the control plane. Nothing in code.
```python
import cortexhub

cx = cortexhub.init("healthcare-agent")

@cx.tool
def prescribe_medication(patient_id: str, medication: str, dosage: str) -> str:
    """Prescribe medication to a patient."""
    return db.prescribe(patient_id, medication, dosage)

# That's the integration. Everything else — policies, approvals, audit trail — is in the control plane.
```
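The mechanics behind a governing decorator can be sketched in plain Python. This is an illustrative stand-in, not the SDK's actual implementation: `POLICIES` and `PolicyViolation` are hypothetical names standing in for the control plane and the SDK's error types.

```python
import functools

# Hypothetical in-memory policy table standing in for the control plane.
POLICIES = {"prescribe_medication": "block"}

class PolicyViolation(Exception):
    """Stand-in for the SDK's policy error."""

def governed(func):
    """Route every call through a policy check before executing."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if POLICIES.get(func.__name__) == "block":
            raise PolicyViolation(f"{func.__name__} blocked by policy")
        return func(*args, **kwargs)
    return wrapper

@governed
def prescribe_medication(patient_id: str, medication: str) -> str:
    return f"prescribed {medication} to {patient_id}"
```

Because the check lives in the wrapper, every caller of the function is governed, regardless of which framework invoked it.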
## Full example
```python
import cortexhub

cx = cortexhub.init("my-agent", api_key="...")  # or CORTEXHUB_API_KEY env var

@cx.tool
def process_payment(customer_id: str, amount: str, reference: str) -> dict:
    """Process a payment."""
    return payment_service.charge(customer_id, amount, reference)

@cx.tool
def send_notification(customer_id: str, message: str) -> dict:
    """Send a notification to a customer."""
    return notifications.send(customer_id, message)

# Govern code you don't own
cx.enforce("stripe.charge", {"amount": 5000, "customer_id": "cust_123"})
stripe.charge(amount=5000, customer_id="cust_123")

# LLM guardrails — explicit, before any LLM call
cx.scan_prompt(messages)  # raises if PII/secrets/injection found
response = llm.invoke(messages)

# LLM telemetry — signed llm.call span with token counts
async with cx.llm_call(model="gpt-4o", messages=messages) as checked:
    response = await llm.ainvoke(checked)
    cx.record_llm_result(response)

# Run lifecycle — session tracking, policy sync, telemetry flush
async with cx.run():
    try:
        result = await my_agent(user_input)
    except cortexhub.PolicyViolationError as e:
        print(f"Blocked: {e.reasoning}")
    except cortexhub.ApprovalRequiredError as e:
        # Poll until reviewer decides, then re-run the exact tool call
        await cx.wait_for_approval(e)
        result = cx.retry_tool(e)  # exact original args — guaranteed hash match
    except cortexhub.ApprovalDeniedError as e:
        print(f"Denied: {e.reason}")
    except cortexhub.ThrottleError as e:
        print(f"Rate limited: {e.reasoning}")
    except cortexhub.CircuitBreakError as e:
        print(f"Circuit breaker: {e.reasoning}")
```
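A "guaranteed hash match" on retry depends on hashing the tool call deterministically, so the replayed call provably carries the original arguments. A sketch of such canonical argument hashing (an illustration of the idea, not the SDK's internals):

```python
import hashlib
import json

def args_hash(tool_name: str, args: dict) -> str:
    """Deterministically hash a tool call: canonical JSON with sorted keys
    and fixed separators, so semantically equal calls hash identically."""
    canonical = json.dumps({"tool": tool_name, "args": args},
                           sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The original call and its retry hash identically regardless of key order.
original = args_hash("process_payment", {"customer_id": "c1", "amount": "10.00"})
retried = args_hash("process_payment", {"amount": "10.00", "customer_id": "c1"})
assert original == retried
```

Sorting keys and pinning separators matters: two `json.dumps` calls with default settings can serialize the same dict differently, which would break the match.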
## What you get
| Capability | How |
|---|---|
| Policy enforcement | Block, require approval, throttle, circuit-break based on policies in the control plane |
| Guardrails | PII, secrets, prompt injection detection on LLM prompts |
| Cryptographic audit trail | Every tool call signed with Ed25519 — independently verifiable, no database needed |
| Human-in-the-loop approvals | cx.wait_for_approval() + cx.retry_tool() — deterministic, no LLM re-run |
| Telemetry | OTel-based spans for tool calls, LLM calls, run durations, token usage |
| EU AI Act alignment | Article 12 compliance export profiles |
| Offline capable | Policies cached locally, works without network |
| Multi-agent | Delegation chains, scope inheritance enforcement |
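"Independently verifiable, no database needed" means the records themselves carry the proof. As a rough illustration of tamper evidence, here is a hash-chained log where each entry commits to its predecessor; this uses a SHA-256 chain as a stand-in for the SDK's per-record Ed25519 signatures, and the record fields are made up for the example:

```python
import hashlib
import json

def append_record(log: list, record: dict) -> None:
    """Append a record linked to the previous entry by digest, so any
    later tampering breaks the chain."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = json.dumps({"prev": prev, **record}, sort_keys=True).encode()
    log.append({"record": record, "prev": prev,
                "digest": hashlib.sha256(body).hexdigest()})

def verify(log: list) -> bool:
    """Recompute every digest from scratch; any edit anywhere fails."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev, **entry["record"]}, sort_keys=True).encode()
        if entry["digest"] != hashlib.sha256(body).hexdigest():
            return False
        prev = entry["digest"]
    return True
```

With Ed25519 signatures instead of bare digests, a verifier additionally needs only the agent's public key, not trust in whoever stored the log.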
## Framework compatibility
`GovernedTool` (returned by `@cx.tool`) is a plain Python class with zero framework imports. It exposes `.name`, `.description`, `.args_schema`, `.invoke()`, and `.ainvoke()` — the duck-type interface accepted by LangGraph's `ToolNode`, LangChain's `bind_tools`, and similar framework tool consumers.

Works equally well with LangGraph, CrewAI, OpenAI Agents SDK, Claude Agents, raw Python, FastAPI, Flask, cron jobs, CLI scripts — anything.
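The duck-type contract can be demonstrated with a minimal tool and a framework-style consumer that touches only the listed surface. `EchoTool` and `run_tool` are hypothetical names for the example:

```python
import asyncio

class EchoTool:
    """Minimal object satisfying the duck-type tool interface:
    .name, .description, .args_schema, .invoke(), .ainvoke()."""
    name = "echo"
    description = "Echo the input back."
    args_schema = {"type": "object", "properties": {"text": {"type": "string"}}}

    def invoke(self, args: dict) -> str:
        return args["text"]

    async def ainvoke(self, args: dict) -> str:
        return self.invoke(args)

def run_tool(tool, args: dict):
    """Framework-style consumer: relies only on the duck-type surface,
    never on a shared base class or framework import."""
    assert tool.name and tool.description and tool.args_schema
    return tool.invoke(args)
```

Anything exposing that surface is interchangeable with the real thing from a consumer's point of view, which is why no framework adapter layer is needed.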
## API at a glance
```python
cx = cortexhub.init("agent-id")            # initialise

@cx.tool                                   # govern any callable
def my_tool(arg: str) -> str: ...

cx.enforce("tool_name", args)              # govern third-party/unwrappable code

cx.scan_prompt(messages)                   # guardrails — raises on violation
findings = cx.check_prompt(messages)       # guardrails — returns findings, never raises

async with cx.llm_call("gpt-4o", msgs):    # LLM telemetry + guardrails
    resp = await llm.ainvoke(msgs)
    cx.record_llm_result(resp)

async with cx.run(): ...                   # run lifecycle

await cx.wait_for_approval(e)              # async poll — raises on denial/timeout
cx.wait_for_approval_sync(e)               # sync poll
cx.retry_tool(e)                           # re-call with exact original args

# Multi-agent
cx_child = cortexhub.init(
    "child-agent",
    parent_agent_id=cx_parent.agent_id,
    delegation_chain=cx_parent.child_delegation_chain,
)
```
## Governance errors
| Exception | When |
|---|---|
| `PolicyViolationError` | Policy blocked execution |
| `ApprovalRequiredError` | Human approval needed |
| `ApprovalDeniedError` | Reviewer denied or request expired |
| `ThrottleError` | Rate limit triggered |
| `CircuitBreakError` | Circuit breaker open |
| `GuardrailViolationError` | PII/secrets/injection detected (block action) |

All inherit from `CortexHubError`.
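The shared base class means a single `except` clause can catch every governance outcome. A sketch of the hierarchy using the names from the table (the classes' actual attributes and constructors are not shown here and may differ):

```python
class CortexHubError(Exception):
    """Common base for all governance errors."""

class PolicyViolationError(CortexHubError): ...
class ApprovalRequiredError(CortexHubError): ...
class ApprovalDeniedError(CortexHubError): ...
class ThrottleError(CortexHubError): ...
class CircuitBreakError(CortexHubError): ...
class GuardrailViolationError(CortexHubError): ...

def classify(exc: Exception) -> str:
    """Handle any governance error via the common base class;
    re-raise anything that is not a governance outcome."""
    if isinstance(exc, CortexHubError):
        return type(exc).__name__
    raise exc
```

In practice you catch the specific subclasses you can act on (as in the full example above) and let `CortexHubError` be the catch-all for the rest.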
## Configuration
| Env var | Default | Description |
|---|---|---|
| `CORTEXHUB_API_KEY` | — | API key (required) |
| `CORTEXHUB_ALLOW_OFFLINE_ENFORCEMENT` | `false` | Use local policy cache when backend unavailable |
| `CORTEXHUB_POLICY_DIR` | `~/.cortexhub/policies/` | Local policy cache directory |
Privacy (whether telemetry is redacted or sent raw) is set per project in the CortexHub dashboard (Privacy: On / Off). The SDK uses that value; there is no env-var or code override.
## Development
```bash
git clone https://github.com/CortexHub-AI/cortexhub-python
cd cortexhub-python
uv sync --all-extras
uv run pytest
uv run ruff check src/
```
## Project details
### File details: cortexhub-0.3.0.tar.gz

- Size: 12.9 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.8

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e3043605842612eba95eef52cfb7575013773fd3d506487c2c4f5a85b383090b` |
| MD5 | `b96db7ce4d1c4ac1e617023dbc6f9327` |
| BLAKE2b-256 | `6ae7e164949f2e774828c0fc26818866b3c68bc849cc6bde5d45313c924adc89` |
### File details: cortexhub-0.3.0-py3-none-any.whl

- Size: 12.9 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.8

| Algorithm | Hash digest |
|---|---|
| SHA256 | `191670db4c1cacf96bc5f0dc58f047600b9755e14cdd96b77d6482816e39ff67` |
| MD5 | `510755299637b1521460fe2daea3990b` |
| BLAKE2b-256 | `05c4407b664d6988087156af7a4d124c7c5c26ee10136028aaa480f85d32c79e` |