CortexHub Python SDK - Runtime governance layer for AI Agents

Runtime Governance for AI Agents - Policy enforcement, PII/secrets detection, complete audit trails with OpenTelemetry.

Installation

# Core SDK
pip install cortexhub

# With framework support (choose one or more)
pip install cortexhub[langgraph]      # LangGraph
pip install cortexhub[crewai]         # CrewAI
pip install cortexhub[openai-agents]  # OpenAI Agents SDK
pip install cortexhub[claude-agents]  # Claude Agent SDK

# All frameworks (for development)
pip install cortexhub[all]

Python support: 3.11–3.12. Python 3.13 is not supported.

Quick Start

from cortexhub import init, Framework

# Initialize CortexHub FIRST, before importing your framework
cortex = init(
    agent_id="customer_support_agent",
    framework=Framework.LANGGRAPH,  # or CREWAI, OPENAI_AGENTS, CLAUDE_AGENTS
    enable_mcp=True,  # default; disable if you don't use MCP
)

# Now import and use your framework
from langgraph.prebuilt import create_react_agent

# Continue with your LangGraph setup...

Supported Frameworks

Framework        Enum Value                 Install
LangGraph        Framework.LANGGRAPH        pip install cortexhub[langgraph]
CrewAI           Framework.CREWAI           pip install cortexhub[crewai]
OpenAI Agents    Framework.OPENAI_AGENTS    pip install cortexhub[openai-agents]
Claude Agents    Framework.CLAUDE_AGENTS    pip install cortexhub[claude-agents]

Tracing Coverage

All frameworks emit run.started and run.completed/run.failed for each run. Tool spans (tool.invoke) and model spans (llm.call) vary by SDK:

  • LangGraph: tool calls via BaseTool.invoke, LLM calls via BaseChatModel.invoke/ainvoke
  • CrewAI: tool calls via CrewStructuredTool.invoke/BaseTool.run, LLM calls via LiteLLM and BaseLLM.call/acall
  • OpenAI Agents: tool calls via function_tool, LLM calls via OpenAIResponsesModel and OpenAIChatCompletionsModel
  • Claude Agents: tool calls via @tool and built-in tool hooks; LLM calls run inside the Claude Code CLI and are not intercepted by the Python SDK

Configuration

# Required: API key
export CORTEXHUB_API_KEY=ch_live_...
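Since a missing or malformed key only surfaces at init time, it can help to fail fast at startup. A minimal sketch (the helper name and the `ch_` prefix check are illustrative; `init()` presumably reads the variable itself):

```python
import os

def require_api_key(env=None) -> str:
    """Return the CortexHub API key, failing fast with a clear message.

    A convenience check for startup scripts. The 'ch_' prefix check
    mirrors the documented key format (e.g. ch_live_...).
    """
    env = os.environ if env is None else env
    key = env.get("CORTEXHUB_API_KEY", "")
    if not key:
        raise RuntimeError(
            "CORTEXHUB_API_KEY is not set; export it before starting the agent."
        )
    if not key.startswith("ch_"):
        raise ValueError(
            "CORTEXHUB_API_KEY does not look like a CortexHub key ('ch_' prefix expected)."
        )
    return key
```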

Features

  • Policy Enforcement - Cloud configuration, local evaluation
  • Decision Signing - Ed25519 cryptographic signature on every governance decision; independently verifiable by anyone with the public key — no database access required
  • PII Detection - 50+ entity types (full coverage on first run)
  • Secrets Detection - 30+ secret types
  • Configurable Guardrails - Select specific PII/secret types to redact
  • Custom Patterns - Add company-specific regex patterns
  • OpenTelemetry - Industry-standard observability
  • Framework Adapters - Automatic interception for all major frameworks
  • MCP Interception - Governs MCP tool calls without framework-specific hooks
  • Privacy Mode - Metadata-only by default, safe for production
  • Offline Policy Cache - Enforce last synced policies without backend connectivity

Privacy Modes

# Production (default) - only metadata sent
cortex = init(agent_id="...", framework=..., privacy=True)
# Sends: tool names, arg schemas, PII types detected
# Never: raw values, prompts, responses

# Development - full data for testing policies  
cortex = init(agent_id="...", framework=..., privacy=False)
# Also sends: raw args, results, prompts (for policy testing)

MCP Interception

If your agent uses MCP servers, MCP interception is enabled by default:

import cortexhub

cortex = cortexhub.init(
    agent_id="my-agent",
    framework=cortexhub.Framework.LANGGRAPH,
    enable_mcp=True,  # default
)

To enable MCP interception without a framework adapter:

cortex = cortexhub.CortexHub(api_key="...")
cortex.enable_mcp()

Offline Policy Cache

Persist policies locally to keep enforcement running if the backend is unreachable:

export CORTEXHUB_ALLOW_OFFLINE_ENFORCEMENT=true
export CORTEXHUB_POLICY_DIR="$HOME/.cortexhub/policies"

When enabled, the SDK loads the most recent policy bundle from disk if it cannot reach the backend during initialization.
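The fallback behavior can be sketched as "pick the newest bundle on disk." This is illustrative only — the actual bundle file format and selection logic are internal to the SDK; the *.json layout below is an assumption:

```python
import json
from pathlib import Path

def load_latest_bundle(policy_dir: str):
    """Illustrative sketch of the offline-cache fallback: return the most
    recently modified policy bundle in CORTEXHUB_POLICY_DIR, or None if
    nothing has been cached yet (in which case offline enforcement
    cannot proceed)."""
    bundles = sorted(
        Path(policy_dir).glob("*.json"),
        key=lambda p: p.stat().st_mtime,  # newest bundle last
    )
    if not bundles:
        return None
    return json.loads(bundles[-1].read_text())
```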

Policy Enforcement

Policies are created in the CortexHub dashboard from detected risks. The SDK automatically fetches and enforces them:

from cortexhub.errors import PolicyViolationError, ApprovalRequiredError

# Policies are fetched automatically during init()
# If policies exist, enforcement mode is enabled

try:
    agent.run("Process a $10,000 refund")
except PolicyViolationError as e:
    print(f"Blocked by policy: {e.policy_name}")
    print(f"Reason: {e.reasoning}")
except ApprovalRequiredError as e:
    print(f"\n⏸️  APPROVAL REQUIRED")
    print(f"   Approval ID: {e.approval_id}")
    print(f"   Tool: {e.tool_name}")
    print(f"   Reason: {e.reason}")
    print(f"   Expires: {e.expires_at}")
    print(f"\n   Decision endpoint: {e.decision_endpoint}")
    print(f"   Configure a webhook to receive the approval.decisioned event")

Guardrail Configuration

Guardrails control what happens after detection. On first run, the SDK detects all supported PII types. In the dashboard, you choose which detected types to act on (redact/block/allow) for that agent.

Configure in the dashboard:

  1. Select types to act on: Choose specific PII types (email, phone, etc.)
  2. Add custom patterns: Regex for company-specific data (employee IDs, etc.)
  3. Choose action: Redact, block, or monitor only

The SDK applies your configuration automatically for subsequent runs:

# With guardrail policy active:
# Input prompt: "Contact john@email.com about employee EMP-123456"
# After redaction: "Contact [REDACTED-EMAIL_ADDRESS] about employee [REDACTED-CUSTOM_EMPLOYEE_ID]"
# Only configured types are redacted
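The redaction step above can be sketched with plain regexes. This is not the SDK's implementation — the built-in EMAIL_ADDRESS pattern and the EMP-###### employee-ID pattern below are stand-ins for one dashboard-selected PII type and one custom pattern:

```python
import re

# Stand-ins for dashboard configuration: one built-in PII type and one
# custom company pattern (EMP-###### employee IDs, hypothetical).
PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CUSTOM_EMPLOYEE_ID": re.compile(r"\bEMP-\d{6}\b"),
}

def redact(text: str) -> str:
    # Replace each configured type with a labeled placeholder,
    # matching the [REDACTED-<TYPE>] format shown above.
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name}]", text)
    return text
```

Only the types present in PATTERNS are touched; anything else passes through unchanged, mirroring "only configured types are redacted."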

Important: Initialization Order

Always initialize CortexHub FIRST, before importing your framework:

# ✅ CORRECT
from cortexhub import init, Framework
cortex = init(agent_id="my_agent", framework=Framework.LANGGRAPH)

from langgraph.prebuilt import create_react_agent  # Import AFTER init

# ❌ WRONG
from langgraph.prebuilt import create_react_agent  # Framework imported first
from cortexhub import init, Framework
cortex = init(...)  # Too late!

This ensures:

  1. CortexHub sets up OpenTelemetry before frameworks that also use it
  2. Framework decorators/classes are properly wrapped

Architecture

Agent Decides → [CortexHub] → Agent Executes
                    │
              ┌─────┴─────┐
              │           │
         Policy      Guardrails
         Engine      (PII/Secrets)
              │           │
              └─────┬─────┘
                    │
            Decision Signing
            (Ed25519, per-span)
            Signed in your env
            before leaving it
                    │
              OpenTelemetry
               (to backend)

Every governance decision is signed inside your environment, before the span reaches CortexHub. The private key never leaves your process. The public key is registered with the backend and available at a public endpoint — so any auditor can independently verify any decision without database access.
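The signing model can be sketched with the cryptography package. The decision payload below is hypothetical — the SDK's real span format and key handling are internal — but the Ed25519 sign/verify round trip is the same idea:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical decision payload; the SDK's actual span format differs.
decision = json.dumps(
    {"tool": "process_refund", "verdict": "blocked", "policy": "refund_limit"},
    sort_keys=True,  # canonical serialization so signatures are reproducible
).encode()

# Signing happens inside your environment; the private key never leaves it.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(decision)

# An auditor holding only the public key can verify the decision
# independently, with no database access.
public_key = private_key.public_key()
try:
    public_key.verify(signature, decision)
    print("decision signature valid")
except InvalidSignature:
    print("decision signature INVALID")
```

Verification raises InvalidSignature if either the payload or the signature was altered after signing, which is what makes the audit trail tamper-evident.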

Development

cd python

# Install with all frameworks
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .

License

MIT

Project details

Current version: 0.2.6, published as a source distribution (cortexhub-0.2.6.tar.gz) and a py3 wheel (cortexhub-0.2.6-py3-none-any.whl).