
Execlave Python SDK

Official Python SDK for the Execlave AI Governance Platform. Provides pre-execution policy enforcement, tracing, agent registration, PII scrubbing, kill-switch support, and OpenTelemetry export.

Requires Python 3.11+. Licensed under MIT.

Framework integrations — drop in one callback/processor/helper for LangChain, OpenAI Agents SDK, or CrewAI. See the full docs or get an API key.


Installation

pip install execlave-sdk

With OpenTelemetry support:

pip install "execlave-sdk[otel]"

Quick Start

The canonical request lifecycle is register → enforce → call LLM → trace. enforce_policy is what blocks bad requests; tracing alone only logs them after the fact.

from execlave import Execlave, PolicyBlockedError, AgentPausedError

ag = Execlave(
    api_key="exe_prod_your_key_here",     # or set EXECLAVE_API_KEY env var
    base_url="https://api.execlave.com",
    environment="production",
)

# Register the agent once on startup (idempotent)
agent = ag.register_agent(
    agent_id="my-assistant",
    name="Customer Support Bot",
    description="Handles tier-1 support queries",
    type="chatbot",
    platform="custom",
    tags=["support", "production"],
)


def answer(question: str) -> str:
    trace = ag.start_trace(agent_id="my-assistant")
    trace.set_input(question)

    try:
        # Pre-execution policy enforcement. Synchronously checks every
        # policy you've configured for this agent. Raises PolicyBlockedError
        # if any policy with enforcement_mode='block' fires.
        ag.enforce_policy(agent_id="my-assistant", input=question)
    except PolicyBlockedError as e:
        trace.finish(status="error", error_type="PolicyBlockedError", error_message=str(e))
        return "Request blocked by security policy."
    except AgentPausedError:
        trace.finish(status="error", error_type="AgentPausedError")
        return "Service temporarily unavailable."

    response = llm.invoke(question)               # your LLM call
    trace.set_output(response).set_model("gpt-4").finish()
    return response


print(answer("How do I reset my password?"))

Agent Registration

Register agents to monitor them from the Execlave dashboard:

agent = ag.register_agent(
    agent_id="order-processor",
    name="Order Processor",
    description="Processes and validates customer orders",
    type="autonomous",        # chatbot | copilot | autonomous | workflow
    platform="custom",        # custom | openai | anthropic | langchain | ...
    tags=["orders", "production"],
)

# Check agent status (active / paused / etc.)
print(agent.status)

Policy Enforcement

enforce_policy is a synchronous check against the policies you've configured in the dashboard. Call it before every LLM or tool invocation. Behavior depends on each policy's enforcement_mode:

| Mode | What enforce_policy does |
| --- | --- |
| block | Raises PolicyBlockedError (with the violations list) |
| monitor / warn | Returns {"allowed": True, "warnings": [...]} — caller proceeds |
| require_approval | Blocks the call while polling for human approval (returns when granted) |

from execlave import PolicyBlockedError

try:
    result = ag.enforce_policy(
        agent_id="my-assistant",
        input=user_message,
        environment="production",        # optional
        metadata={"user_id": "u123"},    # optional
        estimated_cost=0.02,             # optional — for cost_limit policies
        tools=["search", "email"],       # optional — for access_control policies
    )
    # result["allowed"] is True. Check result.get("warnings") for non-blocking signals.
except PolicyBlockedError as e:
    for v in e.violations:
        print(v["policyType"], v["message"])

Important: A policy must be configured with enforcement_mode = block in the dashboard to actually block. Policies in monitor or warn mode produce warnings on the result but never raise.
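The per-mode behavior can be sketched in plain Python. This is an illustration of the dispatch logic described above, not the SDK's internals; the class and function names here are hypothetical.

```python
# Illustrative sketch of mode-dependent enforcement dispatch.
# PolicyBlockedError and the decision/violation shapes are simplified stand-ins.
class PolicyBlockedError(Exception):
    def __init__(self, violations):
        super().__init__(f"{len(violations)} policy violation(s)")
        self.violations = violations

def apply_decisions(decisions):
    """decisions: list of {"mode": ..., "violation": dict or None}."""
    blocking, warnings = [], []
    for d in decisions:
        if d["violation"] is None:
            continue
        if d["mode"] == "block":
            blocking.append(d["violation"])
        elif d["mode"] in ("monitor", "warn"):
            warnings.append(d["violation"])
        # require_approval omitted: it waits on a human, not a local decision
    if blocking:
        raise PolicyBlockedError(blocking)
    return {"allowed": True, "warnings": warnings}
```

Note that warn-mode violations surface only on the returned dict, so callers that ignore the result silently lose those signals.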

Tracing

Decorator

The simplest way to trace function calls:

@ag.trace
def process_order(order_data: dict) -> dict:
    result = llm.invoke(json.dumps(order_data))
    return json.loads(result)

@ag.trace only records the call — it does not run policy enforcement. To block bad inputs, call ag.enforce_policy(...) inside the function body before invoking the LLM.
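The recording behavior of such a decorator can be sketched in pure Python. This is a minimal illustration of the pattern (capture input, output, duration, and errors), not the SDK's actual implementation; `trace` and `record` here are hypothetical names.

```python
import functools
import time

def trace(record):
    """Hypothetical tracing decorator: appends one record dict per call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"function": fn.__name__, "input": (args, kwargs)}
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
                entry.update(status="success", output=result)
                return result
            except Exception as e:
                entry.update(status="error", error_type=type(e).__name__)
                raise
            finally:
                entry["duration_ms"] = (time.monotonic() - start) * 1000
                record.append(entry)
        return wrapper
    return decorator

records = []

@trace(records)
def double(x):
    return 2 * x
```

The finally clause is what guarantees a trace is recorded even when the wrapped function raises.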

Context Manager

For more control over trace metadata:

with ag.start_trace(agent_id="my-assistant", session_id="sess_abc") as trace:
    trace.set_input({"question": "What is the refund policy?"})

    result = llm.invoke("What is the refund policy?")

    trace.set_output({"answer": result})
    trace.set_model("gpt-4o")
    trace.set_tokens(input=150, output=320)
    trace.set_cost(0.0045)

Manual Trace

trace = ag.start_trace(agent_id="my-assistant")
trace.set_input(user_query)

try:
    response = llm.invoke(user_query)
    trace.set_output(response)
    trace.finish(status="success")
except Exception as e:
    trace.finish(status="error", error_message=str(e), error_type=type(e).__name__)
    raise

Trace Fields

| Method | Description |
| --- | --- |
| set_input(data) | Input data (auto-serialized) |
| set_output(data) | Output data (auto-serialized) |
| set_model(name) | Model name (e.g., "gpt-4o") |
| set_tokens(input, output) | Token counts |
| set_cost(amount) | Cost in USD |
| set_duration(ms) | Override auto-calculated duration (ms) |
| add_metadata(dict) | Merge additional metadata |
| add_tags(list) | Append tags (deduplicated) |
| finish(status, error_message, error_type) | Finalize trace and submit to the flush queue |

status values: "success" (default), "error", "timeout". All setter methods are chainable.
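Chainability comes from each setter returning self, the standard builder pattern. A minimal sketch (field names mirror the table, but this is not the SDK's Trace class):

```python
# Illustrative chainable-setter pattern: mutate state, then return self.
class TraceFields:
    def __init__(self):
        self.data = {}

    def set_model(self, name):
        self.data["model"] = name
        return self

    def set_tokens(self, input, output):
        self.data["tokens"] = {"input": input, "output": output}
        return self

    def set_cost(self, amount):
        self.data["cost"] = amount
        return self

t = TraceFields().set_model("gpt-4o").set_tokens(input=150, output=320).set_cost(0.0045)
```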

Privacy & PII Scrubbing

Built-in client-side PII scrubbing before data leaves your infrastructure:

ag = Execlave(
    api_key="exe_prod_xxx",
    privacy={
        "enabled": True,                       # turn the feature on
        "scrub_fields": ["input", "output"],   # fields to scan
        "hash_pii": True,                      # include short SHA-256 hashes in metadata
    },
)

Detected PII types: email addresses, SSNs, credit card numbers, US phone numbers, IP addresses, API keys.
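The scrub-then-hash mechanism can be illustrated with two of the listed PII types. This is a simplified sketch under assumed patterns; the SDK's real detectors and redaction format may differ.

```python
import hashlib
import re

# Simplified patterns for two of the listed PII types (email, US SSN).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text, hash_pii=True):
    """Redact matches in place; optionally keep short SHA-256 hashes as metadata."""
    hashes = {}
    for kind, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            if hash_pii:
                digest = hashlib.sha256(match.encode()).hexdigest()[:12]
                hashes.setdefault(kind, []).append(digest)
            text = text.replace(match, f"[REDACTED_{kind.upper()}]")
    return text, hashes

clean, meta = scrub("Contact jane@example.com, SSN 123-45-6789")
```

Hashing lets you correlate the same redacted value across traces without ever transmitting the raw PII.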

Client-side Injection Scoring

When enable_injection_scan=True (the default), the SDK runs a regex-based prompt-injection check on the trace's input and annotates the trace with the detected risk level and matched patterns:

ag = Execlave(
    api_key="exe_prod_xxx",
    enable_injection_scan=True,
)

The scan attaches a metadata.injection_scan block to the trace (with risk_level and patterns_matched) so detections show up in the dashboard.

This option does not block LLM calls — it is a tagging/telemetry feature. To actually prevent execution when injection is detected, configure an injection_scan policy with enforcement_mode = block in the dashboard and call ag.enforce_policy(...) before your LLM call. See Policy Enforcement.

Detected patterns include "ignore previous instructions", jailbreak attempts, system-prompt extraction, and other common prefixes.
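The shape of the resulting metadata.injection_scan block can be illustrated with a toy scanner. The two patterns below are simplified examples; the SDK's actual pattern set and risk scoring are richer.

```python
import re

# Simplified illustration of regex-based injection scoring.
INJECTION_PATTERNS = [
    ("ignore_previous", re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.I)),
    ("system_prompt_extraction", re.compile(r"(reveal|show|print).*system\s+prompt", re.I)),
]

def injection_scan(text):
    """Return a metadata block like the one the SDK attaches to a trace."""
    matched = [name for name, pat in INJECTION_PATTERNS if pat.search(text)]
    risk = "high" if matched else "none"
    return {"risk_level": risk, "patterns_matched": matched}
```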

Kill Switch / Pause Support

Execlave supports remote agent pausing via the dashboard. Once paused, every new trace or enforce call raises AgentPausedError:

from execlave import AgentPausedError

try:
    result = answer("Process this order")
except AgentPausedError:
    result = "Service temporarily unavailable — agent paused by admin."

The SDK polls for status changes in the background (configurable interval) and connects via Socket.IO when available for sub-second propagation.

OpenTelemetry Integration

Export Execlave traces as OpenTelemetry spans for unified observability:

from execlave import Execlave

ag = Execlave(
    api_key="exe_prod_xxx",
    mode="otlp",
    otlp_endpoint="http://localhost:4318",   # your OTel collector
)

Requires the otel extra: pip install execlave-sdk[otel].

Configuration

Constructor Options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str | EXECLAVE_API_KEY env | Your Execlave API key |
| base_url | str | https://api.execlave.com | Execlave API URL |
| environment | str | "production" | Environment tag |
| async_mode | bool | True | Non-blocking trace ingestion |
| mode | str | "native" | "native" or "otlp" |
| otlp_endpoint | str | None | OTel collector endpoint (required when mode="otlp") |
| batch_size | int | 100 | Traces per flush batch |
| flush_interval_seconds | int | 10 | Seconds between background flushes |
| debug | bool | False | Enable debug logging |
| privacy | dict | {} | PII scrubbing config (see Privacy section) |
| enable_control_channel | bool | True | Enable kill-switch polling + WebSocket |
| enable_injection_scan | bool | True | Tag traces with client-side injection signals (no block) |
| enforcement_on_outage | str | "fail_open" | "fail_open" allows requests when the API is down; "fail_closed" raises EnforcementUnavailableError |
| policy_cache_ttl_seconds | int | 60 | TTL for cached policy decisions |
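The fail_open vs fail_closed distinction can be sketched with a simple failure counter. This is an illustration of the documented semantics (the threshold of 3 matches the circuit-breaker behavior described under Error Handling), not the SDK's internal code.

```python
# Illustrative outage handling for enforcement_on_outage.
class EnforcementUnavailableError(Exception):
    pass

class OutageGate:
    def __init__(self, on_outage="fail_open", threshold=3):
        self.on_outage = on_outage
        self.threshold = threshold
        self.failures = 0

    def record_failure(self):
        """Called when an enforcement API call fails."""
        self.failures += 1
        if self.failures >= self.threshold and self.on_outage == "fail_closed":
            raise EnforcementUnavailableError("governance API unreachable")
        # fail_open (or breaker not yet open): let the request through, degraded
        return {"allowed": True, "degraded": True}

    def record_success(self):
        self.failures = 0  # reset the breaker
```

fail_open favors availability (requests proceed unchecked during an outage); fail_closed favors safety (requests stop until the API recovers).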

Environment Variables

| Variable | Description |
| --- | --- |
| EXECLAVE_API_KEY | API key (alternative to constructor) |
| EXECLAVE_BASE_URL | Base URL (alternative to constructor) |

Error Handling

from execlave import (
    ExeclaveError,
    ExeclaveAuthError,
    PolicyBlockedError,
    AgentPausedError,
    EnforcementUnavailableError,
)

def governed_answer(user_message: str) -> str:
    try:
        ag.enforce_policy(agent_id="my-assistant", input=user_message)
        response = llm.invoke(user_message)   # your LLM call + tracing
        return response
    except PolicyBlockedError as e:
        # A block-mode policy fired. e.violations is a list of dicts with
        # policyType, message, severity.
        return "Blocked by security policy."
    except AgentPausedError:
        # Agent paused via kill switch.
        return "Service temporarily unavailable."
    except EnforcementUnavailableError:
        # Only raised when enforcement_on_outage='fail_closed' AND the API
        # is unreachable for 3+ consecutive attempts (circuit breaker open).
        return "Governance system unavailable."
    except ExeclaveAuthError:
        raise  # Misconfigured API key — fail loud.
    except ExeclaveError as e:
        print(f"SDK error: {e}")
        raise

Async Trace Buffer

The SDK uses a non-blocking circular buffer (max 10,000 traces) with a background flush thread. Traces are batched and sent to the Execlave API automatically.

# Manual flush (e.g., before shutdown)
ag.flush()

# Graceful shutdown — flushes remaining traces and joins background threads
ag.shutdown()
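The buffer-and-batch behavior described above can be sketched with a bounded deque and a flush loop. The capacity, batch size, and send callback here are illustrative parameters, not the SDK's internals.

```python
import threading
from collections import deque

# Minimal sketch of a bounded trace buffer with batched flushing.
class TraceBuffer:
    def __init__(self, send, max_size=10_000, batch_size=100):
        self.buffer = deque(maxlen=max_size)  # oldest traces drop when full
        self.send = send                      # callback that ships one batch
        self.batch_size = batch_size
        self.lock = threading.Lock()

    def add(self, trace):
        with self.lock:
            self.buffer.append(trace)

    def flush(self):
        """Drain the buffer in batches of up to batch_size."""
        while True:
            with self.lock:
                if not self.buffer:
                    return
                n = min(self.batch_size, len(self.buffer))
                batch = [self.buffer.popleft() for _ in range(n)]
            self.send(batch)  # network I/O happens outside the lock

sent = []
buf = TraceBuffer(send=sent.append, batch_size=2)
for i in range(5):
    buf.add({"trace": i})
buf.flush()
```

A real implementation would run flush() on a background thread every flush_interval_seconds; the bounded deque is what keeps ingestion non-blocking under backpressure.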

Development

# Clone the repo
git clone https://github.com/execlave/sdk-python.git
cd sdk-python

# Install dev dependencies
pip install -e ".[test]"

# Run tests
pytest                    # 130 tests
pytest --cov=execlave     # With coverage

# Type checking
mypy execlave/

Legal

By using this SDK, you agree to the Execlave Terms of Service.

License

MIT — see LICENSE for details.
