
Sentinos Python SDK (control-plane client + runtime governance adapters)

Project description

sentinos (Python SDK)


Sentinos is the control plane for AI agents: runtime governance, deterministic policy outcomes, and trace-backed forensics.

This package is the ergonomic Python wrapper over sentinos-sdk-core, exposing operator-first clients for:

  • Kernel (execution boundary, autonomy sessions, escalations, traces)
  • Arbiter (policy lifecycle + deterministic outcomes)
  • Chronos (context snapshots and provenance)
  • Alerts, incidents, marketplace, and supporting workflows

Install

pip install sentinos

Optional extras:

pip install "sentinos[providers]"  # openai + anthropic + boto3 (bedrock)
pip install "sentinos[otel]"       # OpenTelemetry helpers
pip install "sentinos[langchain]"  # LangChain integration helpers
pip install "sentinos[grpc]"       # grpcio + protobuf (native gRPC protocol smoke/integration)

Configure

The SDK supports a “single URL” setup by default:

export SENTINOS_BASE_URL="https://api.sentinos.ai"
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"

Notes:

  • SENTINOS_ORG_ID is preferred; SENTINOS_TENANT_ID remains supported as an alias.
  • If you run services on separate hosts, set SENTINOS_KERNEL_URL, SENTINOS_ARBITER_URL, SENTINOS_CHRONOS_URL.
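The org-ID precedence described above can be sketched as a small resolver (illustrative only; the SDK performs this lookup internally):

```python
import os

def resolve_org_id(env=None):
    # SENTINOS_ORG_ID takes precedence; SENTINOS_TENANT_ID is honored as a
    # legacy alias only when the preferred variable is unset.
    env = os.environ if env is None else env
    return env.get("SENTINOS_ORG_ID") or env.get("SENTINOS_TENANT_ID")
```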

Quickstart

Environment-driven:

from sentinos import SentinosClient

client = SentinosClient.from_env()
print(client.kernel.get_runtime_metrics())
print(client.arbiter.governance_dashboard())

Explicit constructor (use org_id; tenant_id is an alias):

from sentinos import SentinosClient
from sentinos.auth.jwt import JWTAuth

client = SentinosClient.simple(
    base_url="https://api.sentinos.ai",
    org_id="acme",
    auth=JWTAuth(lambda: "<access-token>"),
)
print(client.kernel.get_runtime_metrics())
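JWTAuth accepts a zero-argument callable, as shown above. If fetching a fresh token is expensive, you can hand it a caching wrapper; a purely illustrative sketch (the `make_token_provider` helper is hypothetical, not part of the SDK):

```python
import time

def make_token_provider(fetch, ttl=300):
    # Cache the fetched token and re-fetch only after ttl seconds, so the
    # callable passed to JWTAuth doesn't hit the token endpoint per request.
    state = {"token": None, "expires": 0.0}

    def provider():
        now = time.time()
        if state["token"] is None or now >= state["expires"]:
            state["token"] = fetch()
            state["expires"] = now + ttl
        return state["token"]

    return provider
```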

Local Development

Standalone SDK repo:

python3 -m venv .venv && source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e ".[dev]"
tox -q

Monorepo development (when ../sdk-core/python exists):

python3 -m venv .venv && source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e ../sdk-core/python
pip install -e ".[dev]"
tox -q

Workforce Auth (Enterprise)

Enterprise workforce token exchange auth:

from sentinos import SentinosClient, WorkforceAssertion, WorkforceTokenProvider
from sentinos.auth.jwt import JWTAuth

workforce_provider = WorkforceTokenProvider.from_env(
    assertion_provider=lambda: WorkforceAssertion(
        external_subject="employee-123",
        email="employee@enterprise.example",
        groups=["AI_USERS"],
    ),
    idp_issuer="https://login.microsoftonline.com/tenant/v2.0",
)

client = SentinosClient(
    org_id="enterprise-org",
    base_url="https://api.sentinos.ai",
    auth=JWTAuth(workforce_provider),
)

Workforce token CLI bootstrap (helpful for enterprise workstation rollout and diagnostics):

sentinos-workforce-auth exchange \
  --controlplane-url "https://app.sentinoshq.com" \
  --org-id "<org-id>" \
  --idp-issuer "https://login.microsoftonline.com/<tenant>/v2.0" \
  --external-subject "<employee-sub>" \
  --assertion-token "<signed-idp-jwt>" \
  --audience "sentinos-workforce"

LLM / Agent Runtime Integration

Use LLMGuard when your application calls providers directly (OpenAI, Anthropic, LangChain tools, custom APIs) and you still want Sentinos policy decisions and decision traces for every interaction.

from sentinos import LLMGuard, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

result = guard.run(
    provider="openai",
    operation="chat.completions",
    model="gpt-4o-mini",
    request={"messages": [{"role": "user", "content": "Summarize this incident"}]},
    invoke=lambda: {"id": "resp-1", "model": "gpt-4o-mini"},
)
print(result.trace.trace_id, result.trace.decision)
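Conceptually, the guard evaluates policy before the provider call runs, and every call yields a trace whether or not the call was allowed. A minimal, self-contained sketch of that flow (not the SDK's implementation; all names here are hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional
import uuid

@dataclass
class Trace:
    trace_id: str
    decision: str

@dataclass
class GuardResult:
    trace: Trace
    response: Optional[Any]

def guarded_call(policy: Callable[[dict], str],
                 invoke: Callable[[], Any],
                 request: dict) -> GuardResult:
    # Evaluate policy first; the provider is only invoked on "allow",
    # and a trace is produced regardless of the decision.
    decision = policy(request)
    response = invoke() if decision == "allow" else None
    return GuardResult(trace=Trace(str(uuid.uuid4()), decision), response=response)
```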

For Open Responses-compatible providers/endpoints:

from sentinos import LLMGuard, SentinosClient, create_openresponses_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openresponses-1")
adapter = create_openresponses_adapter(guard=guard, client=openai_client)  # openai_client: your existing OpenAI client instance
result = adapter.create(
    model="gpt-4.1-mini",
    input=[{"type": "message", "role": "user", "content": "summarize recent incidents"}],
)
print(result.trace.trace_id, result.trace.decision, result.response.status)

Drop-in adapter classes are also available when you already have provider client objects:

from sentinos import LLMGuard, OpenAIChatCompletionsAdapter, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
adapter = OpenAIChatCompletionsAdapter.from_client(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Optional extras:

pip install "sentinos[providers]"   # openai + anthropic + boto3 (bedrock)
pip install "sentinos[bedrock]"     # boto3 only
pip install "sentinos[grpc]"        # grpcio + protobuf (native gRPC protocol smoke/integration)
pip install "sentinos[langchain]"   # langchain runtime integrations

Native Kernel gRPC protocol smoke test (demonstrating non-Go interoperability):

export SENTINOS_GRPC_TARGET="localhost:9091"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export SENTINOS_ORG_ID="<org-id>"
python examples/protocols/grpc_execute_smoke.py

Live end-to-end OpenAI governance suite (real traffic + alerts/incidents/traces/evidence):

export SENTINOS_E2E_AUTH_MODE=token
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export OPENAI_API_KEY="<openai-key>"
python examples/live_e2e/run_full_live_e2e.py

Suite details:

  • examples/live_e2e/README.md
  • examples/live_e2e/stage_00_bootstrap_account.py
  • examples/live_e2e/stage_01_setup.py
  • examples/live_e2e/stage_02_openai_traffic.py
  • examples/live_e2e/stage_03_triage.py
  • examples/live_e2e/stage_04_verify.py

Factory helpers for low-friction org onboarding:

from sentinos import LLMGuard, SentinosClient, create_openai_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
adapter = create_openai_chat_adapter(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Helper for OpenAI-style chat.completions.create signatures:

from sentinos import LLMGuard, SentinosClient, guard_openai_chat_completion

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

def fake_create(*, model, messages, temperature=0.2):
    return {"id": "chat-1", "model": model, "messages": messages}

result = guard_openai_chat_completion(
    guard=guard,
    create=fake_create,
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(result.response["id"])

OpenRouter factories (provider identity: openrouter):

from sentinos import LLMGuard, SentinosClient, create_openrouter_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openrouter-1")
adapter = create_openrouter_chat_adapter(
    guard=guard,
    api_key="<OPENROUTER_API_KEY>",
    http_referer="https://console.example.com",
    x_title="Sentinos Console",
)
result = adapter.create(model="openai/gpt-4.1-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Amazon Bedrock native Converse adapters (provider identity: bedrock):

from sentinos import LLMGuard, SentinosClient, create_bedrock_converse_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-bedrock-1")
adapter = create_bedrock_converse_adapter(guard=guard, region_name="us-east-1")
result = adapter.converse(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "summarize incidents"}]}],
)
print(result.trace.trace_id, result.trace.decision)

Quality Gates

Local package checks:

tox -q

Monorepo-only checks (if you have the full Sentinos repo):

scripts/quality/check_python_sdk_quality.sh
scripts/quality/check_sdk_docs_examples.sh
scripts/quality/check_sdk_parity_matrix.py

Reference

Trace Forensics and Retention Example

from sentinos import SentinosClient

client = SentinosClient.from_env(org_id="acme")
trace_id = "11111111-1111-1111-1111-111111111111"

ledger = client.traces.ledger_verify(trace_id)
replay = client.traces.replay_trace(trace_id, request={"include_explain": True})
retention = client.traces.get_retention_policy()
dry_run = client.traces.enforce_retention(request={"dry_run": True})
distributed = client.traces.distributed_trace_summaries(limit=25)

print(ledger.verified, replay.drift_detected, retention.trace_days, dry_run.traces_affected, len(distributed))
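`ledger_verify` checks trace integrity. One common construction behind such ledgers is a hash chain: each entry's digest commits to the previous digest, so altering any entry invalidates every digest after it. A conceptual sketch of that idea (not Sentinos's actual scheme):

```python
import hashlib
import json

def chain_digests(entries):
    # Each digest covers the previous digest plus the entry's canonical JSON
    # form; tampering with one entry changes all subsequent digests.
    digests, prev = [], b""
    for entry in entries:
        prev = hashlib.sha256(prev + json.dumps(entry, sort_keys=True).encode()).digest()
        digests.append(prev.hex())
    return digests

def verify_chain(entries, expected_digests):
    # Recompute the chain and compare against the recorded digests.
    return chain_digests(entries) == expected_digests
```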

Download files

Download the file for your platform.

Source Distribution

sentinos-0.1.6.tar.gz (1.3 MB)

Uploaded Source

Built Distribution


sentinos-0.1.6-py3-none-any.whl (79.8 kB)

Uploaded Python 3

File details

Details for the file sentinos-0.1.6.tar.gz.

File metadata

  • Download URL: sentinos-0.1.6.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sentinos-0.1.6.tar.gz:

  • SHA256: 9853389d8a2e1e49f27a281a6b128fe807e266db775ef5086e8e9040e6489534
  • MD5: 2b584536994ce06c5a7a013d0c4116cd
  • BLAKE2b-256: 3f924ceedef35a3d46a45b857720c52ed6cd419c9429c3c4a6713e2f881bca6c


Provenance

The following attestation bundles were made for sentinos-0.1.6.tar.gz:

Publisher: publish.yml on SentinosHQ/sentinos-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file sentinos-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: sentinos-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 79.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sentinos-0.1.6-py3-none-any.whl:

  • SHA256: 58af0af513e3b8c3857264d647ae84e1ce2b080226c3ab907c570e24000e5cb9
  • MD5: d063e0d354d279a072e6d38a346ee20d
  • BLAKE2b-256: 161c86260592a9d68436cfe36a165f09e1b981927ca0a67894de24def30505fa


Provenance

The following attestation bundles were made for sentinos-0.1.6-py3-none-any.whl:

Publisher: publish.yml on SentinosHQ/sentinos-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
