sentinos (Python SDK)

Sentinos is the control plane for AI agents: runtime governance, deterministic policy outcomes, and trace-backed forensics.

This package is the ergonomic Python wrapper over sentinos-sdk-core, exposing operator-first clients for:

  • Kernel (execution boundary, autonomy sessions, escalations, traces)
  • Arbiter (policy lifecycle + deterministic outcomes)
  • Chronos (context snapshots and provenance)
  • Alerts, incidents, marketplace, and supporting workflows

Install

pip install sentinos

Optional extras:

pip install "sentinos[providers]"  # openai + anthropic + boto3 (bedrock)
pip install "sentinos[otel]"       # OpenTelemetry helpers
pip install "sentinos[langchain]"  # LangChain integration helpers
pip install "sentinos[grpc]"       # grpcio + protobuf (native gRPC protocol smoke/integration)

Configure

The SDK supports a “single URL” setup by default:

export SENTINOS_BASE_URL="https://api.sentinos.ai"
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"

Notes:

  • SENTINOS_ORG_ID is preferred; SENTINOS_TENANT_ID remains supported as an alias.
  • If you run services on separate hosts, set SENTINOS_KERNEL_URL, SENTINOS_ARBITER_URL, SENTINOS_CHRONOS_URL.
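For example, a split-host deployment might export per-service URLs alongside the shared credentials (the hostnames below are illustrative, not real endpoints):

```shell
# Per-service endpoints (replace with your own hosts)
export SENTINOS_KERNEL_URL="https://kernel.internal.example.com"
export SENTINOS_ARBITER_URL="https://arbiter.internal.example.com"
export SENTINOS_CHRONOS_URL="https://chronos.internal.example.com"

# Shared credentials, as in the single-URL setup
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"
```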

Quickstart

Environment-driven:

from sentinos import SentinosClient

client = SentinosClient.from_env()
print(client.kernel.get_runtime_metrics())
print(client.arbiter.governance_dashboard())

Explicit constructor (use org_id; tenant_id is an alias):

from sentinos import SentinosClient
from sentinos.auth.jwt import JWTAuth

client = SentinosClient.simple(
    base_url="https://api.sentinos.ai",
    org_id="acme",
    auth=JWTAuth(lambda: "<access-token>"),
)
print(client.kernel.get_runtime_metrics())

Workforce Auth (Enterprise)

Authenticate via enterprise workforce token exchange:

from sentinos import SentinosClient, WorkforceAssertion, WorkforceTokenProvider
from sentinos.auth.jwt import JWTAuth

workforce_provider = WorkforceTokenProvider.from_env(
    assertion_provider=lambda: WorkforceAssertion(
        external_subject="employee-123",
        email="employee@enterprise.example",
        groups=["AI_USERS"],
    ),
    idp_issuer="https://login.microsoftonline.com/tenant/v2.0",
)

client = SentinosClient(
    org_id="enterprise-org",
    base_url="https://api.sentinos.ai",
    auth=JWTAuth(workforce_provider),
)

Workforce token CLI bootstrap (helpful for enterprise workstation rollout and diagnostics):

sentinos-workforce-auth exchange \
  --controlplane-url "https://app.sentinoshq.com" \
  --org-id "<org-id>" \
  --idp-issuer "https://login.microsoftonline.com/<tenant>/v2.0" \
  --external-subject "<employee-sub>" \
  --assertion-token "<signed-idp-jwt>" \
  --audience "sentinos-workforce"

LLM / Agent Runtime Integration

Use LLMGuard when your application calls providers directly (OpenAI, Anthropic, LangChain tools, custom APIs) but you still want Sentinos policy decisions and decision traces for every interaction.

from sentinos import LLMGuard, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

result = guard.run(
    provider="openai",
    operation="chat.completions",
    model="gpt-4o-mini",
    request={"messages": [{"role": "user", "content": "Summarize this incident"}]},
    invoke=lambda: {"id": "resp-1", "model": "gpt-4o-mini"},
)
print(result.trace.trace_id, result.trace.decision)
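If you want to branch on the policy outcome before using a response, you can inspect result.trace.decision. A minimal sketch, assuming the decision is a plain string and that "allow" denotes a permitted call (both are assumptions — check your deployment's decision vocabulary; the Trace/GuardResult classes below are stand-ins for the SDK's real objects):

```python
from dataclasses import dataclass

@dataclass
class Trace:          # stand-in for the SDK's trace object
    trace_id: str
    decision: str

@dataclass
class GuardResult:    # stand-in for the value returned by guard.run(...)
    trace: Trace
    response: dict

def use_if_allowed(result):
    """Return the response only when the policy decision permits it."""
    # "allow" is an assumed literal, not confirmed by the SDK docs.
    if result.trace.decision == "allow":
        return result.response
    raise PermissionError(f"blocked by policy (trace {result.trace.trace_id})")

ok = GuardResult(Trace("t-1", "allow"), {"id": "resp-1"})
print(use_if_allowed(ok))  # → {'id': 'resp-1'}
```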

For Open Responses-compatible providers/endpoints:

from openai import OpenAI
from sentinos import LLMGuard, SentinosClient, create_openresponses_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openresponses-1")
openai_client = OpenAI()  # any Open Responses-compatible client object
adapter = create_openresponses_adapter(guard=guard, client=openai_client)
result = adapter.create(
    model="gpt-4.1-mini",
    input=[{"type": "message", "role": "user", "content": "summarize recent incidents"}],
)
print(result.trace.trace_id, result.trace.decision, result.response.status)

Drop-in adapter classes are also available when you already have provider client objects:

from openai import OpenAI
from sentinos import LLMGuard, OpenAIChatCompletionsAdapter, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
openai_client = OpenAI()  # existing OpenAI SDK client
adapter = OpenAIChatCompletionsAdapter.from_client(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Additional extra (beyond those listed under Install):

pip install "sentinos[bedrock]"     # boto3 only

Native Kernel gRPC example (non-Go interoperability):

export SENTINOS_GRPC_TARGET="localhost:9091"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export SENTINOS_ORG_ID="<org-id>"
python examples/protocols/grpc_execute_smoke.py
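Under the hood, a non-Go client typically opens a grpcio channel to SENTINOS_GRPC_TARGET and attaches the token and org id as call metadata. A minimal sketch of that wiring — the metadata key names here are assumptions, and the real service stubs come from the published protobufs:

```python
import os

def build_auth_metadata(env):
    """Map the documented env vars onto gRPC call-metadata pairs.

    The key names ("authorization", "x-sentinos-org-id") are assumptions;
    the actual keys are defined by the Sentinos service contract.
    """
    return (
        ("authorization", f"Bearer {env['SENTINOS_ACCESS_TOKEN']}"),
        ("x-sentinos-org-id", env["SENTINOS_ORG_ID"]),
    )

if __name__ == "__main__":
    env = {
        "SENTINOS_GRPC_TARGET": os.getenv("SENTINOS_GRPC_TARGET", "localhost:9091"),
        "SENTINOS_ACCESS_TOKEN": os.getenv("SENTINOS_ACCESS_TOKEN", "<jwt>"),
        "SENTINOS_ORG_ID": os.getenv("SENTINOS_ORG_ID", "<org-id>"),
    }
    # With grpcio installed (sentinos[grpc]) and generated stubs available,
    # the call would look roughly like (stub name hypothetical):
    #   with grpc.insecure_channel(env["SENTINOS_GRPC_TARGET"]) as channel:
    #       stub.Execute(request, metadata=build_auth_metadata(env))
    print(build_auth_metadata(env))
```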

Factory helpers for low-friction org onboarding:

from openai import OpenAI
from sentinos import LLMGuard, SentinosClient, create_openai_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
openai_client = OpenAI()  # existing OpenAI SDK client
adapter = create_openai_chat_adapter(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Helper for OpenAI-style chat.completions.create signatures:

from sentinos import LLMGuard, SentinosClient, guard_openai_chat_completion

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

def fake_create(*, model, messages, temperature=0.2):
    return {"id": "chat-1", "model": model, "messages": messages}

result = guard_openai_chat_completion(
    guard=guard,
    create=fake_create,
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(result.response["id"])

OpenRouter factories (provider identity: openrouter):

from sentinos import LLMGuard, SentinosClient, create_openrouter_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openrouter-1")
adapter = create_openrouter_chat_adapter(
    guard=guard,
    api_key="<OPENROUTER_API_KEY>",
    http_referer="https://console.example.com",
    x_title="Sentinos Console",
)
result = adapter.create(model="openai/gpt-4.1-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Amazon Bedrock native Converse adapters (provider identity: bedrock):

from sentinos import LLMGuard, SentinosClient, create_bedrock_converse_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-bedrock-1")
adapter = create_bedrock_converse_adapter(guard=guard, region_name="us-east-1")
result = adapter.converse(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "summarize incidents"}]}],
)
print(result.trace.trace_id, result.trace.decision)

Reference

Trace Forensics and Retention Example

from sentinos import SentinosClient

client = SentinosClient.from_env(org_id="acme")
trace_id = "11111111-1111-1111-1111-111111111111"

ledger = client.traces.ledger_verify(trace_id)
replay = client.traces.replay_trace(trace_id, request={"include_explain": True})
retention = client.traces.get_retention_policy()
dry_run = client.traces.enforce_retention(request={"dry_run": True})
distributed = client.traces.distributed_trace_summaries(limit=25)

print(ledger.verified, replay.drift_detected, retention.trace_days, dry_run.traces_affected, len(distributed))
