Sentinos Python SDK (control-plane client + runtime governance adapters)

Sentinos is the control plane for AI agents: runtime governance, deterministic policy outcomes, and trace-backed forensics.

This package is the ergonomic Python wrapper over sentinos-sdk-core, exposing operator-first clients for:

  • Kernel (execution boundary, autonomy sessions, escalations, traces)
  • Arbiter (policy lifecycle + deterministic outcomes)
  • Chronos (context snapshots and provenance)
  • Alerts, incidents, marketplace, and supporting workflows

Install

pip install sentinos

Optional extras:

pip install "sentinos[providers]"  # openai + anthropic + boto3 (bedrock)
pip install "sentinos[otel]"       # OpenTelemetry helpers
pip install "sentinos[langchain]"  # LangChain integration helpers
pip install "sentinos[grpc]"       # grpcio + protobuf (native gRPC protocol smoke/integration)

Configure

The SDK supports a “single URL” setup by default:

export SENTINOS_BASE_URL="https://api.sentinos.ai"
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"

Notes:

  • SENTINOS_ORG_ID is preferred; SENTINOS_TENANT_ID remains supported as an alias.
  • If you run services on separate hosts, set SENTINOS_KERNEL_URL, SENTINOS_ARBITER_URL, SENTINOS_CHRONOS_URL.
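To illustrate the override behavior described above: each per-service variable takes precedence, with `SENTINOS_BASE_URL` as the fallback. This is a sketch of the documented behavior, not the SDK's actual resolution code (`resolve_service_url` is a hypothetical helper):

```python
import os

def resolve_service_url(service: str) -> str:
    # The dedicated variable (e.g. SENTINOS_KERNEL_URL) wins; otherwise
    # fall back to the single SENTINOS_BASE_URL.
    dedicated = os.environ.get(f"SENTINOS_{service.upper()}_URL")
    return dedicated or os.environ["SENTINOS_BASE_URL"]

os.environ["SENTINOS_BASE_URL"] = "https://api.sentinos.ai"
os.environ["SENTINOS_KERNEL_URL"] = "https://kernel.internal:8443"

print(resolve_service_url("kernel"))   # https://kernel.internal:8443
print(resolve_service_url("arbiter"))  # https://api.sentinos.ai
```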

Quickstart

Environment-driven:

from sentinos import SentinosClient

client = SentinosClient.from_env()
print(client.kernel.get_runtime_metrics())
print(client.arbiter.governance_dashboard())

Explicit constructor (use org_id; tenant_id is an alias):

from sentinos import SentinosClient
from sentinos.auth.jwt import JWTAuth

client = SentinosClient.simple(
    base_url="https://api.sentinos.ai",
    org_id="acme",
    auth=JWTAuth(lambda: "<access-token>"),
)
print(client.kernel.get_runtime_metrics())

Workforce Auth (Enterprise)

Enterprise deployments can authenticate workforce users through IdP token exchange:

from sentinos import SentinosClient, WorkforceAssertion, WorkforceTokenProvider
from sentinos.auth.jwt import JWTAuth

workforce_provider = WorkforceTokenProvider.from_env(
    assertion_provider=lambda: WorkforceAssertion(
        external_subject="employee-123",
        email="employee@enterprise.example",
        groups=["AI_USERS"],
    ),
    idp_issuer="https://login.microsoftonline.com/tenant/v2.0",
)

client = SentinosClient(
    org_id="enterprise-org",
    base_url="https://api.sentinos.ai",
    auth=JWTAuth(workforce_provider),
)

Workforce token CLI bootstrap (helpful for enterprise workstation rollout and diagnostics):

sentinos-workforce-auth exchange \
  --controlplane-url "https://app.sentinoshq.com" \
  --org-id "<org-id>" \
  --idp-issuer "https://login.microsoftonline.com/<tenant>/v2.0" \
  --external-subject "<employee-sub>" \
  --assertion-token "<signed-idp-jwt>" \
  --audience "sentinos-workforce"

LLM / Agent Runtime Integration

Use LLMGuard when your application executes provider calls directly (OpenAI, Anthropic, LangChain tools, custom APIs) and you still want Sentinos policy decisions + decision traces for every interaction.

from sentinos import LLMGuard, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

result = guard.run(
    provider="openai",
    operation="chat.completions",
    model="gpt-4o-mini",
    request={"messages": [{"role": "user", "content": "Summarize this incident"}]},
    invoke=lambda: {"id": "resp-1", "model": "gpt-4o-mini"},
)
print(result.trace.trace_id, result.trace.decision)
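Conceptually, `guard.run(...)` wraps the provider call in a policy decision and attaches a decision trace to the result. The following is a toy, self-contained sketch of that shape only, not the Sentinos implementation; all names here (`ToyTrace`, `toy_guarded_run`, etc.) are illustrative:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ToyTrace:
    trace_id: str
    decision: str  # "allow" or "deny"

@dataclass
class ToyResult:
    trace: ToyTrace
    response: Any

def toy_guarded_run(
    policy: Callable[[dict], str],
    invoke: Callable[[], Any],
    request: dict,
) -> ToyResult:
    # 1. Evaluate policy before touching the provider.
    decision = policy(request)
    trace = ToyTrace(trace_id="trace-1", decision=decision)
    # 2. Invoke the provider only on an allow decision.
    response = invoke() if decision == "allow" else None
    return ToyResult(trace=trace, response=response)

result = toy_guarded_run(
    policy=lambda req: "allow",
    invoke=lambda: {"id": "resp-1"},
    request={"messages": []},
)
print(result.trace.decision, result.response["id"])  # allow resp-1
```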

For Open Responses-compatible providers/endpoints:

from sentinos import LLMGuard, SentinosClient, create_openresponses_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openresponses-1")
# `openai_client` is your existing provider client instance (e.g. openai.OpenAI()).
adapter = create_openresponses_adapter(guard=guard, client=openai_client)
result = adapter.create(
    model="gpt-4.1-mini",
    input=[{"type": "message", "role": "user", "content": "summarize recent incidents"}],
)
print(result.trace.trace_id, result.trace.decision, result.response.status)
print(result.trace.trace_id, result.trace.decision, result.response.status)

Drop-in adapter classes are also available when you already have provider client objects:

from sentinos import LLMGuard, OpenAIChatCompletionsAdapter, SentinosClient

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
# `openai_client` is your existing OpenAI SDK client instance (e.g. openai.OpenAI()).
adapter = OpenAIChatCompletionsAdapter.from_client(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Additional extra beyond those listed under Install:

pip install "sentinos[bedrock]"     # boto3 only

Native Kernel gRPC example (interoperability from non-Go clients):

export SENTINOS_GRPC_TARGET="<kernel-grpc-endpoint>"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export SENTINOS_ORG_ID="<org-id>"
python examples/protocols/grpc_execute_smoke.py

Factory helpers for low-friction org onboarding:

from sentinos import LLMGuard, SentinosClient, create_openai_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
# `openai_client` is your existing OpenAI SDK client instance (e.g. openai.OpenAI()).
adapter = create_openai_chat_adapter(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Helper for OpenAI-style chat.completions.create signatures:

from sentinos import LLMGuard, SentinosClient, guard_openai_chat_completion

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")

def fake_create(*, model, messages, temperature=0.2):
    return {"id": "chat-1", "model": model, "messages": messages}

result = guard_openai_chat_completion(
    guard=guard,
    create=fake_create,
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(result.response["id"])

OpenRouter factories (provider identity: openrouter):

from sentinos import LLMGuard, SentinosClient, create_openrouter_chat_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openrouter-1")
adapter = create_openrouter_chat_adapter(
    guard=guard,
    api_key="<OPENROUTER_API_KEY>",
    http_referer="https://console.example.com",
    x_title="Sentinos Console",
)
result = adapter.create(model="openai/gpt-4.1-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)

Amazon Bedrock native Converse adapters (provider identity: bedrock):

from sentinos import LLMGuard, SentinosClient, create_bedrock_converse_adapter

client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-bedrock-1")
adapter = create_bedrock_converse_adapter(guard=guard, region_name="us-east-1")
result = adapter.converse(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "summarize incidents"}]}],
)
print(result.trace.trace_id, result.trace.decision)

Reference

Trace Forensics and Retention Example

from sentinos import SentinosClient

client = SentinosClient.from_env(org_id="acme")
trace_id = "11111111-1111-1111-1111-111111111111"

ledger = client.traces.ledger_verify(trace_id)
replay = client.traces.replay_trace(trace_id, request={"include_explain": True})
retention = client.traces.get_retention_policy()
dry_run = client.traces.enforce_retention(request={"dry_run": True})
distributed = client.traces.distributed_trace_summaries(limit=25)

print(ledger.verified, replay.drift_detected, retention.trace_days, dry_run.traces_affected, len(distributed))

