sentinos (Python SDK)
Sentinos is the control plane for AI agents: runtime governance, deterministic policy outcomes, and trace-backed forensics.
This package is the ergonomic Python wrapper over sentinos-sdk-core, exposing operator-first clients for:
- Kernel (execution boundary, autonomy sessions, escalations, traces)
- Arbiter (policy lifecycle + deterministic outcomes)
- Chronos (context snapshots and provenance)
- Alerts, incidents, marketplace, and supporting workflows
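All of these hang off one client object as namespaced sub-clients. A one-screen sketch of that shape, using only calls that appear later in this README:
from sentinos import SentinosClient

client = SentinosClient.from_env()

metrics = client.kernel.get_runtime_metrics()      # Kernel runtime surface
dashboard = client.arbiter.governance_dashboard()  # Arbiter policy surface
retention = client.traces.get_retention_policy()   # trace forensics surface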
Install
pip install sentinos
Optional extras:
pip install "sentinos[providers]" # openai + anthropic + boto3 (bedrock)
pip install "sentinos[otel]" # OpenTelemetry helpers
pip install "sentinos[langchain]" # LangChain integration helpers
pip install "sentinos[grpc]" # grpcio + protobuf (native gRPC protocol smoke/integration)
Configure
The SDK supports a “single URL” setup by default:
export SENTINOS_BASE_URL="https://api.sentinos.ai"
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"
Notes:
- SENTINOS_ORG_ID is preferred; SENTINOS_TENANT_ID remains supported as an alias.
- If you run services on separate hosts, set SENTINOS_KERNEL_URL, SENTINOS_ARBITER_URL, and SENTINOS_CHRONOS_URL.
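For example, a split-host deployment points each service client at its own endpoint (the hostnames below are placeholders):
export SENTINOS_KERNEL_URL="https://kernel.internal.example"
export SENTINOS_ARBITER_URL="https://arbiter.internal.example"
export SENTINOS_CHRONOS_URL="https://chronos.internal.example"
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<access-token>"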
Quickstart
Environment-driven:
from sentinos import SentinosClient
client = SentinosClient.from_env()
print(client.kernel.get_runtime_metrics())
print(client.arbiter.governance_dashboard())
Explicit constructor (use org_id; tenant_id is an alias):
from sentinos import SentinosClient
from sentinos.auth.jwt import JWTAuth
client = SentinosClient.simple(
    base_url="https://api.sentinos.ai",
    org_id="acme",
    auth=JWTAuth(lambda: "<access-token>"),
)
print(client.kernel.get_runtime_metrics())
Local Development
Standalone SDK repo:
python3 -m venv .venv && source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e .[dev]
tox -q
Monorepo development (when ../sdk-core/python exists):
python3 -m venv .venv && source .venv/bin/activate
python -m pip install --upgrade pip
pip install -e ../sdk-core/python
pip install -e .[dev]
tox -q
Workforce Auth (Enterprise)
Enterprise workforce token exchange auth:
from sentinos import SentinosClient, WorkforceAssertion, WorkforceTokenProvider
from sentinos.auth.jwt import JWTAuth
workforce_provider = WorkforceTokenProvider.from_env(
    assertion_provider=lambda: WorkforceAssertion(
        external_subject="employee-123",
        email="employee@enterprise.example",
        groups=["AI_USERS"],
    ),
    idp_issuer="https://login.microsoftonline.com/tenant/v2.0",
)
client = SentinosClient(
    org_id="enterprise-org",
    base_url="https://api.sentinos.ai",
    auth=JWTAuth(workforce_provider),
)
Workforce token CLI bootstrap (helpful for enterprise workstation rollout and diagnostics):
sentinos-workforce-auth exchange \
--controlplane-url "https://app.sentinoshq.com" \
--org-id "<org-id>" \
--idp-issuer "https://login.microsoftonline.com/<tenant>/v2.0" \
--external-subject "<employee-sub>" \
--assertion-token "<signed-idp-jwt>" \
--audience "sentinos-workforce"
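Assuming the exchange command prints the exchanged access token on stdout (an assumption; check the CLI output on your build), you can wire it straight into JWTAuth. A sketch:
import subprocess

from sentinos import SentinosClient
from sentinos.auth.jwt import JWTAuth

def workforce_token() -> str:
    # Assumption: the CLI emits the token on stdout; adjust parsing if it wraps
    # the token in JSON or other structured output.
    out = subprocess.run(
        ["sentinos-workforce-auth", "exchange",
         "--controlplane-url", "https://app.sentinoshq.com",
         "--org-id", "<org-id>",
         "--idp-issuer", "https://login.microsoftonline.com/<tenant>/v2.0",
         "--external-subject", "<employee-sub>",
         "--assertion-token", "<signed-idp-jwt>",
         "--audience", "sentinos-workforce"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

client = SentinosClient(
    org_id="<org-id>",
    base_url="https://api.sentinos.ai",
    auth=JWTAuth(workforce_token),
)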
LLM / Agent Runtime Integration
Use LLMGuard when your application executes provider calls directly (OpenAI, Anthropic, LangChain tools, custom APIs)
and you still want Sentinos policy decisions + decision traces for every interaction.
from sentinos import LLMGuard, SentinosClient
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
result = guard.run(
    provider="openai",
    operation="chat.completions",
    model="gpt-4o-mini",
    request={"messages": [{"role": "user", "content": "Summarize this incident"}]},
    invoke=lambda: {"id": "resp-1", "model": "gpt-4o-mini"},
)
print(result.trace.trace_id, result.trace.decision)
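The example above only prints the decision; if your application needs to branch on it, a minimal sketch (the "allow" value and the non-raising behavior of guard.run are assumptions):
if result.trace.decision != "allow":  # assumed decision value
    # Refuse to hand the model output back when policy blocked the call.
    raise RuntimeError(f"Sentinos denied the call (trace {result.trace.trace_id})")
response = result.response  # use only after the guard allowed the interaction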
For Open Responses-compatible providers/endpoints:
from sentinos import LLMGuard, SentinosClient, create_openresponses_adapter
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openresponses-1")
adapter = create_openresponses_adapter(guard=guard, client=openai_client)
result = adapter.create(
    model="gpt-4.1-mini",
    input=[{"type": "message", "role": "user", "content": "summarize recent incidents"}],
)
print(result.trace.trace_id, result.trace.decision, result.response.status)
Drop-in adapter classes are also available when you already have provider client objects:
from sentinos import LLMGuard, OpenAIChatCompletionsAdapter, SentinosClient
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
adapter = OpenAIChatCompletionsAdapter.from_client(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)
Optional extras:
pip install "sentinos[providers]" # openai + anthropic + boto3 (bedrock)
pip install "sentinos[bedrock]" # boto3 only
pip install "sentinos[grpc]" # grpcio + protobuf (native gRPC protocol smoke/integration)
pip install "sentinos[langchain]" # langchain runtime integrations
Native Kernel gRPC protocol smoke example (non-Go interoperability):
export SENTINOS_GRPC_TARGET="localhost:9091"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export SENTINOS_ORG_ID="<org-id>"
python examples/protocols/grpc_execute_smoke.py
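Under the hood, the smoke script has to present the same credentials over gRPC. A minimal sketch of building an authenticated channel with grpcio (the generated Kernel stub module names are omitted here because they would be assumptions; see the example script for the actual calls):
import os

import grpc

target = os.environ["SENTINOS_GRPC_TARGET"]
token = os.environ["SENTINOS_ACCESS_TOKEN"]

# Compose TLS with per-call bearer-token credentials.
creds = grpc.composite_channel_credentials(
    grpc.ssl_channel_credentials(),
    grpc.access_token_call_credentials(token),
)
channel = grpc.secure_channel(target, creds)
# For a local plaintext endpoint you would use grpc.insecure_channel(target) instead.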
Live end-to-end OpenAI governance suite (real traffic + alerts/incidents/traces/evidence):
export SENTINOS_E2E_AUTH_MODE=token
export SENTINOS_ORG_ID="<org-id>"
export SENTINOS_ACCESS_TOKEN="<jwt-access-token>"
export OPENAI_API_KEY="<openai-key>"
python examples/live_e2e/run_full_live_e2e.py
Suite details:
- examples/live_e2e/README.md
- examples/live_e2e/stage_00_bootstrap_account.py
- examples/live_e2e/stage_01_setup.py
- examples/live_e2e/stage_02_openai_traffic.py
- examples/live_e2e/stage_03_triage.py
- examples/live_e2e/stage_04_verify.py
Factory helpers for low-friction org onboarding:
from sentinos import LLMGuard, SentinosClient, create_openai_chat_adapter
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
adapter = create_openai_chat_adapter(guard=guard, client=openai_client)
result = adapter.create(model="gpt-4o-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)
Helper for OpenAI-style chat.completions.create signatures:
from sentinos import LLMGuard, SentinosClient, guard_openai_chat_completion
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-123")
def fake_create(*, model, messages, temperature=0.2):
    return {"id": "chat-1", "model": model, "messages": messages}
result = guard_openai_chat_completion(
    guard=guard,
    create=fake_create,
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(result.response["id"])
OpenRouter factories (provider identity: openrouter):
from sentinos import LLMGuard, SentinosClient, create_openrouter_chat_adapter
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-openrouter-1")
adapter = create_openrouter_chat_adapter(
    guard=guard,
    api_key="<OPENROUTER_API_KEY>",
    http_referer="https://console.example.com",
    x_title="Sentinos Console",
)
result = adapter.create(model="openai/gpt-4.1-mini", messages=[{"role": "user", "content": "hello"}])
print(result.trace.trace_id, result.trace.decision)
Amazon Bedrock native Converse adapters (provider identity: bedrock):
from sentinos import LLMGuard, SentinosClient, create_bedrock_converse_adapter
client = SentinosClient.from_env(org_id="acme")
guard = LLMGuard(kernel=client.kernel, agent_id="assistant-1", session_id="sess-bedrock-1")
adapter = create_bedrock_converse_adapter(guard=guard, region_name="us-east-1")
result = adapter.converse(
    model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "summarize incidents"}]}],
)
print(result.trace.trace_id, result.trace.decision)
Quality Gates
Local package checks:
tox -q
Monorepo-only checks (if you have the full Sentinos repo):
scripts/quality/check_python_sdk_quality.sh
scripts/quality/check_sdk_docs_examples.sh
scripts/quality/check_sdk_parity_matrix.py
Reference
- Docs: https://docs.sentinoshq.com/sdk/
- Release runbook (monorepo): docs/sentinos-python-sdk-release-runbook.md
- Release guide (package-local): RELEASING.md
Trace Forensics and Retention Example
from sentinos import SentinosClient
client = SentinosClient.from_env(org_id="acme")
trace_id = "11111111-1111-1111-1111-111111111111"
ledger = client.traces.ledger_verify(trace_id)
replay = client.traces.replay_trace(trace_id, request={"include_explain": True})
retention = client.traces.get_retention_policy()
dry_run = client.traces.enforce_retention(request={"dry_run": True})
distributed = client.traces.distributed_trace_summaries(limit=25)
print(ledger.verified, replay.drift_detected, retention.trace_days, dry_run.traces_affected, len(distributed))
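The dry run reports blast radius without deleting anything. A sketch of gating real enforcement on review (the approval hook is hypothetical, and dry_run=False assumes the same request shape performs the deletion):
def operator_has_approved(result) -> bool:
    # Hypothetical approval hook; wire this into your change-management flow.
    return False

if dry_run.traces_affected > 0 and operator_has_approved(dry_run):
    client.traces.enforce_retention(request={"dry_run": False})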