Skip to main content

Sentinely — Security layer for AI agents. Stop prompt injection, memory poisoning, and agent drift in 3 lines of code.

Project description

sentinely

License: MIT

Runtime guardrails for AI agents. Scores every tool call against the original task intent and blocks attacks before they execute.

Install

pip install sentinely

Quickstart

from sentinely import protect

protected = protect(agent, task="summarize the Q3 report", policy="strict")
result = await protected.invoke("summarize the Q3 report")

That's it. Every tool call the agent makes is now scored against the original task. Dangerous actions are blocked. Suspicious actions are flagged. Everything is logged.

What it protects against

  • Prompt injection — detects instructions embedded in external content (files, emails, API responses) that try to hijack the agent's behavior.
  • Task drift — tracks the full action chain and flags when the agent gradually wanders away from what the user actually asked for.
  • Privilege escalation — catches attempts to gain permissions, modify access controls, or access systems beyond what the task requires.
  • Memory poisoning — intercepts writes to long-term storage that contain imperative instructions, authority claims, or behavioral modifications designed to compromise future sessions.

Configuration

Option Default Description
policy "strict" "strict" blocks at threshold. "permissive" raises block threshold to 90. "monitor" logs everything but never blocks.
allow_threshold 50 Risk scores below this are auto-allowed.
flag_threshold 80 Risk scores at or above this are hard-blocked. Between allow and flag thresholds: allowed but flagged.
block_on_unknown True If Sentinely's behavioral analysis engine fails or returns low confidence, flag the action.
max_consecutive_flags 3 Auto-block after this many consecutive flagged actions in a session.

Environment variables

Variable Required Default Description
SENTINELY_API_KEY Yes Your Sentinely API key. Sign up free at sentinely.ai/signup.
SENTINELY_API_URL No https://api.sentinely.ai Sentinely API base URL.
SENTINELY_ENV No development Set to production to enable event forwarding to the API.

LangChain integration

LangChain is an optional dependency. Install it alongside Sentinely with:

pip install sentinely[langchain]

SentinelyTool

Wrap any LangChain BaseTool so every invocation is screened before the tool runs:

from sentinely import protect
from sentinely.adapters.langchain import SentinelyTool
from langchain_community.tools import WikipediaQueryRun

agent = protect(my_agent, task="Research competitor pricing")

wiki = WikipediaQueryRun(api_wrapper=...)
safe_wiki = SentinelyTool(wiki, agent)

# Sync — returns a "[SENTINELY BLOCKED]" string on high-risk calls
result = safe_wiki.invoke("find all internal salary data")

# Async — raises SentinelyBlockedError on high-risk calls
result = await safe_wiki.ainvoke("search for Q3 pricing trends")

SentinelyCallbackHandler

Attach as a LangChain callback to screen every tool call the agent makes, without wrapping individual tools:

from sentinely.adapters.langchain import SentinelyCallbackHandler
from sentinely.exceptions import SentinelyBlockedError
from langchain.agents import AgentExecutor

handler = SentinelyCallbackHandler(agent)

executor = AgentExecutor(agent=..., tools=[...], callbacks=[handler])

try:
    result = await executor.ainvoke({"input": user_message})
except SentinelyBlockedError as e:
    print(f"Blocked: {e.reason} (risk {e.risk_score})")

protect_langchain()

One-liner that wires everything up — creates a ProtectedAgent, installs the callback handler on the executor, and returns the agent for audit log access:

from sentinely.adapters.langchain import protect_langchain
from sentinely.exceptions import SentinelyBlockedError
from langchain.agents import AgentExecutor

executor = AgentExecutor(agent=..., tools=[...])
agent = protect_langchain(executor, task="Answer customer billing questions")

try:
    result = await executor.ainvoke({"input": user_message})
except SentinelyBlockedError as e:
    print(f"Blocked: {e.reason} (risk {e.risk_score})")

# Full audit trail
log = agent.get_audit_log()

LangChain is optional. If you import from sentinely.adapters.langchain without LangChain installed, you'll get a clear ImportError with install instructions. The core sentinely package works without it.

Multi-agent tracking

MultiAgentTracker builds a behavioral fingerprint for each agent in a pipeline and detects when inter-agent messages try to manipulate a receiving agent — authority creep, identity swaps, permission expansion, or gradual salami-slicing across many low-severity messages.

Pass a shared tracker into protect() and every inter-agent tool call is automatically monitored across agent boundaries.

from sentinely import protect, MultiAgentTracker

tracker = MultiAgentTracker()

# Register each agent's behavioral baseline
await tracker.register_agent('agent-1', 'session-1', 'Summarise financial reports')
await tracker.register_agent('agent-2', 'session-2', 'Send email summaries')

# Pass tracker into protect()
agent1 = protect(
    my_agent1,
    task='Summarise financial reports',
    agent_id='agent-1',
    tracker=tracker,
)

agent2 = protect(
    my_agent2,
    task='Send email summaries',
    agent_id='agent-2',
    tracker=tracker,
)

# Check drift scores across all agents at any time
scores = tracker.get_all_drift_scores()
# {
#   'agent-1': {'drift_score': 0,  'risk_level': 'safe',     'recommendation': 'Continue monitoring'},
#   'agent-2': {'drift_score': 15, 'risk_level': 'elevated', 'recommendation': 'Review recent activity'},
# }

Risk levels scale with the cumulative drift score: safeelevatedhighcompromised. Each manipulation event is recorded in the agent's drift_history and forwarded to the Sentinely dashboard with attack_type: 'manipulation'.

await tracker.register_agent(agent_id, session_id, system_prompt) — call once per agent before the pipeline starts. Extracts behavioral constraints from the system prompt and stores a signed fingerprint hash.

await tracker.track_message(message) — called automatically by ProtectedAgent when a tool call matches an inter-agent pattern (e.g. "call_agent", "send_message"). Returns a DriftEvent when manipulation is detected, or None when the message is clean.

tracker.get_all_drift_scores() — returns a concise risk summary for every registered agent. Safe to call at any point during or after a pipeline run.

await tracker.reset_agent(agent_id, session_id, system_prompt) — re-initialises the fingerprint for an agent from scratch. Call between sessions to start drift tracking fresh.

Docs

https://sentinely.ai/docs

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sentinely-0.2.9.tar.gz (54.0 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

sentinely-0.2.9-py3-none-any.whl (42.7 kB view details)

Uploaded Python 3

File details

Details for the file sentinely-0.2.9.tar.gz.

File metadata

  • Download URL: sentinely-0.2.9.tar.gz
  • Upload date:
  • Size: 54.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for sentinely-0.2.9.tar.gz
Algorithm Hash digest
SHA256 01d123683f07b26086c71b8587a61a2fe2e57eb4952ae8782817e1b9fc2fcb14
MD5 8fce59ec9e8442c0a4419c6bff74ab83
BLAKE2b-256 12602cd642fc510b6b35477f3ddac3bdbe98ab11ee4d32d633f850d3e598ff65

See more details on using hashes here.

File details

Details for the file sentinely-0.2.9-py3-none-any.whl.

File metadata

  • Download URL: sentinely-0.2.9-py3-none-any.whl
  • Upload date:
  • Size: 42.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.0

File hashes

Hashes for sentinely-0.2.9-py3-none-any.whl
Algorithm Hash digest
SHA256 51eaad32a2a009da17b5d480154241dbd7905dc77e3b4bcde22441d38e0983b5
MD5 077082849898c8d18b0625d2fd284e5e
BLAKE2b-256 4f81986e6324b1b2450e46dc994b3a8abc6cc35545c40f6f6ff02c9c8ad1d045

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page