# Safentic SDK
Safentic is a runtime guardrail SDK for agentic AI systems.
It intercepts and evaluates tool calls between agent intent and execution, enforcing custom safety policies and generating structured audit logs for compliance.
## Key Features
- **Runtime Protection**: Intercepts tool calls at the action boundary
- **Policy-Driven**: Define safety rules in simple YAML configuration
- **Audit Logging**: Structured JSON logs for compliance and debugging
- **Framework Agnostic**: Works with LangChain, AutoGen, MCP, and custom agents
- **Easy Integration**: Minimal code changes to existing agents
## Installation

```bash
pip install safentic
```
## Quick Start (5 minutes)
### 1. Set Up Your Environment

Before using Safentic, configure the required API key:

```bash
export OPENAI_API_KEY="your-openai-api-key"
```
### 2. Create a Policy File

Create `config/policy.yaml` to define your safety rules:
```yaml
tools:
  sample_tool:
    rules:
      - type: llm_verifier
        description: "Block outputs that contain disallowed terms"
        instruction: "Does this text contain disallowed terms or references?"
        model: gpt-4
        fields: [body]
        response_format: boolean
        response_trigger: yes
        match_mode: exact
        level: block
        severity: high
        tags: [denylist]
  another_tool:
    rules: []

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"
```
### 3. Wrap Your Agent with SafetyLayer

Import and initialize Safentic in your application:
```python
from safentic.layer import SafetyLayer
from your_agent_module import YourAgentClass

# Initialize your existing agent
agent = YourAgentClass()

# Wrap it with Safentic
safety_layer = SafetyLayer(
    agent=agent,
    api_key="your-api-key",
    agent_id="demo-agent"
)
```
### 4. Call Tools Through the Safety Layer

Use the wrapped agent to execute tool calls safely:
```python
try:
    result = safety_layer.call_tool("some_tool", {"body": "example input"})
    print("Allowed:", result)
except Exception as e:
    print("Blocked:", str(e))
```
Example output:

```
Blocked: Blocked by policy
```
## Complete Example

Here's a complete integration example:
```python
from safentic.layer import SafetyLayer

# Step 1: Create or import your agent
class MyAgent:
    def execute_tool(self, tool_name, params):
        # Your tool logic here
        return f"Executed {tool_name}"

agent = MyAgent()

# Step 2: Initialize Safentic
safety_layer = SafetyLayer(
    agent=agent,
    api_key="your-api-key",
    agent_id="my-agent"
)

# Step 3: Execute tools through Safentic
try:
    result = safety_layer.call_tool("delete_file", {"path": "/sensitive/data"})
    print(f"Success: {result}")
except Exception as e:
    print(f"Action blocked: {e}")
    # Log to your monitoring system
```
Configuring Your Policy File
- Safentic enforces rules defined in a YAML configuration file (e.g. policy.yaml).
- By default, it looks for
config/policy.yaml, or you can set the path with:
export SAFENTIC_POLICY_PATH=/path/to/policy.yaml
### Policy Schema

Safentic supports the `llm_verifier` rule type and also allows deterministic filtering via `prechecks`.
```yaml
tools:
  <tool_name>:
    rules:
      - type: llm_verifier
        description: "<short description of what this rule enforces>"
        instruction: "<prompt instruction given to the verifier LLM>"
        model: "<llm model name, e.g. gpt-4>"
        fields: [<list of input fields to check>]
        reference_file: "<path to reference text file, optional>"
        response_format: boolean
        response_trigger: yes
        match_mode: exact
        level: block        # enforcement level: block | warn
        severity: high      # severity: low | medium | high
        tags: [<labels for filtering/searching logs>]
        prechecks:          # optional deterministic filters run before the LLM
          - check: contains # or 'exact', 'regex'
            values: ["refund", "apology"]
            action: block   # block or warn

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"
```
### Example Policy
```yaml
tools:
  sample_tool:
    rules:
      - type: llm_verifier
        description: "Block outputs that contain disallowed terms"
        instruction: "Does this text contain disallowed terms or references?"
        model: gpt-4
        fields: [body]
        reference_file: sample_guidelines.txt
        response_format: boolean
        response_trigger: yes
        match_mode: exact
        level: block
        severity: high
        tags: [sample, denylist]
        prechecks:
          - check: contains
            values: ["refund", "apology", "guarantee", "sorry"]
            action: block
          - check: regex
            values: ["refund policy", "customer (apology|complaint)"]
            action: block
  another_tool:
    rules: []  # Explicitly allow all actions for this tool

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"
```
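Once a policy file like the one above has been parsed into a dict (e.g. with PyYAML), a quick structural sanity check can catch missing keys before deployment. This is an illustrative sketch only: the required-key set is an assumption for the example, not Safentic's official schema, and the built-in `validate-policy` command is the authoritative check.

```python
# Illustrative sanity check over an already-parsed policy dict.
# REQUIRED_RULE_KEYS is an assumption for this sketch, not Safentic's schema.
REQUIRED_RULE_KEYS = {"type", "instruction", "fields"}

def check_policy(policy: dict) -> list:
    """Return a list of human-readable problems found in a policy dict."""
    problems = []
    for name, spec in policy.get("tools", {}).items():
        for i, rule in enumerate(spec.get("rules") or []):
            missing = REQUIRED_RULE_KEYS - set(rule)
            if missing:
                problems.append("%s rule %d: missing %s" % (name, i, sorted(missing)))
    return problems

# Example: one rule is missing its 'fields' entry
policy = {
    "tools": {
        "sample_tool": {"rules": [{"type": "llm_verifier",
                                   "instruction": "Check terms"}]},
        "another_tool": {"rules": []},  # explicitly allow everything
    }
}
print(check_policy(policy))  # ["sample_tool rule 0: missing ['fields']"]
```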
## Audit Logs

Every decision is logged with context for compliance and debugging:
```json
{
  "timestamp": "2025-09-09T14:25:11Z",
  "agent_id": "demo-agent",
  "tool": "sample_tool",
  "allowed": false,
  "reason": "Blocked by policy",
  "rule": "sample_tool:denylist_check",
  "severity": "high",
  "level": "block",
  "tags": ["sample", "denylist"]
}
```
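Because the JSONL log stores one JSON object per line, it is easy to post-process with the standard library alone. The snippet below filters for blocked actions; the field names come from the record above, and the inline sample line stands in for a real log file:

```python
import json

def blocked_events(lines):
    """Yield parsed audit records for actions that were not allowed."""
    for line in lines:
        record = json.loads(line)
        if not record["allowed"]:
            yield record

# Sample log content; in practice, iterate over an open file handle instead.
log = ['{"timestamp": "2025-09-09T14:25:11Z", "agent_id": "demo-agent", '
       '"tool": "sample_tool", "allowed": false, "reason": "Blocked by policy", '
       '"severity": "high"}']

for event in blocked_events(log):
    print(event["tool"], event["reason"])  # sample_tool Blocked by policy
```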
### Log Fields

| Field | Description |
|---|---|
| `timestamp` | When the action was evaluated |
| `agent_id` | The agent issuing the action |
| `tool` | Tool name |
| `allowed` | Whether the action was permitted (`true`/`false`) |
| `reason` | Why it was allowed or blocked |
| `rule` | The rule that applied (if any) |
| `severity` | Severity of the violation (`low`, `medium`, `high`) |
| `level` | Enforcement level (`block`, `warn`) |
| `tags` | Categories attached to the rule |
| `extra` | Additional metadata (e.g., missing fields, matched text) |
## CLI Commands

Safentic ships with a CLI for validating policies, running one-off checks, and inspecting logs.

Validate a policy file:

```bash
safentic validate-policy --policy config/policy.yaml --strict
```

Run a one-off tool check:

```bash
safentic check-tool --tool sample_tool \
  --input-json '{"body": "some text"}' \
  --policy config/policy.yaml
```

Tail the audit log (JSONL by default):

```bash
safentic logs tail --path safentic/logs/safentic_audit.jsonl -f
```
## Environment Variables

Set these before running Safentic:

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | API key for OpenAI models used in `llm_verifier` rules |
| `SAFENTIC_POLICY_PATH` | No | Path to your `policy.yaml` (default: `config/policy.yaml`) |
| `SAFENTIC_LOG_PATH` | No | Override the default text audit log path |
| `SAFENTIC_JSON_LOG_PATH` | No | Override the default JSONL audit log path |
| `LOG_LEVEL` | No | Sets logging verbosity (`DEBUG`, `INFO`, `WARNING`, `ERROR`) |
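Your own tooling can resolve the same defaults the table describes. The helper below is a sketch, not part of the Safentic API; the default log paths mirror those shown in the `logging` section of the example policy:

```python
import os

def resolve_settings(env: dict) -> dict:
    """Resolve Safentic-related settings from an environment mapping,
    applying the documented defaults for the optional variables."""
    if "OPENAI_API_KEY" not in env:
        raise RuntimeError("OPENAI_API_KEY is required for llm_verifier rules")
    return {
        "policy_path": env.get("SAFENTIC_POLICY_PATH", "config/policy.yaml"),
        "log_path": env.get("SAFENTIC_LOG_PATH",
                            "safentic/logs/safentic_audit.log"),
        "jsonl_path": env.get("SAFENTIC_JSON_LOG_PATH",
                              "safentic/logs/safentic_audit.jsonl"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# Pass os.environ in real code; a plain dict works for demonstration.
settings = resolve_settings({"OPENAI_API_KEY": "sk-..."})
print(settings["policy_path"])  # config/policy.yaml
```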
## Supported Frameworks

Safentic is agent-framework agnostic: it can wrap any agent (LangChain, AutoGen, MCP, custom, etc.) as long as the agent exposes a `call_tool` method. This means you can enforce policies on tool calls regardless of the agent framework.

Current LLM support:

- The policy engine's semantic checks (`LLMVerifier`) currently support only OpenAI models (via `OPENAI_API_KEY`); the enforcement layer itself remains agent-agnostic as long as your agent exposes a `call_tool` method.
Deterministic filtering:

- You can use `prechecks` for fast, predictable filtering (`contains`, `exact`, `regex`) before any LLM call. This is useful for blocking or warning on specific phrases or patterns.
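The three precheck modes can be reproduced with plain string and regex operations. The function below is an approximation of the matching semantics for illustration, not Safentic's actual implementation:

```python
import re

def run_precheck(check: str, values: list, text: str) -> bool:
    """Return True if the precheck matches (i.e. the rule would fire)."""
    if check == "contains":
        return any(v in text for v in values)      # substring match
    if check == "exact":
        return any(v == text for v in values)      # whole-string match
    if check == "regex":
        return any(bool(re.search(p, text)) for p in values)
    raise ValueError(f"unknown check type: {check}")

print(run_precheck("contains", ["refund", "apology"], "we offer a refund"))  # True
print(run_precheck("regex", ["customer (apology|complaint)"],
                   "a customer complaint"))                                   # True
print(run_precheck("exact", ["refund"], "we offer a refund"))                 # False
```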
Guidelines and rule logic:

- The guidelines file is plain text (e.g., `sample_guidelines.txt`). The `LLMVerifier` uses your rule's instruction, the agent output, and the company policy document to check for compliance. For sentiment or tone checks, you can use instructions like:
  - "Does the agent's response express negative sentiment?"
  - "Is the tone of this response aggressive or inappropriate?"
  - "Does this output violate our company's respectful communication policy?"
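Conceptually, the verifier combines the rule's instruction, the checked input fields, and the guidelines text into a single yes/no prompt. The exact prompt Safentic sends is internal, so the layout below is purely illustrative:

```python
def build_verifier_prompt(instruction: str, fields: dict, guidelines: str) -> str:
    """Assemble an illustrative yes/no verification prompt (not Safentic's
    internal prompt format)."""
    field_text = "\n".join(f"{k}: {v}" for k, v in fields.items())
    return (
        f"Guidelines:\n{guidelines}\n\n"
        f"Agent output:\n{field_text}\n\n"
        f"{instruction} Answer yes or no."
    )

prompt = build_verifier_prompt(
    "Does this text contain disallowed terms or references?",
    {"body": "We guarantee a full refund."},
    "Never promise refunds or guarantees.",
)
print(prompt.endswith("Answer yes or no."))  # True
```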