# Safentic SDK
- Safentic is a runtime guardrail SDK for agentic AI systems.
- It intercepts and evaluates tool calls between agent intent and execution, enforcing custom safety policies and generating structured audit logs for compliance.
## Installation

```bash
pip install safentic
```
## Quickstart: Wrap Your Agent

Safentic works at the action boundary, not inside the model itself. You wrap your agent with `SafetyLayer`:
```python
from safentic.layer import SafetyLayer
from agent import AgentClassInstance  # your existing agent

agent = AgentClassInstance()

# Wrap with Safentic
layer = SafetyLayer(agent=agent, api_key="your-api-key", agent_id="demo-agent")

# Example tool call
try:
    result = layer.call_tool("some_tool", {"body": "example input"})
    print(result)
except Exception as e:
    print("Blocked:", e)
```
Output:

```
Blocked: Blocked by policy
```
## Configuring Your Policy File

Safentic enforces rules defined in a YAML configuration file (e.g. `policy.yaml`). By default, it looks for `config/policy.yaml`; to use a different file, set:

```bash
export SAFENTIC_POLICY_PATH=/path/to/policy.yaml
```
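The same override also works from Python. A minimal sketch, assuming Safentic reads `SAFENTIC_POLICY_PATH` when the layer is constructed (the variable is documented under Environment Variables below):

```python
import os

# Must be set before the SafetyLayer is created; assumes the SDK reads
# SAFENTIC_POLICY_PATH at initialization time.
os.environ["SAFENTIC_POLICY_PATH"] = "/path/to/policy.yaml"

from safentic.layer import SafetyLayer

# `agent` as in the quickstart above
layer = SafetyLayer(agent=agent, api_key="your-api-key", agent_id="demo-agent")
```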
### Schema

At the moment, Safentic supports the `llm_verifier` rule type.

```yaml
tools:
  <tool_name>:
    rules:
      - type: llm_verifier
        description: "<short description of what this rule enforces>"
        instruction: "<prompt instruction given to the verifier LLM>"
        model: "<llm model name, e.g. gpt-4>"
        fields: [<list of input fields to check>]
        reference_file: "<path to reference text file, optional>"
        response_format: boolean
        response_trigger: yes
        match_mode: exact
        level: block       # enforcement level: block | warn
        severity: high     # severity: low | medium | high
        tags: [<labels for filtering/searching logs>]

logging:
  level: INFO
  destination: "safentic/logs/txt_logs/safentic_audit.log"
  jsonl: "safentic/logs/json_logs/safentic_audit.jsonl"
```
### Example Policy (obfuscated)

```yaml
tools:
  sample_tool:
    rules:
      - type: llm_verifier
        description: "Block outputs that contain disallowed terms"
        instruction: "Does this text contain disallowed terms or references?"
        model: gpt-4
        fields: [body]
        reference_file: sample_guidelines.txt
        response_format: boolean
        response_trigger: yes
        match_mode: exact
        level: block
        severity: high
        tags: [sample, denylist]

  another_tool:
    rules: []  # Explicitly allow all actions for this tool

logging:
  level: INFO
  destination: "safentic/logs/txt_logs/safentic_audit.log"
  jsonl: "safentic/logs/json_logs/safentic_audit.jsonl"
```
## Audit Logs

Every decision is logged with context for compliance and debugging:

```json
{
  "timestamp": "2025-09-09T14:25:11Z",
  "agent_id": "demo-agent",
  "tool": "sample_tool",
  "allowed": false,
  "reason": "Blocked by policy",
  "rule": "sample_tool:denylist_check",
  "severity": "high",
  "level": "block",
  "tags": ["sample", "denylist"]
}
```
### Log Fields

- `timestamp` – when the action was evaluated
- `agent_id` – the agent issuing the action
- `tool` – tool name
- `allowed` – whether the action was permitted
- `reason` – why it was allowed or blocked
- `rule` – the rule that applied (if any)
- `severity` – severity of the violation
- `level` – enforcement level (`block`, `warn`)
- `tags` – categories attached to the rule
- `extra` – additional metadata (e.g., missing fields, matched text)
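Because the JSONL log stores one JSON object per line, it can be post-processed with the standard library alone. A minimal sketch that surfaces only blocked actions, using the default log path from the policy file above:

```python
import json

blocked = []
with open("safentic/logs/json_logs/safentic_audit.jsonl") as fh:
    for line in fh:
        if not line.strip():
            continue  # skip blank lines
        entry = json.loads(line)
        if not entry.get("allowed", True):
            blocked.append(entry)

for entry in blocked:
    print(f'{entry["timestamp"]} {entry["tool"]}: {entry["reason"]} (rule={entry.get("rule")})')
```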
## CLI Commands

Safentic ships with a CLI for validating policies, running one-off checks, and inspecting logs:

```bash
# Validate a policy file
safentic validate-policy --policy config/policy.yaml --strict

# Run a one-off tool check
safentic check-tool --tool sample_tool \
  --input-json '{"body": "some text"}' \
  --policy config/policy.yaml

# Tail the audit log (JSONL by default)
safentic logs tail --path safentic/logs/json_logs/safentic_audit.jsonl -f
```
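The same commands are easy to run from CI or a test suite. A minimal sketch using `subprocess` with only the flags shown above; it assumes `validate-policy` exits non-zero when validation fails, as CLI tools conventionally do:

```python
import subprocess

# Fail the build if the policy does not validate.
subprocess.run(
    ["safentic", "validate-policy", "--policy", "config/policy.yaml", "--strict"],
    check=True,  # raises CalledProcessError on a non-zero exit code
)
```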
## Environment Variables

Set these before running Safentic:

- `OPENAI_API_KEY` – required for rules that use `llm_verifier` (e.g., GPT-4).
- `SAFENTIC_POLICY_PATH` – path to your `policy.yaml` (default: `config/policy.yaml`).
- `SAFENTIC_LOG_PATH` – override the default text audit log path.
- `SAFENTIC_JSON_LOG_PATH` – override the default JSONL audit log path.
- `LOG_LEVEL` – optional; sets verbosity (DEBUG, INFO, etc.).
## Supported Stacks

Safentic integrates with frameworks like LangChain, AutoGen, and MCP by wrapping the tool dispatcher rather than modifying the model or prompts.
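For any framework with a single tool-dispatch point, the integration pattern looks the same: route each call through `layer.call_tool` instead of invoking the tool directly. A framework-agnostic sketch; the `safe_dispatch` hook is hypothetical, not a Safentic API:

```python
from typing import Any

def safe_dispatch(tool_name: str, args: dict[str, Any]) -> Any:
    """Hypothetical dispatcher hook: route every tool call through Safentic.

    `layer` is the SafetyLayer from the quickstart; per the quickstart it
    raises when the policy blocks a call and otherwise returns the result.
    """
    return layer.call_tool(tool_name, args)

# Register safe_dispatch wherever your framework executes tools,
# e.g. as the tool-execution callback, instead of the raw tool function.
```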