Safentic SDK for runtime enforcement and tool-call interception in AI agents.


Safentic SDK

Safentic is a runtime guardrail SDK for agentic AI systems.

It intercepts and evaluates tool calls between agent intent and execution, enforcing custom safety policies and generating structured audit logs for compliance.

Key Features

  • Runtime Protection: Intercepts tool calls at the action boundary
  • Policy-Driven: Define safety rules in simple YAML configuration
  • Audit Logging: Structured JSON logs for compliance and debugging
  • Framework Agnostic: Works with LangChain, AutoGen, MCP, and custom agents
  • Easy Integration: Minimal code changes to existing agents

Installation

pip install safentic

Quick Start (5 minutes)

1. Set Up Your Environment

Before using Safentic, configure the required API key:

export OPENAI_API_KEY="your-openai-api-key"

2. Create a Policy File

Create config/policy.yaml to define your safety rules:

tools:
  sample_tool:
    rules:
      - type: llm_verifier
        description: "Block outputs that contain disallowed terms"
        instruction: "Does this text contain disallowed terms or references?"
        model: gpt-4
        fields: [body]
        response_format: boolean
        response_trigger: "yes"
        match_mode: exact
        level: block
        severity: high
        tags: [denylist]

  another_tool:
    rules: []  # Explicitly allow all actions for this tool

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"
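Conceptually, an llm_verifier rule asks a model the rule's instruction about the declared fields and compares the answer to response_trigger. The sketch below illustrates that flow with a stand-in callable in place of a real LLM call; the function and names are hypothetical, not the SDK's internals:

```python
# Illustrative only: `ask_model` stands in for a real LLM call.
def evaluate_llm_verifier(rule, payload, ask_model):
    """Return True when the rule's response_trigger fires for this payload."""
    # Check only the fields the rule declares (here: body).
    text = " ".join(str(payload.get(field, "")) for field in rule["fields"])
    answer = ask_model(rule["instruction"], text).strip().lower()
    trigger = str(rule["response_trigger"]).lower()
    if rule.get("match_mode", "exact") == "exact":
        return answer == trigger
    return trigger in answer

rule = {
    "instruction": "Does this text contain disallowed terms or references?",
    "fields": ["body"],
    "response_trigger": "yes",
    "match_mode": "exact",
}

# Stand-in "model" that flags the word "secret".
fake_model = lambda instruction, text: "yes" if "secret" in text else "no"

print(evaluate_llm_verifier(rule, {"body": "contains a secret"}, fake_model))  # True
print(evaluate_llm_verifier(rule, {"body": "all clear"}, fake_model))          # False
```

With `level: block`, a triggered rule would translate into a rejected tool call; with `level: warn`, the call would proceed but be logged.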

3. Wrap Your Agent with SafetyLayer

Import and initialize Safentic in your application:

from safentic.layer import SafetyLayer
from your_agent_module import YourAgentClass

# Initialize your existing agent
agent = YourAgentClass()

# Wrap it with Safentic
safety_layer = SafetyLayer(
    agent=agent,
    api_key="your-api-key",  
    agent_id="demo-agent"
)

4. Call Tools Through the Safety Layer

Use the wrapped agent to execute tool calls safely:

try:
    result = safety_layer.call_tool("some_tool", {"body": "example input"})
    print("Allowed:", result)
except Exception as e:
    print("Blocked:", str(e))

Example Output:

Blocked: Blocked by policy

Complete Example

Here's a complete integration example:

from safentic.layer import SafetyLayer

# Step 1: Create or import your agent
class MyAgent:
    def execute_tool(self, tool_name, params):
        # Your tool logic here
        return f"Executed {tool_name}"

agent = MyAgent()

# Step 2: Initialize Safentic
safety_layer = SafetyLayer(
    agent=agent,
    api_key="your-api-key",
    agent_id="my-agent"
)

# Step 3: Execute tools through Safentic
try:
    result = safety_layer.call_tool("delete_file", {"path": "/sensitive/data"})
    print(f"Success: {result}")
except Exception as e:
    print(f"Action blocked: {e}")
    # Log to your monitoring system

Configuring Your Policy File

  • Safentic enforces rules defined in a YAML configuration file (e.g. policy.yaml).
  • By default, it looks for config/policy.yaml, or you can set the path with:
export SAFENTIC_POLICY_PATH=/path/to/policy.yaml

Policy Schema

Safentic currently supports a single rule type: llm_verifier.

tools:
  <tool_name>:
    rules:
      - type: llm_verifier
        description: "<short description of what this rule enforces>"
        instruction: "<prompt instruction given to the verifier LLM>"
        model: "<llm model name, e.g. gpt-4>"
        fields: [<list of input fields to check>]
        reference_file: "<path to reference text file, optional>"
        response_format: boolean
        response_trigger: "yes"
        match_mode: exact
        level: block         # enforcement level: block | warn
        severity: high       # severity: low | medium | high
        tags: [<labels for filtering/searching logs>]

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"
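For a quick sanity check before deployment, a policy can be validated after parsing the YAML (e.g. with yaml.safe_load). The helper below is illustrative and not part of the Safentic package; it checks only key presence and the enumerated level/severity values:

```python
# Illustrative helper, not part of the Safentic package. Assumes the policy
# has already been parsed into a dict (e.g. with yaml.safe_load).
REQUIRED_RULE_KEYS = {
    "type", "instruction", "model", "fields",
    "response_format", "response_trigger", "level", "severity",
}

def validate_policy(doc):
    """Return a list of problems found in a parsed policy mapping."""
    errors = []
    for tool, spec in (doc.get("tools") or {}).items():
        for i, rule in enumerate(spec.get("rules") or []):
            missing = REQUIRED_RULE_KEYS - rule.keys()
            if missing:
                errors.append(f"{tool} rule {i}: missing keys {sorted(missing)}")
            if rule.get("level") not in ("block", "warn"):
                errors.append(f"{tool} rule {i}: level must be block or warn")
            if rule.get("severity") not in ("low", "medium", "high"):
                errors.append(f"{tool} rule {i}: severity must be low, medium, or high")
    return errors

incomplete = {"tools": {"sample_tool": {"rules": [{"type": "llm_verifier", "level": "block"}]}}}
print(validate_policy(incomplete))  # reports missing keys and a bad severity
```

For authoritative validation, prefer the built-in CLI command shown below (safentic validate-policy).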

Example Policy

tools:
  sample_tool:
    rules:
      - type: llm_verifier
        description: "Block outputs that contain disallowed terms"
        instruction: "Does this text contain disallowed terms or references?"
        model: gpt-4
        fields: [body]
        reference_file: sample_guidelines.txt
        response_format: boolean
        response_trigger: "yes"
        match_mode: exact
        level: block
        severity: high
        tags: [sample, denylist]

  another_tool:
    rules: []  # Explicitly allow all actions for this tool

logging:
  level: INFO
  destination: "safentic/logs/safentic_audit.log"
  jsonl: "safentic/logs/safentic_audit.jsonl"

Audit Logs

Every decision is logged with context for compliance and debugging:

{
  "timestamp": "2025-09-09T14:25:11Z",
  "agent_id": "demo-agent",
  "tool": "sample_tool",
  "allowed": false,
  "reason": "Blocked by policy",
  "rule": "sample_tool:denylist_check",
  "severity": "high",
  "level": "block",
  "tags": ["sample", "denylist"]
}

Log Fields

Field      Description
timestamp  When the action was evaluated
agent_id   The agent issuing the action
tool       Tool name
allowed    Whether the action was permitted (true/false)
reason     Why it was allowed or blocked
rule       The rule that applied (if any)
severity   Severity of the violation (low, medium, high)
level      Enforcement level (block, warn)
tags       Categories attached to the rule
extra      Additional metadata (e.g., missing fields, matched text)
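Because the JSONL log is one JSON object per line, it is straightforward to post-process. A small illustrative script (not part of the SDK) that extracts blocked decisions using the fields above:

```python
import json

# Illustrative post-processing of the JSONL audit log (one JSON object per line).
def blocked_entries(lines):
    """Yield (tool, rule, severity) for every denied action."""
    for line in lines:
        entry = json.loads(line)
        if not entry.get("allowed"):
            yield entry["tool"], entry.get("rule"), entry.get("severity")

log = [
    '{"timestamp": "2025-09-09T14:25:11Z", "agent_id": "demo-agent", '
    '"tool": "sample_tool", "allowed": false, "reason": "Blocked by policy", '
    '"rule": "sample_tool:denylist_check", "severity": "high", "level": "block", '
    '"tags": ["sample", "denylist"]}',
    '{"timestamp": "2025-09-09T14:26:02Z", "agent_id": "demo-agent", '
    '"tool": "another_tool", "allowed": true, "reason": "no rules"}',
]

for tool, rule, severity in blocked_entries(log):
    print(tool, rule, severity)  # sample_tool sample_tool:denylist_check high
```

In production you would iterate over the lines of safentic/logs/safentic_audit.jsonl rather than an in-memory list.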

CLI Commands

Safentic ships with a CLI for validating policies, running one-off checks, and inspecting logs:

Validate a policy file

safentic validate-policy --policy config/policy.yaml --strict

Run a one-off tool check

safentic check-tool --tool sample_tool \
  --input-json '{"body": "some text"}' \
  --policy config/policy.yaml

Tail the audit log (JSONL by default)

safentic logs tail --path safentic/logs/safentic_audit.jsonl -f

Environment Variables

Set these before running Safentic:

Variable                Required  Description
OPENAI_API_KEY          Yes       API key for OpenAI models used in llm_verifier rules
SAFENTIC_POLICY_PATH    No        Path to your policy.yaml (default: config/policy.yaml)
SAFENTIC_LOG_PATH       No        Override the default text audit log path
SAFENTIC_JSON_LOG_PATH  No        Override the default JSONL audit log path
LOG_LEVEL               No        Sets logging verbosity (DEBUG, INFO, WARNING, ERROR)
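A pre-flight check mirroring this table can catch misconfiguration before an agent starts. The helper below is an illustrative sketch, not part of the Safentic package:

```python
import os

# Illustrative pre-flight check, not part of the Safentic package.
def check_environment(env=None):
    """Return (policy_path, log_level, problems) for the given mapping."""
    env = os.environ if env is None else env
    problems = []
    if not env.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is required for llm_verifier rules")
    log_level = env.get("LOG_LEVEL", "INFO")
    if log_level not in ("DEBUG", "INFO", "WARNING", "ERROR"):
        problems.append(f"unrecognized LOG_LEVEL: {log_level}")
    policy_path = env.get("SAFENTIC_POLICY_PATH", "config/policy.yaml")
    return policy_path, log_level, problems

print(check_environment({"OPENAI_API_KEY": "sk-..."}))
# ('config/policy.yaml', 'INFO', [])
```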

Supported Frameworks

Safentic integrates with popular agent frameworks by wrapping the tool dispatcher:

  • LangChain: Wrap your agent's tool execution
  • AutoGen: Intercept tool calls from agent conversations
  • MCP: Compatible with Model Context Protocol servers
  • Custom Agents: Works with any agent that delegates tool calls
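The only contract in the examples above is a single dispatch method. If your agent exposes a different method name, a thin adapter is enough; the class and method names below are hypothetical, not Safentic or framework APIs:

```python
# Hypothetical adapter: `DispatchAdapter`, `LegacyAgent`, and `run_tool`
# are illustrative names, not Safentic or framework APIs.
class DispatchAdapter:
    """Adapt an agent exposing run_tool(name, **kwargs) to the
    execute_tool(name, params) shape used in the examples above."""

    def __init__(self, agent):
        self._agent = agent

    def execute_tool(self, tool_name, params):
        # Forward the params dict as keyword arguments.
        return self._agent.run_tool(tool_name, **params)

class LegacyAgent:
    def run_tool(self, name, **kwargs):
        return f"{name} ran with {sorted(kwargs)}"

adapter = DispatchAdapter(LegacyAgent())
print(adapter.execute_tool("send_email", {"body": "hi"}))  # send_email ran with ['body']
```

The adapter can then be passed to SafetyLayer exactly as in the Quick Start.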
