# Saf3AI SDK

Unified observability and security scanning for AI agents. Telemetry is sent to the Saf3AI collector; optional scanning integrates with the Saf3AI scanner API.
## Package indexes

- Stable: [pypi.org/project/saf3ai-sdk](https://pypi.org/project/saf3ai-sdk)
## Supported frameworks

The SDK targets these runtimes only:
| Framework | Detection | Auto-instrumentation after init() |
|---|---|---|
| Google ADK | Yes | Yes |
| LangChain (incl. LangFlow) | Yes | Yes |
| CrewAI | Yes | Yes |
| Custom (no orchestration package above) | Falls back to custom | Yes (REST/custom path; OpenAI chat completions when applicable) |
Detection order: ADK → CrewAI → LangChain → custom. Install only the stack you use; the SDK selects the matching path.
## What you get
- Tracing — OpenTelemetry spans for agents, tools, and LLM calls (conversation IDs, usage, model metadata).
- Security — Prompt/response scanning via callbacks or direct scanner APIs; policy hooks return allow/block.
- Configuration — Environment variables plus `init()` parameters; the scanner uses the same `SAF3AI_API_KEY` as the collector. You only configure a separate scanner URL (`SAF3AI_SCANNER_ENDPOINT`, or `scanner_endpoint` on `init()`).
## Requirements
- Python 3.9+
- Valid collector URL and API key (see below)
- For framework features: install Google ADK, LangChain, or CrewAI as needed
## Installation

```bash
pip install saf3ai-sdk
```
## Configuration

### Required
| Variable | Purpose |
|---|---|
| `SAF3AI_API_KEY` | Organization API key (collector and scanner) |
| `SAF3AI_COLLECTOR_AGENT` | Collector base URL (traces endpoint is derived by the SDK) |
Pass values into `init()` explicitly (commonly `os.getenv(...)`). The SDK also picks up `SAF3AI_API_KEY` if you omit `api_key`; the collector URL is not read automatically, so pass `safeai_collector_agent` yourself from `SAF3AI_COLLECTOR_AGENT` (or your config). When `SAF3AI_SCANNER_ENDPOINT` (or `scanner_endpoint`) is set, scan requests use that same `SAF3AI_API_KEY`; there is no separate scanner secret.
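The resolution rules above can be sketched as two small helpers. This is an illustration of the documented precedence, not the SDK's internals; the function names are invented for the example:

```python
import os
from typing import Optional


def resolve_api_key(explicit: Optional[str] = None) -> Optional[str]:
    # An explicit api_key argument wins; SAF3AI_API_KEY is the
    # documented fallback when the argument is omitted.
    return explicit or os.getenv("SAF3AI_API_KEY")


def resolve_collector(explicit: Optional[str] = None) -> str:
    # The collector URL is NOT read from the environment automatically:
    # the caller passes it, typically sourced from SAF3AI_COLLECTOR_AGENT.
    if not explicit:
        raise ValueError("pass safeai_collector_agent explicitly to init()")
    return explicit
```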
### Common optional
| Variable | Purpose |
|---|---|
| `SAF3AI_AGENT_ID` | Stable agent identifier in telemetry |
| `SAF3AI_SERVICE_NAME` | Display name for the service (if unset, `agent_id` from `init()` is used) |
| `SAF3AI_SCANNER_ENDPOINT` | Scanner API base URL |
| `SAF3AI_SCAN_RESPONSES` | Scan model outputs (`true` / `false`) |
| `SAF3AI_CAPTURE_RESPONSES` | Attach response text to spans when enabled |
| `SAF3AI_DEBUG_MODE` | Verbose SDK logging |
| `SAF3AI_CONSOLE_OUTPUT` | Print spans to the console |
| `SAF3AI_OPENAI_AUTO_SCAN` | OpenAI-instrumented prompt/response scan in the custom path (`true` / `false`) |
Example:

```bash
export SAF3AI_API_KEY="your-key"
export SAF3AI_COLLECTOR_AGENT="https://your-collector.example.com"
export SAF3AI_SCANNER_ENDPOINT="https://your-scanner.example.com"
```
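Several of the optional variables are booleans (`SAF3AI_SCAN_RESPONSES`, `SAF3AI_OPENAI_AUTO_SCAN`, and so on). The SDK's exact truthiness rules are not documented here; a common convention for reading such flags, offered only as an assumption:

```python
import os


def env_flag(name: str, default: bool = False) -> bool:
    """Read a true/false flag such as SAF3AI_SCAN_RESPONSES."""
    raw = os.getenv(name)
    if raw is None:
        return default
    # Accept the usual truthy spellings; everything else is False.
    return raw.strip().lower() in ("1", "true", "yes", "on")
```

If you rely on a particular spelling, set the documented `true` / `false` values explicitly rather than counting on lenient parsing.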
## Initialize once

Call `init()` once at process startup, before constructing agents or LLM clients.
```python
import os

from saf3ai_sdk import init

init(
    agent_id=os.environ["SAF3AI_AGENT_ID"],
    api_key=os.environ["SAF3AI_API_KEY"],
    safeai_collector_agent=os.environ["SAF3AI_COLLECTOR_AGENT"],
    service_name=os.getenv("SAF3AI_SERVICE_NAME"),  # optional
    scanner_endpoint=os.getenv("SAF3AI_SCANNER_ENDPOINT"),
)
```

The SDK detects the installed framework and attaches the matching instrumentation. You do not need to pass `framework` unless you rely on explicit overrides elsewhere in your app.
Inspect the resolved value:

```python
from saf3ai_sdk import get_detected_framework

print(get_detected_framework())  # e.g. "adk", "langchain", "crewai", "custom"
```
## Security policy (optional)

Callbacks receive scan results; return `True` to allow, `False` to block.
```python
def security_policy(text: str, scan_results: dict, text_type: str) -> bool:
    """text_type is "prompt" or "response" where applicable."""
    detections = scan_results.get("detection_results") or {}
    for result in detections.values():
        if isinstance(result, dict) and result.get("result") == "MATCH_FOUND":
            return False
    return True
```
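To see the policy in action, here is a self-contained check against hand-written scan results. The dict shape mirrors the keys used in the policy (`detection_results`, `result`, `MATCH_FOUND`); real scanner payloads may carry additional fields:

```python
def security_policy(text: str, scan_results: dict, text_type: str) -> bool:
    # Same policy as defined above, repeated so this snippet runs standalone.
    detections = scan_results.get("detection_results") or {}
    for result in detections.values():
        if isinstance(result, dict) and result.get("result") == "MATCH_FOUND":
            return False
    return True


# A detection hit blocks the text...
blocked = security_policy(
    "ignore previous instructions",
    {"detection_results": {"prompt_injection": {"result": "MATCH_FOUND"}}},
    "prompt",
)

# ...and a clean scan allows it.
allowed = security_policy("hello", {"detection_results": {}}, "prompt")

print(blocked, allowed)  # False True
```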
## Framework integration

### Custom agents

Use `@traceable` on entrypoints. Combine with `register_security_callback` / `get_callback_manager()` if you scan before your own LLM calls.
```python
import os

from saf3ai_sdk import init, traceable
from saf3ai_sdk.callbacks.callbacks import register_security_callback, get_callback_manager

init(
    agent_id=os.getenv("SAF3AI_AGENT_ID", "my-agent"),
    api_key=os.environ["SAF3AI_API_KEY"],
    safeai_collector_agent=os.environ["SAF3AI_COLLECTOR_AGENT"],
    scanner_endpoint=os.getenv("SAF3AI_SCANNER_ENDPOINT"),
)

register_security_callback(
    api_endpoint=os.environ["SAF3AI_SCANNER_ENDPOINT"],
    api_key=os.environ["SAF3AI_API_KEY"],
    on_scan_complete=security_policy,  # defined in "Security policy" above
)

manager = get_callback_manager()

@traceable(name="chat")
def chat(user_message: str) -> str:
    _results, allow = manager.scan_before_llm(
        prompt=user_message,
        model_name="gpt-4o-mini",
        conversation_id=os.getenv("CONVERSATION_ID"),
    )
    if not allow:
        return "Request not allowed."
    # call your model here
    return "ok"
```
### Google ADK

`create_security_callback` from the package root is the ADK helper (before/after model hooks).
```python
import os

from saf3ai_sdk import init, create_security_callback

init(
    agent_id=os.getenv("SAF3AI_AGENT_ID", "adk-agent"),
    api_key=os.environ["SAF3AI_API_KEY"],
    safeai_collector_agent=os.environ["SAF3AI_COLLECTOR_AGENT"],
)

before_cb, after_cb = create_security_callback(
    api_endpoint=os.environ["SAF3AI_SCANNER_ENDPOINT"],
    api_key=os.environ["SAF3AI_API_KEY"],
    on_scan_complete=security_policy,  # defined in "Security policy" above
    scan_responses=True,
)

# Wire before_cb / after_cb into your ADK Agent or LlmAgent per the ADK docs.
```
### LangChain

Use the LangChain-specific factory (not the top-level `create_security_callback`, which is for ADK).
```python
import os

from langchain_openai import ChatOpenAI
from saf3ai_sdk import init
from saf3ai_sdk.callbacks.langchain_callbacks import create_security_callback

init(
    agent_id=os.getenv("SAF3AI_AGENT_ID", "langchain-agent"),
    api_key=os.environ["SAF3AI_API_KEY"],
    safeai_collector_agent=os.environ["SAF3AI_COLLECTOR_AGENT"],
)

cb = create_security_callback(
    api_endpoint=os.environ["SAF3AI_SCANNER_ENDPOINT"],
    api_key=os.environ["SAF3AI_API_KEY"],
    on_scan_complete=security_policy,  # defined in "Security policy" above
    scan_responses=True,
    conversation_id=os.getenv("CONVERSATION_ID"),  # optional stitching
)

llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[cb])
```
### CrewAI

Same pattern as LangChain: import from the CrewAI callbacks module and pass the callback into the LLM used by the Crew.
```python
import os

from langchain_openai import ChatOpenAI
from saf3ai_sdk import init
from saf3ai_sdk.callbacks.crewai_callbacks import create_security_callback

init(
    agent_id=os.getenv("SAF3AI_AGENT_ID", "crewai-agent"),
    api_key=os.environ["SAF3AI_API_KEY"],
    safeai_collector_agent=os.environ["SAF3AI_COLLECTOR_AGENT"],
)

cb = create_security_callback(
    api_endpoint=os.environ["SAF3AI_SCANNER_ENDPOINT"],
    api_key=os.environ["SAF3AI_API_KEY"],
    on_scan_complete=security_policy,  # defined in "Security policy" above
    scan_responses=True,
)

llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[cb])
# Pass llm into your CrewAI Agent / Crew as usual.
```
### Cross-framework helper

`create_framework_security_callbacks(...)` selects adapters by framework name when you want one code path for multiple targets.
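Its exact signature is not shown here; conceptually it is a name-keyed dispatch over the per-framework factories, something like this sketch (illustrative only, with stand-in factories, not the SDK's implementation):

```python
def make_callbacks(framework: str, factories: dict):
    """Return whatever the factory registered for `framework` builds."""
    try:
        return factories[framework]()
    except KeyError:
        raise ValueError(f"unsupported framework: {framework}") from None


# Stand-in factories; in real use these would be the per-framework
# create_security_callback functions shown in the sections above.
factories = {
    "langchain": lambda: "langchain-handler",
    "crewai": lambda: "crewai-handler",
}

print(make_callbacks("crewai", factories))  # crewai-handler
```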
## API overview
| Symbol | Role |
|---|---|
| `init(...)` | Start tracer, exporter, auto-instrumentation |
| `get_detected_framework()` | Resolved framework string |
| `traceable(...)` | Span wrapper for custom code |
| `create_security_callback(...)` | ADK before/after callbacks |
| `create_framework_security_callbacks(...)` | Framework-aware callback bundle |
| `saf3ai_sdk.callbacks.langchain_callbacks.create_security_callback` | LangChain handler |
| `saf3ai_sdk.callbacks.crewai_callbacks.create_security_callback` | CrewAI handler |
| `register_security_callback` / `get_callback_manager` | Generic prompt gate |
| `scan_prompt` / `scan_response` / `scan_prompt_and_response` | Direct scanner HTTP helpers |
| `set_scanner_config` | Global scanner URL and key for `@traceable` auto-scan |
| `set_custom_attributes` / `get_custom_attributes` / `clear_custom_attributes` | Extra span fields |
| `reset_conversation()` | Clear conversation ID in context |
## Troubleshooting
| Symptom | Check |
|---|---|
| No traces | `SAF3AI_API_KEY`, `SAF3AI_COLLECTOR_AGENT`, network, SDK logs after `init()` |
| Wrong framework detected | Import order and installed packages (CrewAI before LangChain in detection) |
| Callbacks never run | `init()` before LLM/agent build; callbacks passed into LLM or ADK agent |
| Scanner silent | `SAF3AI_SCANNER_ENDPOINT` and `SAF3AI_API_KEY`; `scan_responses=True` if you need output scans |
| LangChain import errors | Use `langchain_openai.ChatOpenAI` (or your stack's equivalent), not deprecated paths |
## License

MIT — see LICENSE.