# PromptGuard Python SDK
Drop-in security for AI applications. No code changes required.
## Installation

```bash
pip install promptguard-sdk
```
## Two Ways to Secure Your App
### Option 1: Auto-Instrumentation (Recommended for Frameworks)
One line secures every LLM call in your application, regardless of which framework you use (LangChain, CrewAI, AutoGen, LlamaIndex, Haystack, Semantic Kernel, or direct SDK usage):
```python
import promptguard

promptguard.init(api_key="pg_xxx")

# That's it. Every LLM call is now secured.
# Works with ANY framework built on openai, anthropic, google-generativeai, cohere, or boto3.

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# ^^ Scanned by PromptGuard before reaching OpenAI
```
Supported SDKs (auto-detected and patched):
| SDK | Frameworks Covered |
|---|---|
| `openai` | LangChain, CrewAI, AutoGen, Semantic Kernel, direct usage |
| `anthropic` | LangChain (ChatAnthropic), direct usage |
| `google-generativeai` | LangChain, LlamaIndex, direct usage |
| `cohere` | Haystack, LangChain, direct usage |
| `boto3` (Bedrock) | AWS-native apps (Claude, Titan, Llama on Bedrock) |
Modes:
```python
# Enforce mode (default) - blocks threats
promptguard.init(api_key="pg_xxx", mode="enforce")

# Monitor mode - logs threats without blocking (shadow mode)
promptguard.init(api_key="pg_xxx", mode="monitor")

# Scan responses too
promptguard.init(api_key="pg_xxx", scan_responses=True)

# Fail-closed (block if Guard API is unreachable)
promptguard.init(api_key="pg_xxx", fail_open=False)
```
Shutdown:
```python
promptguard.shutdown()  # Removes all patches, closes connections
```
### Option 2: Proxy Mode (Drop-in Replacement)
If you prefer the proxy approach, just swap your client:
```python
# Before
from openai import OpenAI
client = OpenAI()

# After
from promptguard import PromptGuard
client = PromptGuard(api_key="pg_xxx")

# Your existing code works unchanged!
```
## Framework-Specific Integrations
For deeper integration with richer context (chain names, tool calls, agent steps), use framework-specific callbacks alongside or instead of auto-instrumentation:
### LangChain
```python
from promptguard.integrations.langchain import PromptGuardCallbackHandler

handler = PromptGuardCallbackHandler(api_key="pg_xxx")

# Attach to an LLM
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# Or use globally with any chain
chain.invoke({"input": "..."}, config={"callbacks": [handler]})
```
The handler scans:
- `on_llm_start` / `on_chat_model_start` - prompts before the LLM call
- `on_llm_end` - responses after the LLM call
- `on_tool_start` - tool inputs for injection attempts
- `on_chain_start` / `on_chain_end` - tracks chain context
### CrewAI
```python
from crewai import Crew, Agent, Task
from promptguard.integrations.crewai import PromptGuardGuardrail

pg = PromptGuardGuardrail(api_key="pg_xxx")

crew = Crew(
    agents=[...],
    tasks=[...],
    before_kickoff=pg.before_kickoff,
    after_kickoff=pg.after_kickoff,
)
crew.kickoff(inputs={"topic": "AI safety"})
```
You can also wrap individual tools:
```python
from promptguard.integrations.crewai import secure_tool
from crewai.tools import BaseTool

@secure_tool(api_key="pg_xxx")
class SearchTool(BaseTool):
    name = "search"
    description = "Search the web"

    def _run(self, query: str) -> str:
        ...
```
### LlamaIndex
```python
from promptguard.integrations.llamaindex import PromptGuardCallbackHandler
from llama_index.core.callbacks import CallbackManager
from llama_index.core import Settings

pg_handler = PromptGuardCallbackHandler(api_key="pg_xxx")
Settings.callback_manager = CallbackManager([pg_handler])
# All LlamaIndex queries are now scanned
```
## Standalone Guard API
For any language or framework, call the Guard API directly:
```python
from promptguard import GuardClient

guard = GuardClient(api_key="pg_xxx")

# Scan before sending to LLM
decision = guard.scan(
    messages=[{"role": "user", "content": "Hello!"}],
    direction="input",
    model="gpt-4o",
)

if decision.blocked:
    print(f"Blocked: {decision.threat_type}")
elif decision.redacted:
    # Use decision.redacted_messages instead of original
    print("Content was redacted")
else:
    # Safe to proceed
    pass
```
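The three branches above can be folded into one helper. This is an illustrative sketch, not part of the SDK: the `Decision` stand-in and the `apply_decision` helper are hypothetical, with field names mirroring the snippet above.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for the GuardClient decision object; the field
# names (blocked, redacted, redacted_messages) mirror the snippet above.
@dataclass
class Decision:
    blocked: bool = False
    redacted: bool = False
    redacted_messages: list = field(default_factory=list)

def apply_decision(decision: Decision, messages: list):
    """Return the messages safe to forward to the LLM, or None if blocked."""
    if decision.blocked:
        return None
    if decision.redacted:
        return decision.redacted_messages
    return messages

messages = [{"role": "user", "content": "Hello!"}]
print(apply_decision(Decision(), messages))  # safe: original messages pass through
```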
Or via HTTP directly (any language):
```bash
curl -X POST https://api.promptguard.co/api/v1/guard \
  -H "Authorization: Bearer pg_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-4o"
  }'
```
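For a quick sanity check of the payload shape in plain Python, the same request can be assembled with the standard library. This sketch only builds the request object; actually sending it (via `urllib.request.urlopen`) requires a valid API key, so that step is omitted.

```python
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-4o",
}
req = urllib.request.Request(
    "https://api.promptguard.co/api/v1/guard",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer pg_xxx",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
print(req.get_method(), req.full_url)
```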
## Security Scanning
```python
from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Scan content for threats
result = pg.security.scan("Ignore previous instructions...")
if result["blocked"]:
    print(f"Threat detected: {result['reason']}")
```
### PII Redaction
```python
result = pg.security.redact(
    "My email is john@example.com and SSN is 123-45-6789"
)
print(result["redacted"])
# Output: "My email is [EMAIL] and SSN is [SSN]"
```
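The redaction itself runs server-side. As a rough illustration of the input/output contract only (the actual detection logic is not regex-based as far as this sketch is concerned, and the patterns below are assumptions), a local stand-in might look like:

```python
import re

# Illustrative patterns only; the real service's detectors may differ.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("My email is john@example.com and SSN is 123-45-6789"))
# -> "My email is [EMAIL] and SSN is [SSN]"
```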
## Red Team Testing
```python
from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Run the autonomous red team agent (LLM-powered mutation)
report = pg.redteam.run_autonomous(
    budget=200,
    target_preset="support_bot:strict",
)
print(f"Grade: {report['grade']}, Bypass rate: {report['bypass_rate']:.0%}")

# Get Attack Intelligence stats
stats = pg.redteam.intelligence_stats()
print(f"Total patterns: {stats['total_patterns']}")
```
The async client mirrors the same methods:
```python
async with PromptGuardAsync(api_key="pg_xxx") as pg:
    report = await pg.redteam.run_autonomous(budget=200)
    stats = await pg.redteam.intelligence_stats()
```
## Async Support
The `PromptGuardAsync` client provides a fully asynchronous interface for non-blocking usage in async applications:
```python
from promptguard import PromptGuardAsync

async with PromptGuardAsync(api_key="pg_xxx") as pg:
    response = await pg.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )

    # Async security scanning
    result = await pg.security.scan("Check this content")

    # Async PII redaction
    redacted = await pg.security.redact("My email is john@example.com")
```
The async client mirrors the synchronous API: every method available on `PromptGuard` has an await-able counterpart on `PromptGuardAsync`.
## Retry Logic
Both `PromptGuard` and `PromptGuardAsync` support configurable retry behavior for transient failures:
```python
from promptguard import PromptGuard

pg = PromptGuard(
    api_key="pg_xxx",
    max_retries=3,    # Number of retry attempts (default: 2)
    retry_delay=0.5,  # Base delay in seconds between retries (default: 0.25)
)
```
Retries use exponential backoff starting from `retry_delay`. Only transient errors (network timeouts, 5xx responses) are retried; client errors (4xx) fail immediately.
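Under those settings the delay schedule is easy to compute. A minimal sketch, assuming a plain doubling schedule with no jitter (the SDK's exact schedule may differ):

```python
def backoff_delays(max_retries: int, retry_delay: float) -> list:
    # Assumed: the delay doubles on each successive attempt,
    # starting at retry_delay for the first retry.
    return [retry_delay * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays(3, 0.5))   # max_retries=3, retry_delay=0.5 -> [0.5, 1.0, 2.0]
print(backoff_delays(2, 0.25))  # the documented defaults -> [0.25, 0.5]
```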
## Embeddings
Scan and secure embedding requests through the proxy:
```python
from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

response = pg.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)
print(response.data[0].embedding[:5])
```
Batch embedding requests are also supported:
```python
response = pg.embeddings.create(
    model="text-embedding-3-small",
    input=["First document", "Second document", "Third document"],
)
for item in response.data:
    print(f"Index {item.index}: {len(item.embedding)} dimensions")
```
## Configuration
```python
from promptguard import PromptGuard, Config

config = Config(
    api_key="pg_xxx",
    base_url="https://api.promptguard.co/api/v1/proxy",
    timeout=30.0,
)
pg = PromptGuard(config=config)
```
### Environment Variables
```bash
export PROMPTGUARD_API_KEY="pg_xxx"
export PROMPTGUARD_BASE_URL="https://api.promptguard.co/api/v1"
```
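A common SDK convention is for an explicit argument to take precedence over the environment variable; this resolver is an assumption about that behavior, not taken from the SDK source:

```python
import os

def resolve_api_key(explicit=None):
    # Assumed precedence: explicit argument first, then PROMPTGUARD_API_KEY.
    return explicit or os.environ.get("PROMPTGUARD_API_KEY")

os.environ["PROMPTGUARD_API_KEY"] = "pg_from_env"
print(resolve_api_key())          # falls back to the environment variable
print(resolve_api_key("pg_arg"))  # explicit argument wins
```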
## Error Handling
```python
import promptguard
from promptguard import PromptGuardBlockedError

# Auto-instrumentation
promptguard.init(api_key="pg_xxx")

try:
    response = client.chat.completions.create(...)
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Event ID: {e.decision.event_id}")
```
## License
MIT
## File details

Details for the file `promptguard_sdk-1.7.0.tar.gz`.

### File metadata

- Download URL: promptguard_sdk-1.7.0.tar.gz
- Upload date:
- Size: 37.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `134803a5adaa5ae4d3c5e541f83d8ba02a7837156391054c2663b106ccd4659b` |
| MD5 | `b6dc4c6bca4ad0028db056d528e909f0` |
| BLAKE2b-256 | `f6fe06f0c8b7ea026bf4bb3f4abd478002ce6e662aa99681200f330786670c0a` |
### Provenance

The following attestation bundles were made for `promptguard_sdk-1.7.0.tar.gz`:

Publisher: release.yml on acebot712/promptguard-python

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: promptguard_sdk-1.7.0.tar.gz
- Subject digest: 134803a5adaa5ae4d3c5e541f83d8ba02a7837156391054c2663b106ccd4659b
- Sigstore transparency entry: 1246163879
- Sigstore integration time:
- Permalink: acebot712/promptguard-python@2b104987fe37725eb5e9712cdb93a61904670aad
- Branch / Tag: refs/tags/v1.7.0
- Owner: https://github.com/acebot712
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2b104987fe37725eb5e9712cdb93a61904670aad
- Trigger Event: release
## File details

Details for the file `promptguard_sdk-1.7.0-py3-none-any.whl`.

### File metadata

- Download URL: promptguard_sdk-1.7.0-py3-none-any.whl
- Upload date:
- Size: 32.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `39ed14a50e2e63e4f52b44a6221a0201c03f92c266449d97a48d8d4b4733d390` |
| MD5 | `428b07ee428fca8d2b5227da704f3be9` |
| BLAKE2b-256 | `8cd395ff2964383a64330e5303f7045d7b45e4985f35a3dec2512209ae64c4ff` |
### Provenance

The following attestation bundles were made for `promptguard_sdk-1.7.0-py3-none-any.whl`:

Publisher: release.yml on acebot712/promptguard-python

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: promptguard_sdk-1.7.0-py3-none-any.whl
- Subject digest: 39ed14a50e2e63e4f52b44a6221a0201c03f92c266449d97a48d8d4b4733d390
- Sigstore transparency entry: 1246163884
- Sigstore integration time:
- Permalink: acebot712/promptguard-python@2b104987fe37725eb5e9712cdb93a61904670aad
- Branch / Tag: refs/tags/v1.7.0
- Owner: https://github.com/acebot712
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2b104987fe37725eb5e9712cdb93a61904670aad
- Trigger Event: release