
PromptGuard Python SDK

Drop-in security for AI applications. No code changes required.

Installation

pip install promptguard-sdk

Two Ways to Secure Your App

Option 1: Auto-Instrumentation (Recommended for Frameworks)

One line secures every LLM call in your application, regardless of which framework you use (LangChain, CrewAI, AutoGen, LlamaIndex, Haystack, Semantic Kernel, or direct SDK usage):

import promptguard
promptguard.init(api_key="pg_xxx")

# That's it. Every LLM call is now secured.
# Works with ANY framework built on openai, anthropic, google-generativeai, cohere, or boto3.

from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
# ^^ Scanned by PromptGuard before reaching OpenAI

Supported SDKs (auto-detected and patched):

SDK                  Frameworks covered
openai               LangChain, CrewAI, AutoGen, Semantic Kernel, direct usage
anthropic            LangChain (ChatAnthropic), direct usage
google-generativeai  LangChain, LlamaIndex, direct usage
cohere               Haystack, LangChain, direct usage
boto3 (Bedrock)      AWS-native apps (Claude, Titan, Llama on Bedrock)

Modes:

# Enforce mode (default) - blocks threats
promptguard.init(api_key="pg_xxx", mode="enforce")

# Monitor mode - logs threats without blocking (shadow mode)
promptguard.init(api_key="pg_xxx", mode="monitor")

# Scan responses too
promptguard.init(api_key="pg_xxx", scan_responses=True)

# Fail-closed (block if Guard API is unreachable)
promptguard.init(api_key="pg_xxx", fail_open=False)
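
The fail_open flag governs what happens when the Guard API itself cannot be reached. A stdlib-only sketch of the semantics (illustrative, not the SDK's implementation; fail_open=True is presumably the default, given that the example above only sets it to disable it):

```python
def decide_on_scan_error(fail_open: bool) -> str:
    """What to do with a request when the Guard API is unreachable.

    fail_open=True:  let the request through unscanned (availability first).
    fail_open=False: block rather than risk an unscanned call (fail-closed).
    """
    return "allow" if fail_open else "block"

print(decide_on_scan_error(True))   # → allow
print(decide_on_scan_error(False))  # → block
```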

Shutdown:

promptguard.shutdown()  # Removes all patches, closes connections

Option 2: Proxy Mode (Drop-in Replacement)

If you prefer the proxy approach, just swap your client:

# Before
from openai import OpenAI
client = OpenAI()

# After
from promptguard import PromptGuard
client = PromptGuard(api_key="pg_xxx")

# Your existing code works unchanged!

Framework-Specific Integrations

For deeper integration with richer context (chain names, tool calls, agent steps), use framework-specific callbacks alongside or instead of auto-instrumentation:

LangChain

from promptguard.integrations.langchain import PromptGuardCallbackHandler

handler = PromptGuardCallbackHandler(api_key="pg_xxx")

# Attach to an LLM
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# Or use globally with any chain
chain.invoke({"input": "..."}, config={"callbacks": [handler]})

The handler scans:

  • on_llm_start / on_chat_model_start - prompts before the LLM call
  • on_llm_end - responses after the LLM call
  • on_tool_start - tool inputs for injection attempts
  • on_chain_start / on_chain_end - tracks chain context

CrewAI

from crewai import Crew, Agent, Task
from promptguard.integrations.crewai import PromptGuardGuardrail

pg = PromptGuardGuardrail(api_key="pg_xxx")

crew = Crew(
    agents=[...],
    tasks=[...],
    before_kickoff=pg.before_kickoff,
    after_kickoff=pg.after_kickoff,
)

crew.kickoff(inputs={"topic": "AI safety"})

You can also wrap individual tools:

from promptguard.integrations.crewai import secure_tool
from crewai.tools import BaseTool

@secure_tool(api_key="pg_xxx")
class SearchTool(BaseTool):
    name: str = "search"
    description: str = "Search the web"

    def _run(self, query: str) -> str:
        ...

LlamaIndex

from promptguard.integrations.llamaindex import PromptGuardCallbackHandler
from llama_index.core.callbacks import CallbackManager
from llama_index.core import Settings

pg_handler = PromptGuardCallbackHandler(api_key="pg_xxx")
Settings.callback_manager = CallbackManager([pg_handler])

# All LlamaIndex queries are now scanned

Standalone Guard API

For any language or framework, call the Guard API directly:

from promptguard import GuardClient

guard = GuardClient(api_key="pg_xxx")

# Scan before sending to LLM
decision = guard.scan(
    messages=[{"role": "user", "content": "Hello!"}],
    direction="input",
    model="gpt-4o",
)

if decision.blocked:
    print(f"Blocked: {decision.threat_type}")
elif decision.redacted:
    # Use decision.redacted_messages instead of original
    print("Content was redacted")
else:
    # Safe to proceed
    pass

Or via HTTP directly (any language):

curl -X POST https://api.promptguard.co/api/v1/guard \
  -H "Authorization: Bearer pg_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-4o"
  }'
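
For environments where installing an SDK is not an option, the same call can be made from Python's standard library alone. A sketch (the request is built but not sent here; uncomment the last line to actually call the API):

```python
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-4o",
}
req = urllib.request.Request(
    "https://api.promptguard.co/api/v1/guard",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer pg_xxx",
        "Content-Type": "application/json",
    },
    method="POST",
)
# decision = json.load(urllib.request.urlopen(req))  # sends the request
```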

Security Scanning

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Scan content for threats
result = pg.security.scan("Ignore previous instructions...")
if result["blocked"]:
    print(f"Threat detected: {result['reason']}")

PII Redaction

result = pg.security.redact(
    "My email is john@example.com and SSN is 123-45-6789"
)
print(result["redacted"])
# Output: "My email is [EMAIL] and SSN is [SSN]"
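
To illustrate the kind of transformation the redaction endpoint applies, here is a rough local approximation using stdlib regexes. This is not the service's detection logic, which covers far more PII types and formats than these two naive patterns:

```python
import re

def redact_demo(text: str) -> str:
    # Naive patterns for illustration only.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    return text

print(redact_demo("My email is john@example.com and SSN is 123-45-6789"))
# → "My email is [EMAIL] and SSN is [SSN]"
```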

Red Team Testing

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Run the autonomous red team agent (LLM-powered mutation)
report = pg.redteam.run_autonomous(
    budget=200,
    target_preset="support_bot:strict",
)
print(f"Grade: {report['grade']}, Bypass rate: {report['bypass_rate']:.0%}")

# Get Attack Intelligence stats
stats = pg.redteam.intelligence_stats()
print(f"Total patterns: {stats['total_patterns']}")

The async client mirrors the same methods:

async with PromptGuardAsync(api_key="pg_xxx") as pg:
    report = await pg.redteam.run_autonomous(budget=200)
    stats = await pg.redteam.intelligence_stats()

Async Support

The PromptGuardAsync client provides a fully asynchronous interface for non-blocking usage in async applications:

import asyncio
from promptguard import PromptGuardAsync

async def main():
    async with PromptGuardAsync(api_key="pg_xxx") as pg:
        response = await pg.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}]
        )

        # Async security scanning
        result = await pg.security.scan("Check this content")

        # Async PII redaction
        redacted = await pg.security.redact("My email is john@example.com")

asyncio.run(main())

The async client mirrors the synchronous API - every method available on PromptGuard has an awaitable counterpart on PromptGuardAsync.

Retry Logic

Both PromptGuard and PromptGuardAsync support configurable retry behavior for transient failures:

from promptguard import PromptGuard

pg = PromptGuard(
    api_key="pg_xxx",
    max_retries=3,        # Number of retry attempts (default: 2)
    retry_delay=0.5,      # Base delay in seconds between retries (default: 0.25)
)

Retries use exponential backoff starting from retry_delay. Only transient errors (network timeouts, 5xx responses) are retried; client errors (4xx) fail immediately.
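
The resulting wait times can be sketched as follows, assuming the common pattern of doubling the base delay on each successive attempt (the SDK's exact schedule, and whether it adds jitter, is not specified here):

```python
def backoff_schedule(max_retries: int, retry_delay: float) -> list[float]:
    # Exponential backoff: retry_delay, 2*retry_delay, 4*retry_delay, ...
    return [retry_delay * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_schedule(3, 0.5))   # → [0.5, 1.0, 2.0]
print(backoff_schedule(2, 0.25))  # with the defaults → [0.25, 0.5]
```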

Embeddings

Scan and secure embedding requests through the proxy:

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

response = pg.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)
print(response.data[0].embedding[:5])

Batch embedding requests are also supported:

response = pg.embeddings.create(
    model="text-embedding-3-small",
    input=["First document", "Second document", "Third document"],
)
for item in response.data:
    print(f"Index {item.index}: {len(item.embedding)} dimensions")

Configuration

from promptguard import PromptGuard, Config

config = Config(
    api_key="pg_xxx",
    base_url="https://api.promptguard.co/api/v1/proxy",
    timeout=30.0,
)

pg = PromptGuard(config=config)

Environment Variables

export PROMPTGUARD_API_KEY="pg_xxx"
export PROMPTGUARD_BASE_URL="https://api.promptguard.co/api/v1"

Error Handling

import promptguard
from promptguard import PromptGuardBlockedError
from openai import OpenAI

# Auto-instrumentation
promptguard.init(api_key="pg_xxx")
client = OpenAI()

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Event ID: {e.decision.event_id}")

License

MIT
