PromptGuard Python SDK

Drop-in security for AI applications. No code changes required.

Installation

pip install promptguard-sdk

Two Ways to Secure Your App

Option 1: Auto-Instrumentation (Recommended for Frameworks)

One line secures every LLM call in your application, regardless of which framework you use (LangChain, CrewAI, AutoGen, LlamaIndex, Haystack, Semantic Kernel, or direct SDK usage):

import promptguard
promptguard.init(api_key="pg_xxx")

# That's it. Every LLM call is now secured.
# Works with ANY framework built on openai, anthropic, google-generativeai, cohere, or boto3.

from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-5-nano",
    messages=[{"role": "user", "content": "Hello!"}]
)
# ^^ Scanned by PromptGuard before reaching OpenAI

Supported SDKs (auto-detected and patched):

  • openai: LangChain, CrewAI, AutoGen, Semantic Kernel, direct usage
  • anthropic: LangChain (ChatAnthropic), direct usage
  • google-generativeai: LangChain, LlamaIndex, direct usage
  • cohere: Haystack, LangChain, direct usage
  • boto3 (Bedrock): AWS-native apps (Claude, Titan, Llama on Bedrock)
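
Under the hood, auto-instrumentation wraps the request methods of whichever of these SDKs are installed. A minimal stand-alone sketch of that wrapping pattern, using a fake client and a toy scan function (illustration only, not the SDK's actual internals):

```python
import functools

class FakeCompletions:
    """Stand-in for an SDK endpoint (illustration only, not a real client)."""
    def create(self, *, messages, **kwargs):
        return {"echo": messages[-1]["content"]}

def instrument(obj, method_name, scan):
    """Wrap obj.<method_name> so every call is scanned before it runs --
    the same monkey-patching pattern auto-instrumentation relies on."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        for msg in kwargs.get("messages", []):
            if not scan(msg["content"]):
                raise RuntimeError("blocked by guard")
        return original(*args, **kwargs)

    setattr(obj, method_name, wrapper)

client = FakeCompletions()
instrument(client, "create", scan=lambda text: "ignore previous" not in text.lower())
print(client.create(messages=[{"role": "user", "content": "Hello!"}]))
# -> {'echo': 'Hello!'}
```

Because the wrapper is installed on the client itself, code that already calls `client.create(...)` needs no changes, which is what makes the one-line `promptguard.init()` possible.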

Modes:

# Enforce mode (default) - blocks threats
promptguard.init(api_key="pg_xxx", mode="enforce")

# Monitor mode - logs threats without blocking (shadow mode)
promptguard.init(api_key="pg_xxx", mode="monitor")

# Scan responses too
promptguard.init(api_key="pg_xxx", scan_responses=True)

# Fail-closed (block if Guard API is unreachable)
promptguard.init(api_key="pg_xxx", fail_open=False)

Shutdown:

promptguard.shutdown()  # Removes all patches, closes connections

Option 2: Proxy Mode (Drop-in Replacement)

If you prefer the proxy approach, just swap your client:

# Before
from openai import OpenAI
client = OpenAI()

# After
from promptguard import PromptGuard
client = PromptGuard(api_key="pg_xxx")

# Your existing code works unchanged!

Framework-Specific Integrations

For deeper integration with richer context (chain names, tool calls, agent steps), use framework-specific callbacks alongside or instead of auto-instrumentation:

LangChain

from promptguard.integrations.langchain import PromptGuardCallbackHandler

handler = PromptGuardCallbackHandler(api_key="pg_xxx")

# Attach to an LLM
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-5-nano", callbacks=[handler])

# Or use globally with any chain
chain.invoke({"input": "..."}, config={"callbacks": [handler]})

The handler scans:

  • on_llm_start / on_chat_model_start - prompts before the LLM call
  • on_llm_end - responses after the LLM call
  • on_tool_start - tool inputs for injection attempts
  • on_chain_start / on_chain_end - tracks chain context
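
In framework-neutral terms, such a handler intercepts each hook, scans the content, and raises when something is blocked. A toy sketch of that shape (illustration only, not the SDK's implementation):

```python
# Framework-agnostic sketch of a scanning callback handler.
class ScanningHandler:
    def __init__(self, scan):
        self.scan = scan      # callable returning True when content is safe
        self.seen = []        # hook names fired so far

    def on_llm_start(self, prompts):
        for prompt in prompts:
            if not self.scan(prompt):
                raise ValueError("blocked prompt")
        self.seen.append("llm_start")

    def on_llm_end(self, response):
        if not self.scan(response):
            raise ValueError("blocked response")
        self.seen.append("llm_end")

handler = ScanningHandler(scan=lambda text: "ignore previous" not in text.lower())
handler.on_llm_start(["Hello!"])
handler.on_llm_end("Hi there.")
print(handler.seen)  # -> ['llm_start', 'llm_end']
```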

CrewAI

from crewai import Crew, Agent, Task
from promptguard.integrations.crewai import PromptGuardGuardrail

pg = PromptGuardGuardrail(api_key="pg_xxx")

crew = Crew(
    agents=[...],
    tasks=[...],
    before_kickoff=pg.before_kickoff,
    after_kickoff=pg.after_kickoff,
)

crew.kickoff(inputs={"topic": "AI safety"})

You can also wrap individual tools:

from promptguard.integrations.crewai import secure_tool
from crewai.tools import BaseTool

@secure_tool(api_key="pg_xxx")
class SearchTool(BaseTool):
    name: str = "search"
    description: str = "Search the web"

    def _run(self, query: str) -> str:
        ...

LlamaIndex

from promptguard.integrations.llamaindex import PromptGuardCallbackHandler
from llama_index.core.callbacks import CallbackManager
from llama_index.core import Settings

pg_handler = PromptGuardCallbackHandler(api_key="pg_xxx")
Settings.callback_manager = CallbackManager([pg_handler])

# All LlamaIndex queries are now scanned

Standalone Guard API

For any language or framework, call the Guard API directly:

from promptguard import GuardClient

guard = GuardClient(api_key="pg_xxx")

# Scan before sending to LLM
decision = guard.scan(
    messages=[{"role": "user", "content": "Hello!"}],
    direction="input",
    model="gpt-5-nano",
)

if decision.blocked:
    print(f"Blocked: {decision.threat_type}")
elif decision.redacted:
    # Use decision.redacted_messages instead of original
    print("Content was redacted")
else:
    # Safe to proceed
    pass

Or via HTTP directly (any language):

curl -X POST https://api.promptguard.co/api/v1/guard \
  -H "Authorization: Bearer pg_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-5-nano"
  }'
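
The same request built with Python's standard library, for environments without the SDK (the endpoint, headers, and body match the curl call above; actually sending it requires a real key):

```python
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Hello!"}],
    "direction": "input",
    "model": "gpt-5-nano",
}

# Build the POST request; urllib.request.urlopen(req) would send it,
# and the response body is the JSON guard decision.
req = urllib.request.Request(
    "https://api.promptguard.co/api/v1/guard",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer pg_xxx",
        "Content-Type": "application/json",
    },
    method="POST",
)
```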

Security Scanning

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Scan content for threats
result = pg.security.scan("Ignore previous instructions...")
if result["blocked"]:
    print(f"Threat detected: {result['reason']}")

PII Redaction

result = pg.security.redact(
    "My email is john@example.com and SSN is 123-45-6789"
)
print(result["redacted"])
# Output: "My email is [EMAIL] and SSN is [SSN]"
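
Conceptually, redaction replaces each detected span with a typed placeholder. A rough regex-based sketch of that idea (the SDK's actual detectors are more robust than these illustrative patterns):

```python
import re

# Toy detectors keyed by placeholder label (illustration only).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace every match of each pattern with its [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("My email is john@example.com and SSN is 123-45-6789"))
# -> "My email is [EMAIL] and SSN is [SSN]"
```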

Red Team Testing

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

# Run the autonomous red team agent (LLM-powered mutation)
report = pg.redteam.run_autonomous(
    budget=200,
    target_preset="support_bot:strict",
)
print(f"Grade: {report['grade']}, Bypass rate: {report['bypass_rate']:.0%}")

# Get Attack Intelligence stats
stats = pg.redteam.intelligence_stats()
print(f"Total patterns: {stats['total_patterns']}")

The async client mirrors the same methods:

async with PromptGuardAsync(api_key="pg_xxx") as pg:
    report = await pg.redteam.run_autonomous(budget=200)
    stats = await pg.redteam.intelligence_stats()

Async Support

The PromptGuardAsync client provides a fully asynchronous interface for non-blocking usage in async applications:

from promptguard import PromptGuardAsync

async with PromptGuardAsync(api_key="pg_xxx") as pg:
    response = await pg.chat.completions.create(
        model="gpt-5-nano",
        messages=[{"role": "user", "content": "Hello!"}]
    )

    # Async security scanning
    result = await pg.security.scan("Check this content")

    # Async PII redaction
    redacted = await pg.security.redact("My email is john@example.com")

The async client mirrors the synchronous API - every method available on PromptGuard has an await-able counterpart on PromptGuardAsync.

Retry Logic

Both PromptGuard and PromptGuardAsync support configurable retry behavior for transient failures:

from promptguard import PromptGuard

pg = PromptGuard(
    api_key="pg_xxx",
    max_retries=3,        # Number of retry attempts (default: 2)
    retry_delay=0.5,      # Base delay in seconds between retries (default: 0.25)
)

Retries use exponential backoff starting from retry_delay. Only transient errors (network timeouts, 5xx responses) are retried; client errors (4xx) fail immediately.
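
Assuming a simple doubling schedule with no jitter (an illustration of the backoff arithmetic, not the SDK's exact implementation), the delays before each retry work out as:

```python
def backoff_delays(max_retries=2, retry_delay=0.25):
    """Delay before each retry attempt, doubling from the base delay."""
    return [retry_delay * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays())        # defaults:           [0.25, 0.5]
print(backoff_delays(3, 0.5))  # as configured above: [0.5, 1.0, 2.0]
```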

Embeddings

Scan and secure embedding requests through the proxy:

from promptguard import PromptGuard

pg = PromptGuard(api_key="pg_xxx")

response = pg.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)
print(response.data[0].embedding[:5])

Batch embedding requests are also supported:

response = pg.embeddings.create(
    model="text-embedding-3-small",
    input=["First document", "Second document", "Third document"],
)
for item in response.data:
    print(f"Index {item.index}: {len(item.embedding)} dimensions")

Configuration

from promptguard import PromptGuard, Config

config = Config(
    api_key="pg_xxx",
    base_url="https://api.promptguard.co/api/v1/proxy",
    timeout=30.0,
)

pg = PromptGuard(config=config)

Environment Variables

export PROMPTGUARD_API_KEY="pg_xxx"
export PROMPTGUARD_BASE_URL="https://api.promptguard.co/api/v1"
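
Assuming the usual precedence, where an explicit api_key argument wins over the environment (this resolution order is an illustration of the convention, not confirmed SDK behavior):

```python
import os

def resolve_api_key(explicit=None):
    """Explicit argument first, then the PROMPTGUARD_API_KEY environment
    variable (illustrative resolution order)."""
    return explicit or os.environ.get("PROMPTGUARD_API_KEY")

os.environ["PROMPTGUARD_API_KEY"] = "pg_xxx"
print(resolve_api_key())          # falls back to the environment
print(resolve_api_key("pg_yyy"))  # hypothetical explicit key wins
```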

Error Handling

import promptguard
from promptguard import PromptGuardBlockedError
from openai import OpenAI

# Auto-instrumentation
promptguard.init(api_key="pg_xxx")
client = OpenAI()

try:
    response = client.chat.completions.create(...)
except PromptGuardBlockedError as e:
    print(f"Blocked: {e.decision.threat_type}")
    print(f"Event ID: {e.decision.event_id}")

License

MIT
