Skip to main content

Protect OpenAI and Anthropic API calls from prompt injection, jailbreaks, and data-extraction attacks.

Project description

llmgateways

Python SDK for LLM Gateways — protect OpenAI and Anthropic API calls from prompt injection, jailbreaks, and data-extraction attacks.

PyPI Python License: MIT

Installation

pip install llmgateways            # core only
pip install "llmgateways[openai]"  # + OpenAI
pip install "llmgateways[anthropic]"  # + Anthropic
pip install "llmgateways[all]"     # + both

Quick start

OpenAI

from llmgateways import wrap, PromptBlockedError
from openai import OpenAI

client = wrap(OpenAI(), api_key="lgk_...")

try:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except PromptBlockedError as e:
    print(f"Blocked! Threats: {e.result.threats}")
    print(f"Risk score: {e.result.risk_score:.2f}")

Anthropic

from llmgateways import wrap, PromptBlockedError
from anthropic import Anthropic

client = wrap(Anthropic(), api_key="lgk_...")

try:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        system="You are a helpful assistant.",
        messages=[{"role": "user", "content": "Hello!"}],
        max_tokens=1024,
    )
    print(response.content[0].text)
except PromptBlockedError as e:
    print(f"Blocked! Threats: {e.result.threats}")

Async

Both OpenAI and Anthropic async clients are supported:

from llmgateways import wrap, PromptBlockedError
from openai import AsyncOpenAI

client = wrap(AsyncOpenAI(), api_key="lgk_...")

async def main():
    try:
        response = await client.chat.completions.create_async(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello!"}],
        )
    except PromptBlockedError as e:
        print("Blocked:", e.result.threats)

How it works

Every call to chat.completions.create or messages.create is intercepted:

  1. The prompt is sent to the LLM Gateways detection engine
  2. L1 — pattern matching (instant, <1 ms)
  3. L2 — semantic similarity via MiniLM embedding model
  4. L3 — LLM judge (DeepSeek) for ambiguous cases
  5. If blocked → PromptBlockedError is raised before the model is called
  6. If allowed → the original call proceeds unchanged

API reference

wrap(client, *, api_key, base_url="", timeout=10.0)

Returns a protected proxy with the same interface as the original client.

Parameter Type Description
client OpenAI | Anthropic The LLM client to protect
api_key str Your lgk_... key from the dashboard
base_url str Override for self-hosted deployments
timeout float Gateway request timeout in seconds (default: 10)

PromptBlockedError

except PromptBlockedError as e:
    e.result.risk_score   # float 0.0–1.0
    e.result.action       # "block"
    e.result.threats      # list[str], e.g. ["jailbreak", "injection"]
    e.result.layer_used   # int (1, 2, or 3)
    e.result.reasoning    # str | None (populated by L3 LLM judge)
    e.result.latency_ms   # int

LLMGatewaysClient

Use directly if you want to call the scan API without wrapping a client:

from llmgateways import LLMGatewaysClient

gw = LLMGatewaysClient(api_key="lgk_...")
result = gw.scan("Hello!", system_prompt="You are helpful", model="gpt-4o")
print(result.action)  # "allow" or "block"

# Async
result = await gw.scan_async("Hello!")

Get an API key

Sign up at llmgateways.com → Dashboard → API Keys → Create key.

License

MIT

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llmgateways-0.1.0.tar.gz (8.9 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llmgateways-0.1.0-py3-none-any.whl (9.8 kB view details)

Uploaded Python 3

File details

Details for the file llmgateways-0.1.0.tar.gz.

File metadata

  • Download URL: llmgateways-0.1.0.tar.gz
  • Upload date:
  • Size: 8.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for llmgateways-0.1.0.tar.gz
Algorithm Hash digest
SHA256 03e72be44039789ebc40fc9754586ce2febc88b1e38e69ad2f952fb621669772
MD5 6c3eeee856e85c5fdf604782bf84ce0e
BLAKE2b-256 fc87564e87bbc2870c165944ff5cfa185353ff546b185a9cbb74301e0dcf3f6f

See more details on using hashes here.

File details

Details for the file llmgateways-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llmgateways-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 9.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for llmgateways-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 a6087439ad09e718f145cf5ee17dee5f6048c37a7f44dc34fc958d1e2c69a991
MD5 6804d81003c3da7d86b61543604269d7
BLAKE2b-256 c546d6f8a157bd5834776c632702ab09c8ea4c2a803f990f6dc7e762928626c9

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page