
Promptceptor

DoomArena Promptceptor (prompt interceptor) is a minimalistic tool for prompt engineering, red-teaming, and debugging of AI agents.

Overview

Promptceptor works by monkey-patching common LLM API clients such as OpenAI and LiteLLM to track and store the prompt, parameters, and completion of every LLM call in a simple folder structure on disk. Streaming mode is supported.

The calls can then be modified and replayed for quick prototyping of prompt injection attacks and prompt-based defenses, with the option of changing the model and sampling parameters.

Supported clients

  • OpenAI Chat Completions API: non-streaming and streaming
  • OpenAI Responses API: non-streaming and streaming (function calls may not be supported yet)
  • LiteLLM: non-streaming and streaming

Quick start

  1. Install this package:
pip install doomarena-promptceptor

Or install it locally for development:

pip install -e doomarena/promptceptor
pytest doomarena/promptceptor  # run the tests (may require some API keys)
  2. Add a single line to your main script to monkey-patch calls to the LLM API of your choice:
from doomarena.promptceptor.integrations.openai_chat import OpenAIChatPatcher
from openai import OpenAI
from pathlib import Path

# Add this in main thread / initialization / setup function
output_folder = OpenAIChatPatcher(log_dir=Path('logs')).patch_client() 

# All subsequent calls will be logged to output_folder
client = OpenAI()
print("\n▶ Running non-streaming chat completion...")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence bedtime story about a unicorn."
        }
    ],
    stream=False
)

Check out the examples in ./src/doomarena/promptceptor/examples for more information on the supported LLM API clients.

  3. Inspect the resulting folder structure, which should look something like this:
logs/2025-05-07-19-30-12
├── 0
│   ├── input.yaml
│   └── output.txt
├── 1
│   ├── input.yaml
│   └── output.txt
├── 2
│   ├── input.yaml
│   └── output.txt
...

Each call to the LLM API results in a new subfolder (e.g. 0, 1, 2) containing the input of the call (input.yaml) and the raw output (output.txt). Multithreading and multiprocessing are supported, but may result in gaps in the indices or several subfolders (not a big deal).
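A logged input.yaml could look roughly like the fragment below. The exact schema may differ between client integrations; the field names here simply mirror the keyword arguments passed to the client in the quick start:

```yaml
model: gpt-4o-mini
messages:
  - role: user
    content: Write a one-sentence bedtime story about a unicorn.
stream: false
```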

  4. Modify and recompute.

If you're curious how a different input may have affected the output, you can modify the prompt messages inside input.yaml, as well as the model (e.g. switch from gpt-4o to claude), the temperature, and any other **kwargs exposed by the LLM API client.

Then, recompute the outputs with

promptceptor path/to/logs

Promptceptor will recompute the output whenever output.txt is missing or the input.yaml timestamp is newer (see the --overwrite parameter for more details).

Examples

You can run and inspect examples of patching the OpenAI and LiteLLM clients:

# Export the relevant API keys first
export OPENAI_API_KEY=...
export OPENROUTER_API_KEY=...

python -m doomarena.promptceptor.scripts.litellm_example
python -m doomarena.promptceptor.scripts.openai_example
