

Promptceptor

DoomArena Promptceptor (prompt interceptor) is a minimalistic tool for prompt engineering, red-teaming and debugging of AI agents.

Overview

Promptceptor works by monkey-patching common LLM API clients such as OpenAI and LiteLLM to track and store the prompt, parameters and completion of every LLM call in a simple folder structure on the disk. Streaming mode is supported.

The calls can then be modified and replayed for quick prototyping of prompt injection attacks and prompt-based defenses, with the option of changing the model and sampling parameters.
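Conceptually, the interception works like this. The sketch below is a simplified, stdlib-only reimplementation of the idea (not Promptceptor's actual code, and it uses JSON instead of YAML): wrap a callable so every call's inputs and output land in numbered subfolders on disk.

```python
# Simplified sketch of the interception idea (NOT Promptceptor's actual code):
# wrap a client method so each call's inputs and output are written to a
# numbered subfolder, mirroring the logs/<run>/<index>/ layout described below.
import functools
import json
from pathlib import Path

def patch_with_logging(fn, log_dir):
    log_dir = Path(log_dir)
    log_dir.mkdir(parents=True, exist_ok=True)
    counter = {"n": 0}  # per-patcher call index

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_dir = log_dir / str(counter["n"])
        counter["n"] += 1
        call_dir.mkdir()
        # Record the inputs (Promptceptor uses YAML; JSON keeps this sketch stdlib-only)
        (call_dir / "input.json").write_text(json.dumps({"args": args, "kwargs": kwargs}))
        result = fn(*args, **kwargs)
        (call_dir / "output.txt").write_text(str(result))
        return result

    return wrapper

# Usage: patch any callable, e.g. a stand-in for an LLM completion function
fake_llm = patch_with_logging(lambda prompt: prompt.upper(), "logs/demo")
fake_llm("hello")   # logged to logs/demo/0/
fake_llm("world")   # logged to logs/demo/1/
```

The real patchers apply the same wrapping to the client library's own request methods, which is why a single `patch_client()` call is enough to capture every subsequent request.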

Supported clients

  • OpenAI Chat Completions API: non-streaming and streaming
  • OpenAI Responses API: non-streaming and streaming (function calls may not be supported yet)
  • LiteLLM: non-streaming and streaming

Quick start

  1. Install this package
pip install doomarena-promptceptor

Or install it locally for development

pip install -e doomarena/promptceptor
pytest doomarena/promptceptor  # run the tests (may require some API keys)
  2. Add a single line to your main script to monkey-patch calls to the LLM API of your choice
from doomarena.promptceptor.integrations.openai_chat import OpenAIChatPatcher

# Add this in main thread / initialization / setup function
output_folder = OpenAIChatPatcher(log_dir='logs').patch_client() 

# All subsequent calls will be logged to output_folder
client = OpenAI()
print("\n▶ Running non-streaming chat completion...")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence bedtime story about a unicorn."
        }
    ],
    stream=False
)

Check out the examples in ./src/doomarena/promptceptor/examples for more info on the supported LLM API clients.

  3. Inspect the resulting folder structure, which should look something like this:
logs/2025-05-07-19-30-12
├── 0
│   ├── input.yaml
│   └── output.txt
├── 1
│   ├── input.yaml
│   └── output.txt
├── 2
│   ├── input.yaml
│   └── output.txt
...

Each call to the LLM API results in a new subfolder (e.g. 0, 1, 2) containing the input to the LLM call (input.yaml) and the raw output (output.txt). Multithreading and multiprocessing are supported, but may leave gaps in the indices or produce several log subfolders (not a big deal).
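The exact schema of input.yaml isn't documented here, but based on the description above, a logged call might look roughly like this (field names are assumptions, not the verified Promptceptor format):

```yaml
# Hypothetical sketch of an input.yaml — field names are assumptions
model: gpt-4o-mini
messages:
  - role: user
    content: Write a one-sentence bedtime story about a unicorn.
stream: false
```

Whatever the actual layout, the file holds the keyword arguments of the original call, so anything you could pass to the client can be edited here.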

  4. Modify and recompute.

If you're curious how a different input might have affected the completion, you can modify the prompt messages inside input.yaml, as well as the model (e.g. switch from gpt-4o to claude), temperature, and any other **kwargs exposed by the LLM API client.

Then, recompute the outputs with

promptceptor path/to/logs

Promptceptor will recompute the output if output.txt is missing or the input.yaml timestamp is newer (see --overwrite parameter for more details).
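The decision rule described above can be sketched as follows (a simplified reimplementation for illustration, not the tool's actual code):

```python
# Recompute when output.txt is missing or input.yaml was modified after it;
# file names match the per-call log layout shown earlier.
from pathlib import Path

def needs_recompute(call_dir, overwrite=False):
    inp = Path(call_dir) / "input.yaml"
    out = Path(call_dir) / "output.txt"
    if overwrite or not out.exists():
        return True
    # mtime comparison: a freshly edited input invalidates the cached output
    return inp.stat().st_mtime > out.stat().st_mtime
```

This mtime-based check is what lets you edit only the calls you care about and rerun the whole log folder cheaply: untouched calls keep their cached outputs.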

Examples

You can run and inspect examples of patching OpenAI and LiteLLM clients

# Export the relevant API keys first
export OPENAI_API_KEY=...
export OPENROUTER_API_KEY=...

python -m doomarena.promptceptor.scripts.litellm_example
python -m doomarena.promptceptor.scripts.openai_example
