Package to test Prompt Injection Against OpenAI's ChatGPT, Google's Gemini and Azure Open AI
Project description
Prompt Injection Benchmarking
The mother of all prompt injection benchmarking repositories, this is the one you have been waiting for (or soon will be)
Analysing OpenAI and Azure OpenAI GPT-4, Gemini Pro and Azure OpenAI Jailbreak Detection (CyberPunk mode)
This repository contains Python code to analyze the Hugging Face Jailbreak dataset against OpenAI's ChatGPT-4 an Gemini Pro models. The code sends prompts from the dataset, processes and tabulates results.
What is a Prompt Jailbreak attack?
The term 'Jailbreak' is used to describe a situation where a language model is manipulated to generate text that is harmful, inappropriate, or otherwise violates community guidelines. This can include generating hate speech, misinformation, or other harmful content.
Let me tell you a story: Once I tried to Jailbreak my iPhone to side load apps and it worked until a reboot, then I had a paper weigh in the format of an iPhone
In the world of LLMs the term jailbreak
is used very loosely
. Some attacks are just ridiculous and silly, others are serious.
Anxious to check the results?
The table below shows the total injection attack prompts according to the data set and the number of detected attacks by ChatGPT-4 and Gemini Pro.
Latest Run
Prompts | GPT-4 | Gemini | Azure OpenAI | Azure OpenAI w/ Jailbreak Detection |
---|---|---|---|---|
139 | ||||
Detected | 133 | 53 | 136 | |
Not Attack | 6 | 4 | 3 | |
Missed Attack | 0 | 82 | 0 |
Previous runs
Prompts | GPT-4 | Gemini | Azure OpenAI | Azure OpenAI w/ Jailbreak Detection |
---|---|---|---|---|
139 | ||||
Detected | 131 | 49 | 134 | |
Not Attack | TBD | TBD | ||
Missed Attack | TBD | TBD |
many more but...no space
Important Details
- Prompts can be blocked by built-in or explicit safety filters. Therefore the code becomes nuanced and to gather proper results you need to catch certain exceptions (see code)
Next steps (soon)
- Add more models to the analysis.
- Enable Safety filters excplicitly and see variation in results assuming a safety block is consider an
attack
Requirements
To run this code, you need the following:
- Python 3
- OpenAI Python library
- Hugging Face datasets library
- A valid OpenAI API key
- A valid Google API key
- A valid Azure Open AI endpoint and all the config that comes with it
If you do not have all these keys the code will skip that LLM, no need to worry.
Setup
- Clone this repository:
git clone https://github.com/BenderScript/PromptInjectionBench.git
cd PromptInjectionBench
- Create a
.env
Create a .env file in the project root directory that contains your OpenAI API, Azure and Google Keys. If you do not have all these keys the code will skip that LLM, no need to worry.
OPENAI_API_KEY=your key>
#
GOOGLE_API_KEY=<your key>
# Azure
AZURE_OPENAI_API_KEY=your key>
AZURE_MODEL_NAME=gpt-4
AZURE_OPENAI_ENDPOINT=your endpoint>
AZURE_OPENAI_API_VERSION=<your api verion, normally 2023-12-01-preview>
AZURE_OPENAI_DEPLOYMENT=<your deployment name. This is the name you gave when deploying the model>
- Install the required libraries using pip3 and the provided requirements.txt file:
pip3 install -r requirements.txt
Running the Analysis
To analyze the Jailbreak dataset with ChatGPT-4 and Gemini Pro, execute the following Python script:
uvicorn prompt_injection_bench.server:prompt_bench_app --reload --port 9002
If Everything goes well, you should see the following page at http://127.0.0.1:9002
This script loads the dataset, iterates through prompts, sends them to ChatGPT-4, and detects potential injection attacks in the generated responses.
Testing
See the demo below where the App checks a prompt with a malicious URL and injection.
Code Structure
server.py
: The main script that loads the dataset, drives the test, and performs the analysis.gemini*
: The gemini pro codeopenai*
: The openai codeazure*
: The azure code
Skipping "Benign" Prompts
In the interest of time, the code skips prompts labeled as "benign." This helps focus the analysis on potentially harmful prompts where injection attacks might occur.
Tabulating Attacks vs. Detected Attacks
The script tabulates the total number of generated texts and the number of texts where potential injection attacks are detected. It counts these attacks based on the presence of specific keywords such as "I'm sorry," "apologize," or "developed by OpenAI" in the generated text.
Results
The script will print information about each prompt, including its type, the generated text from ChatGPT-4, and whether an injection attack was detected based on the presence of specific keywords.
License
This code is provided under the Apache License 2.0. If you liked this code, cite it and let me know.
For more information about OpenAI's GPT-4 model and the Hugging Face Jailbreak dataset, please refer to the official documentation and sources:
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file prompt_injection_bench-0.1.13.tar.gz
.
File metadata
- Download URL: prompt_injection_bench-0.1.13.tar.gz
- Upload date:
- Size: 13.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.11.8 Darwin/23.3.0
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 9ceaaa50ab0890d599e43f2e6ebea0de811963668fb91d69f82ebcd48f1ca534 |
|
MD5 | 3e2130fe09614703d689507a1f05a054 |
|
BLAKE2b-256 | aeedec3e9b0322b98bb56e89498c7df1f6c7acd9cba1b4106c142976a7d08a86 |
File details
Details for the file prompt_injection_bench-0.1.13-py3-none-any.whl
.
File metadata
- Download URL: prompt_injection_bench-0.1.13-py3-none-any.whl
- Upload date:
- Size: 13.1 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.11.8 Darwin/23.3.0
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 7b114bc5eed61e34d67d77c3bf635d4b32739a6d8f6a88277411cdc0603db296 |
|
MD5 | 1d60a2e7f27ff097f0c3bc55ffe517e9 |
|
BLAKE2b-256 | 540d074f2c55cbbf9b3abc016292fe1b2da3f78b69e68234c2dd42e18bc4b91e |