Package to test Prompt Injection Against OpenAI's ChatGPT
Project description
ChatGPT-4 for Jailbreak Dataset Analysis
This repository contains Python code to analyze the Hugging Face Jailbreak dataset using OpenAI's ChatGPT-4 model. The code sends prompts from the dataset to ChatGPT-4 to evaluate responses and detect potential prompt injection attacks.
Anxious to check the results?
As of January 10th 2024, GPT-4 detect all jailbreaks on this dataset. Moreover, the word jailbreak is used very loosely, some jailbreak prompts are of no consequence.
Requirements
To run this code, you need the following:
- Python 3
- OpenAI Python library
- Hugging Face datasets library
- A valid OpenAI API key
Install the required libraries using pip3 and the provided requirements.txt file:
pip3 install -r requirements.txt
Setup
- Clone this repository:
git clone https://github.com/BenderScript/PromptInjectionBench.git
cd PromptInjectionBench
- Create a
.env
file in the project directory and add your OpenAI API key:
OPENAI_API_KEY=your_api_key_here
Running the Analysis
To analyze the Jailbreak dataset with ChatGPT-4 and Gemini Pro, execute the following Python script:
python3 main.py
This script loads the dataset, iterates through prompts, sends them to ChatGPT-4, and detects potential injection attacks in the generated responses.
Code Structure
main.py
: The main script that loads the dataset, defines functions to interact with ChatGPT-4, and performs the analysis.
Skipping "Benign" Prompts
In the interest of time, the code skips prompts labeled as "benign." This helps focus the analysis on potentially harmful prompts where injection attacks might occur.
Tabulating Attacks vs. Detected Attacks
The script tabulates the total number of generated texts and the number of texts where potential injection attacks are detected. It counts these attacks based on the presence of specific keywords such as "I'm sorry," "apologize," or "developed by OpenAI" in the generated text.
Results
The script will print information about each prompt, including its type, the generated text from ChatGPT-4, and whether an injection attack was detected based on the presence of specific keywords.
License
This code is provided under the Apache License 2.0. Feel free to use and modify it as needed.
This analysis is provided as a reference and demonstration of using OpenAI's ChatGPT-4 model for evaluating prompt injection attacks in text datasets.
For more information about OpenAI's GPT-4 model and the Hugging Face Jailbreak dataset, please refer to the official documentation and sources:
These explanations added to the README.md should help users understand why "benign" prompts are skipped and how the code tabulates attacks vs. detected attacks.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file prompt_injection_bench-0.1.4.tar.gz
.
File metadata
- Download URL: prompt_injection_bench-0.1.4.tar.gz
- Upload date:
- Size: 6.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.11.7 Darwin/23.2.0
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 691c225ca48c3c20a5df0b17860c92e01032c97c4ed1b4350b4aefe9f43be1b8 |
|
MD5 | 8df13a3dde5ab43ebf5ef2d00f33e036 |
|
BLAKE2b-256 | 20f100b45cd392fc31d602d34ca6537513ee9983094531f4bf25623f2509e224 |
File details
Details for the file prompt_injection_bench-0.1.4-py3-none-any.whl
.
File metadata
- Download URL: prompt_injection_bench-0.1.4-py3-none-any.whl
- Upload date:
- Size: 8.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.7.1 CPython/3.11.7 Darwin/23.2.0
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | ef04eb45973c065a017bf3dba848e6c6a1d39adf3b2d9b3a7023e4539357d4f6 |
|
MD5 | d8213f9e83b7659c84cf2b183d760535 |
|
BLAKE2b-256 | 24607405cd24469ba580b4710ce6fb1e557e50921ba72b5e9c37a1a0c9314bfb |