PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks.
Project description
PromptInject
Paper: Ignore Previous Prompt: Attack Techniques For Language Models
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PROMPTINJECT, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3’s stochastic nature, creating long-tail risks.
Figure 1: Diagram showing how adversarial user input can derail model instructions. In both attacks,
the attacker aims to change the goal of the original prompt. In goal hijacking, the new goal is to print
a specific target string, which may contain malicious instructions, while in prompt leaking, the new
goal is to print the application prompt. Application Prompt (gray box) shows the original prompt,
where {user_input}
is substituted by the user input. In this example, a user would normally input
a phrase to be corrected by the application (blue boxes). Goal Hijacking and Prompt Leaking (orange
boxes) show malicious user inputs (left) for both attacks and the respective model outputs (right)
when the attack is successful.
Install
Run:
pip install git+https://github.com/agencyenterprise/PromptInject
Usage
See notebooks/Example.ipynb for an example.
Cite
Bibtex:
@misc{ignore_previous_prompt,
doi = {10.48550/ARXIV.2211.09527},
url = {https://arxiv.org/abs/2211.09527},
author = {Perez, Fábio and Ribeiro, Ian},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Ignore Previous Prompt: Attack Techniques For Language Models},
publisher = {arXiv},
year = {2022}
}
Contributing
We appreciate any additional request and/or contribution to PromptInject
. The issues tracker is used to keep a list of features and bugs to be worked on. Please see our contributing documentation for some tips on getting started.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file promptinject-0.1.1.1.tar.gz
.
File metadata
- Download URL: promptinject-0.1.1.1.tar.gz
- Upload date:
- Size: 14.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.11.4
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | b7c1790c75b3ab7f28c891f7d4e60d6cbe7896ab0e6d9a9e80199aac8e21f4fc |
|
MD5 | d6c894676c44c680388da192ff012a86 |
|
BLAKE2b-256 | e39d868cf6b3571334d00741150539ef713700d250324dc5a5ebffeac8272497 |
File details
Details for the file promptinject-0.1.1.1-py3-none-any.whl
.
File metadata
- Download URL: promptinject-0.1.1.1-py3-none-any.whl
- Upload date:
- Size: 14.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.11.4
File hashes
Algorithm | Hash digest | |
---|---|---|
SHA256 | 98d020b6878c0c32703110d19c0b1651868e906da0f3b1472d349923bda5fedf |
|
MD5 | 4754238bce5bb7698d3d46022f2ff46f |
|
BLAKE2b-256 | 76ce394b0237b7b0b77f273ecd5229e7c6bd7bbd9b8184b9f767ec7482495ff7 |