A package for detecting prompt injections in text using Open-Source LLMs.
Pytector
Pytector is a Python package designed to detect prompt injection in text inputs using state-of-the-art machine learning models from the transformers library.
Disclaimer
Pytector is still a prototype and cannot provide 100% protection against prompt injection attacks!
Features
- Detect prompt injections with pre-trained models.
- Support for multiple models, including DeBERTa, DistilBERT, and ONNX variants.
- Easy-to-use interface with customizable threshold settings.
Installation
Install from PyPI:

```shell
pip install pytector
```

Alternatively, install Pytector directly from the source code:

```shell
git clone https://github.com/MaxMLang/pytector.git
cd pytector
pip install .
```
Usage
To use Pytector, import the `PromptInjectionDetector` class and create an instance with a pre-defined model name or a custom model URL:
```python
import pytector

# Initialize the detector with a pre-defined model
detector = pytector.PromptInjectionDetector(model_name_or_url="deberta")

# Check whether a prompt is a potential injection
is_injection, probability = detector.detect_injection("Your suspicious prompt here")
print(f"Is injection: {is_injection}, Probability: {probability}")
```
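The boolean result can be understood as a simple threshold on the classifier's injection probability. The sketch below is illustrative only: the helper name, the `(bool, float)` return shape, and the default threshold of 0.5 are assumptions, not part of Pytector's documented API.

```python
from typing import Tuple

def threshold_decision(injection_prob: float, threshold: float = 0.5) -> Tuple[bool, float]:
    """Map a classifier's injection probability to (is_injection, probability).

    Hypothetical helper: the 0.5 default mirrors a common convention and is
    an assumption, as is the return shape modeled on detect_injection above.
    """
    return injection_prob >= threshold, injection_prob

# Raising the threshold trades recall for fewer false positives:
print(threshold_decision(0.72))                 # → (True, 0.72)
print(threshold_decision(0.72, threshold=0.9))  # → (False, 0.72)
```

In practice you would tune the threshold on prompts representative of your application: a lower value flags more borderline inputs, a higher value only flags high-confidence injections.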
Documentation
For full documentation, see the docs directory in the repository.
Contributing
Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.
License
This project is licensed under the MIT License - see the LICENSE file for details.