LLM Guard - The Security Toolkit for LLM Interactions
LLM Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure.
❤️ Proudly developed by the Laiyer.ai team.
Installation
Begin your journey with LLM Guard by installing the package and downloading the en_core_web_trf spaCy model (essential for the Anonymize scanner):
pip install llm-guard
python -m spacy download en_core_web_trf
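To confirm the model is available before using the Anonymize scanner, a quick sanity check helps (a minimal sketch; it assumes spaCy is importable in your environment and is not part of LLM Guard itself):

```python
import spacy

# spacy.load raises OSError if en_core_web_trf has not been downloaded.
nlp = spacy.load("en_core_web_trf")

doc = nlp("Jane Doe emailed jane.doe@example.com yesterday.")
print([(ent.text, ent.label_) for ent in doc.ents])  # named entities the model found
```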
Getting Started
Important Notes:
- LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out of the box, be aware that we're constantly improving and updating the repository.
- Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries will be automatically installed.
- Ensure you're using Python 3.8.1 or higher. Confirm with:
python --version
- Library installation issues? Consider upgrading pip:
python -m pip install --upgrade pip
Examples:
- Get started with ChatGPT and LLM Guard (a minimal sketch follows below).
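As a minimal sketch of what prompt-side integration can look like (the `scan_prompt` entry point and scanner names below follow the project's documented API, but exact names and signatures may vary between releases, so treat this as illustrative rather than canonical):

```python
from llm_guard import scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.vault import Vault

# The vault keeps the original values that Anonymize redacts, so a
# Deanonymize output scanner can restore them later.
vault = Vault()
scanners = [Anonymize(vault), PromptInjection(), Toxicity()]

prompt = "Draft an email to John Doe (john.doe@example.com) about the unpaid invoice."

# scan_prompt returns the sanitized prompt plus per-scanner validity flags and scores.
sanitized_prompt, results_valid, results_score = scan_prompt(scanners, prompt)

if not all(results_valid.values()):
    raise ValueError(f"Prompt was flagged: {results_score}")

# sanitized_prompt can now be forwarded to the LLM in place of the raw prompt.
```

Rejecting on any failed scanner is the simplest policy; in practice you may prefer to branch on the per-scanner scores instead.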
Supported scanners
Prompt scanners
Output scanners (a usage sketch follows this list)
- BanSubstrings
- BanTopics
- Bias
- Code
- Deanonymize
- MaliciousURLs
- NoRefusal
- Refutation
- Regex
- Relevance
- Sensitive
- Sentiment
- Toxicity
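The output side mirrors the prompt side: pass the original prompt and the model's response through a list of output scanners. A brief sketch using scanners from the list above (again, verify names and signatures against the release you install):

```python
from llm_guard import scan_output
from llm_guard.output_scanners import NoRefusal, Relevance, Toxicity

scanners = [NoRefusal(), Relevance(), Toxicity()]

prompt = "Summarize the quarterly report in two sentences."
model_output = "Revenue grew 12% while operating costs stayed flat."

# scan_output checks the response against the prompt and returns the
# sanitized output plus per-scanner validity flags and scores.
sanitized_output, results_valid, results_score = scan_output(scanners, prompt, model_output)

if not all(results_valid.values()):
    raise ValueError(f"Output was flagged: {results_score}")
```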
Roadmap
General:
- Introduce GPU support
- Improve documentation with use cases, benchmarks, etc.
- Offer a hosted version of LLM Guard
- Provide text statistics on prompts and outputs
- Support more languages
- Accept multiple outputs instead of one, to compare them
- Support streaming mode
Prompt Scanner:
- Integrate with the Perspective API for the Toxicity scanner
- Develop a language-restriction scanner
Output Scanner:
- Develop output scanners for format validation (e.g. max length, valid JSON or XML)
- Develop a factual-consistency scanner
- Develop a scanner for hallucinated libraries
- Develop a scanner for library licenses
Contributing
Got ideas, feedback, or wish to contribute? We'd love to hear from you! Email us.
For detailed contribution guidelines, please refer to our contribution guide.
Project details
Hashes for llm_guard-0.1.3-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 6bfabbae565aadee37e72ccfe98516ea5a10d97cc89d1768ce86b58e0d15f21a
MD5 | 24dab29a09ec1a7e108fee0711d4f0b4
BLAKE2b-256 | 6f6a95746905d8762a2becb990af0fbf4b0a24e8d377c78dd56e74b4cdbdf1ce