Skip to main content

LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection and jailbreak attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.

Project description

LLM-Guard Logo

LLM Guard - The Security Toolkit for LLM Interactions

LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection and jailbreak attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure.

MIT license Code style: black PyPI - Python Version Downloads Downloads Twitter

❤️ Proudly developed by the Laiyer.ai team.

Installation

Begin your journey with LLM Guard by downloading the package and acquiring the en_core_web_trf spaCy model (essential for the Anonymize scanner):

pip install llm-guard
python -m spacy download en_core_web_trf

Getting Started

Important Notes:

  • LLM Guard is designed for easy integration and deployment in production environments. While it's ready to use out-of-the-box, please be informed that we're constantly improving and updating the repository.
  • Base functionality requires a limited number of libraries. As you explore more advanced features, necessary libraries will be automatically installed.
  • Ensure you're using Python version 3.8.1 or higher. Confirm with: python --version.
  • Library installation issues? Consider upgrading pip: python -m pip install --upgrade pip.

Examples:

Supported scanners

Prompt scanners

Output scanners

Roadmap

General:

  • Calculate risk score from 0 to 1 for each scanner
  • Improve speed of transformers

Prompt Scanner:

  • Improve Jailbreak scanner
  • Better anonymizer with improved secrets detection and entity recognition
  • Use Perspective API for Toxicity scanner

Output Scanner:

  • Develop Fact Checking scanner
  • Develop Hallucination scanner
  • Develop scanner to check if the output stays on the topic of the prompt.

Contributing

Got ideas, feedback, or wish to contribute? We'd love to hear from you! Email us.

For detailed guidelines on contributions, kindly refer to our contribution guide.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-guard-0.1.0.tar.gz (90.6 kB view hashes)

Uploaded Source

Built Distribution

llm_guard-0.1.0-py3-none-any.whl (111.1 kB view hashes)

Uploaded Python 3

Supported by

AWS AWS Cloud computing and Security Sponsor Datadog Datadog Monitoring Fastly Fastly CDN Google Google Download Analytics Microsoft Microsoft PSF Sponsor Pingdom Pingdom Monitoring Sentry Sentry Error logging StatusPage StatusPage Status page