
Project description

LLM Guard - The Security Toolkit for LLM Interactions

LLM Guard by Protect AI is a comprehensive tool designed to fortify the security of Large Language Models (LLMs).

Documentation | Playground | Changelog


Join Our Slack Community

What is LLM Guard?


By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM Guard ensures that your interactions with LLMs remain safe and secure.

Installation

Get started with LLM Guard by installing the package:

pip install llm-guard

Getting Started

Important Notes:

  • LLM Guard is designed for easy integration and deployment in production environments. It is ready to use out of the box, but note that the repository is under active development and updated frequently.
  • Base functionality requires only a small number of libraries; additional dependencies are installed automatically as you enable more advanced features.
  • Ensure you're using Python version 3.9 or higher. Confirm with: python --version.
  • Library installation issues? Consider upgrading pip: python -m pip install --upgrade pip.

Examples:
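The example links did not survive extraction here. As a stand-in, the sketch below illustrates the scanning pattern the LLM Guard documentation describes: each scanner takes a prompt and returns a sanitized prompt, a validity flag, and a risk score. The `regex_secret_scanner` is a toy, regex-based stand-in written for this illustration, not one of the library's real ML-based scanners, and `scan_prompt` here is a simplified local re-implementation of that idea, not the library's own function.

```python
import re


def regex_secret_scanner(prompt: str):
    """Toy stand-in for an input scanner: redacts API-key-like tokens.

    Returns (sanitized_prompt, is_valid, risk_score), mirroring the
    prompt/valid/score triple that LLM Guard scanners produce.
    """
    pattern = re.compile(r"sk-[A-Za-z0-9]{16,}")
    hits = pattern.findall(prompt)
    sanitized = pattern.sub("[REDACTED]", prompt)
    risk_score = 1.0 if hits else 0.0
    return sanitized, not hits, risk_score


def scan_prompt(scanners, prompt):
    """Run a prompt through a list of scanners, chaining sanitized output."""
    results_valid, results_score = {}, {}
    for scanner in scanners:
        prompt, valid, score = scanner(prompt)
        results_valid[scanner.__name__] = valid
        results_score[scanner.__name__] = score
    return prompt, results_valid, results_score


sanitized, valid, scores = scan_prompt(
    [regex_secret_scanner],
    "My key is sk-abcdefghijklmnop1234, summarize this doc.",
)
print(sanitized)  # the key is replaced with [REDACTED]
```

With the real library you would compose several scanners (PII anonymization, prompt-injection detection, and so on) in the same pipeline shape; consult the documentation linked above for the actual API.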

Supported scanners

Prompt scanners

Output scanners

Community, Contributing, Docs & Support

LLM Guard is an open source solution. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation or spread the word, we would love to have you as part of our community.

  • Give us a ⭐️ github star ⭐️ on the top of this page to support what we're doing, it means a lot for open source projects!
  • Read our docs for more info about how to use and customize LLM Guard, and for step-by-step tutorials.
  • Post a Github Issue to submit a bug report, feature request, or suggest an improvement.
  • To contribute to the package, check out our contribution guidelines, and open a PR.

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, get help for package usage or contributions, or engage in discussions about LLM security!


Production Support

We're eager to provide personalized assistance when you deploy LLM Guard to a production environment.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_guard-0.3.13.tar.gz (70.4 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llm_guard-0.3.13-py3-none-any.whl (138.0 kB)

File details

Details for the file llm_guard-0.3.13.tar.gz.

File metadata

  • Download URL: llm_guard-0.3.13.tar.gz
  • Upload date:
  • Size: 70.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.14

File hashes

Hashes for llm_guard-0.3.13.tar.gz
  • SHA256: 2dee8438e4513ccefd28562ab5a3e98ba7e895bc3cd6a5c1828a5169a7a1ecd6
  • MD5: ce13c1167813b09db0f941a6bf4a61a6
  • BLAKE2b-256: 0abdfafac5fc199e46a791d965a18b28e305dcd413820ed29a7ef1303c90e307

See PyPI's documentation for more details on using file hashes.

File details

Details for the file llm_guard-0.3.13-py3-none-any.whl.

File metadata

  • Download URL: llm_guard-0.3.13-py3-none-any.whl
  • Upload date:
  • Size: 138.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.14

File hashes

Hashes for llm_guard-0.3.13-py3-none-any.whl
  • SHA256: 28b5e6173c0a7c47a573d48c43d34faeeb0476ddef8d13fa9eaf6210d6a405b4
  • MD5: 9fd4f707b9e839b5f6f83eb2a4f2d368
  • BLAKE2b-256: 5f41656cf7977a99263390d5513c43c929cf2c8422752d7688260e5d94d2aa06

See PyPI's documentation for more details on using file hashes.
