
ZenGuard AI LLamaPack

This LlamaPack lets you quickly set up ZenGuard AI in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:

  • Prompt attacks
  • Veering off pre-defined topics
  • PII, sensitive info, and keyword leakage
  • Toxicity
  • Etc.

Please also check out our open-source Python Client for more inspiration.

Here is our main website - https://www.zenguard.ai/

More Docs

Installation

Choose one of the options below:

(our favorite) Using Poetry:

$ poetry add llama-index-packs-zenguard

Using pip:

$ pip install llama-index-packs-zenguard

Using llamaindex-cli:

$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack

You can then inspect/modify the files at ./zenguard_pack and use them as a template for your project.
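Alternatively, you can download the pack programmatically. Below is a minimal sketch that assumes the standard download_llama_pack helper from llama-index-core:

from llama_index.core.llama_pack import download_llama_pack

# Downloads the pack source into ./zenguard_pack and returns the pack class.
ZenGuardPack = download_llama_pack("ZenGuardPack", "./zenguard_pack")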

Prerequisites

Generate an API Key:

  1. Navigate to the Settings page.
  2. Click + Create new secret key.
  3. Name the key Quickstart Key.
  4. Click the Add button.
  5. Copy the key value by clicking the copy icon.
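Once you have the key, a common pattern is to export it as an environment variable and read it in your code. The snippet below is a minimal sketch; the variable name ZENGUARD_API_KEY is just an example, not something the pack requires:

$ export ZENGUARD_API_KEY="your-key-value"

import os

# Read the key from the environment so it is not hard-coded in your source.
your_zenguard_api_key = os.environ["ZENGUARD_API_KEY"]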

Code Usage

Instantiate the pack with your API key:

from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))

pack = ZenGuardPack(config)

Note that the run() function is a light wrapper around zenguard.detect().

Detect Prompt Injection

from llama_index.packs.zenguard import Detector

response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print(
        "No prompt injection detected: carry on with the LLM of your choice."
    )

Response Example:

{
  "is_detected": false,
  "score": 0.0,
  "sanitized_message": null
}
  • is_detected (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is false.

  • score (float, 0.0–1.0): A score representing the likelihood that the provided message is a prompt injection attack. In this example, it is 0.0.

  • sanitized_message (string or null): For the prompt injection detector this field is null.

Error Codes:

  • 401 Unauthorized: API key is missing or invalid.

  • 400 Bad Request: The request body is malformed.

  • 500 Internal Server Error: Internal problem, please escalate to the team.
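If you want finer-grained control than the is_detected flag, you can threshold the score yourself. The sketch below reuses the detection call from above; the 0.5 cutoff is an arbitrary example, not a ZenGuard recommendation:

# Threshold the detection score directly instead of relying on `is_detected`.
response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
score = response.get("score", 0.0)
if score >= 0.5:  # arbitrary example cutoff
    print(f"Blocking prompt: injection likelihood {score:.2f}.")
else:
    print(f"Allowing prompt: injection likelihood {score:.2f}.")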

Getting the ZenGuard Client

You can get the raw ZenGuard client by using the LlamaPack's get_modules():

zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were using the ZenGuard client directly
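For example, you can call the client's detect() method directly, bypassing the pack wrapper. This sketch assumes detect() accepts the same prompt and detectors arguments that run() forwards to it:

# Call the underlying client directly with the same arguments run() would pass.
response = zenguard.detect(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
print(response)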

More examples
