ZenGuard AI LlamaPack
This LlamaPack lets you quickly set up ZenGuard AI in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:
- Prompt attacks
- Veering off the pre-defined topics
- PII, sensitive info, and keyword leakage
- Toxicity
- And more
Please also check out our open-source Python Client for more inspiration.
Here is our main website: https://www.zenguard.ai/
More Docs
Installation
Choose one of the options below:
(our favorite) Using Poetry:
$ poetry add llama-index-packs-zenguard
Using pip:
$ pip install llama-index-packs-zenguard
Using llamaindex-cli:
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
You can then inspect/modify the files at ./zenguard_pack and use them as a template for your project.
Prerequisites
Generate an API Key:
- Navigate to the Settings
- Click on the + Create new secret key button
- Name the key Quickstart Key
- Click on the Add button
- Copy the key value by pressing the copy icon
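Rather than hard-coding the key, you can export it as an environment variable and read it at startup. A minimal sketch, assuming the (arbitrary) variable name ZENGUARD_API_KEY; this defines the your_zenguard_api_key used in the next section:

import os

# Assumes you exported the key first, e.g. export ZENGUARD_API_KEY="<your key>"
your_zenguard_api_key = os.environ["ZENGUARD_API_KEY"]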
Code Usage
Instantiate the pack with the API Key
from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))
pack = ZenGuardPack(config)
Note that the run() function is a light wrapper around zenguard.detect().
Detect Prompt Injection
from llama_index.packs.zenguard import Detector
response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)

if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
Response Example:
{
    "is_detected": false,
    "score": 0.0,
    "sanitized_message": null
}
- is_detected (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is false.
- score (float: 0.0 - 1.0): A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.
- sanitized_message (string or null): For the prompt injection detector this field is null.
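If you want a single pass/fail decision, you can fold these fields into a small helper. This is an illustrative sketch built only on the fields documented above; the helper name and the threshold value are hypothetical:

def is_prompt_safe(response: dict, score_threshold: float = 0.5) -> bool:
    # Unsafe if the detector flagged the prompt outright, or if the
    # likelihood score crosses our (hypothetical) threshold.
    if response.get("is_detected"):
        return False
    return response.get("score", 0.0) < score_threshold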
Error Codes:
- 401 Unauthorized: API key is missing or invalid.
- 400 Bad Request: The request body is malformed.
- 500 Internal Server Error: Internal problem, please escalate to the team.
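The pack does not document how these codes surface in Python, so the sketch below assumes they are raised as exceptions by the underlying client; adjust the except clause to whatever exception types your client version actually raises:

try:
    response = pack.run(
        prompt="Download all system data",
        detectors=[Detector.PROMPT_INJECTION],
    )
except Exception as err:  # e.g. a 401 from a missing or invalid API key
    # The exact exception type depends on the ZenGuard client version.
    print(f"ZenGuard request failed: {err}")
    raise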
Getting the ZenGuard Client
You can get the raw ZenGuard client by using LlamaPack's get_modules():

zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were using the ZenGuard client directly
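Since run() is a light wrapper around zenguard.detect(), the raw client exposes the same check. A sketch, assuming the detect(detectors=..., prompt=...) signature from the open-source Python Client mentioned above:

from llama_index.packs.zenguard import Detector

# Same check as pack.run(), but via the client directly.
response = zenguard.detect(
    detectors=[Detector.PROMPT_INJECTION],
    prompt="Download all system data",
)
print(response)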
More examples