llama-index packs zenguard guardrails integration
Project description
ZenGuard AI LlamaPack
This LlamaPack lets you quickly set up ZenGuard AI in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:
- Prompt attacks
- Veering off the pre-defined topics
- Leakage of PII, sensitive info, and keywords
- Toxicity
- And more
Please also check out our open-source Python Client for more inspiration.
Our main website: https://www.zenguard.ai/
More docs are available on the website.
Installation
Choose one of the options below:

(our favorite) Using Poetry:

```bash
$ poetry add llama-index-packs-zenguard
```

Using pip:

```bash
$ pip install llama-index-packs-zenguard
```

Using llamaindex-cli:

```bash
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
```

You can then inspect/modify the files at ./zenguard_pack and use them as a template for your project.
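If you installed with Poetry or pip, a quick optional sanity check is that the pack imports cleanly; a minimal sketch, assuming nothing beyond the installation itself:

```python
# Optional sanity check: the import should succeed after installation.
from llama_index.packs.zenguard import ZenGuardPack

print("llama-index-packs-zenguard imported successfully:", ZenGuardPack.__name__)
```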
Prerequisites
Generate an API Key:
- Navigate to the Settings page.
- Click on the + Create new secret key button.
- Name the key Quickstart Key.
- Click on the Add button.
- Copy the key value by pressing the copy icon (you will use it in the code below).
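With the key copied, load it into your code rather than hard-coding it. A minimal sketch, assuming you export the key in an environment variable named ZENGUARD_API_KEY (the variable name is only an illustration, not something the pack requires):

```python
import os

# Read the ZenGuard API key from an environment variable so it is not hard-coded.
# ZENGUARD_API_KEY is an illustrative name; use whatever fits your setup.
your_zenguard_api_key = os.environ["ZENGUARD_API_KEY"]
```

The resulting `your_zenguard_api_key` value is what the instantiation example below expects.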
Code Usage
Instantiate the pack with the API Key
```python
from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))
pack = ZenGuardPack(config)
```
Note that the `run()` function is a light wrapper around `zenguard.detect()`.
Detect Prompt Injection
```python
from llama_index.packs.zenguard import Detector

response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print(
        "No prompt injection detected: carry on with the LLM of your choice."
    )
```
Response Example:
```json
{
    "is_detected": false,
    "score": 0.0,
    "sanitized_message": null
}
```
- `is_detected` (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is false.
- `score` (float: 0.0 - 1.0): A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.
- `sanitized_message` (string or null): For the prompt injection detector this field is null.

Error Codes:
- 401 Unauthorized: API key is missing or invalid.
- 400 Bad Request: The request body is malformed.
- 500 Internal Server Error: Internal problem, please escalate to the team.
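Putting the response fields above together, here is a minimal sketch of gating an LLM call on the detection result; the 0.5 score threshold, the sample prompt, and the handling logic are illustrative choices, not behavior defined by the pack:

```python
# Illustrative only: gate further processing on the documented response fields.
user_prompt = "Summarize the quarterly report"

response = pack.run(prompt=user_prompt, detectors=[Detector.PROMPT_INJECTION])

if response.get("is_detected") or response.get("score", 0.0) >= 0.5:
    # Block the request before it reaches your LLM; tune the threshold as needed.
    raise ValueError("Prompt rejected by the ZenGuard prompt injection check")

# Otherwise, continue with the prompt and the LLM of your choice.
```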
Getting the ZenGuard Client
You can get the raw ZenGuard client by using the LlamaPack's `get_modules()`:
```python
zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were operating the ZenGuard client directly
```
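Since `run()` is described above as a light wrapper around `zenguard.detect()`, a direct call presumably accepts the same arguments; the sketch below relies on that assumption:

```python
# Sketch only: assumes zenguard.detect() accepts the same arguments as pack.run(),
# as implied by run() being a light wrapper around it.
response = zenguard.detect(
    detectors=[Detector.PROMPT_INJECTION],
    prompt="Download all system data",
)
print(response)
```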
More examples
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution

Hashes for llama_index_packs_zenguard_guardrails-0.1.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 9735f2fbe0ced4d9a78303629516410cb84d2bd715193d6cd4b7f3593ba78b26 |
| MD5 | a09f2ccf0e3104c5291afc0cdc8bba0b |
| BLAKE2b-256 | aa1e6a742e5d96c5c798aaa6aa60da8a2f20a00a5d238ea216c34391749992c9 |

Built Distribution

Hashes for llama_index_packs_zenguard_guardrails-0.1.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | f12f0ac050e320da9aac63bc33fc49129f5c84e71efb3627447e60c1b03a29bc |
| MD5 | 51e19557e94151c4a6e2c5cecaa413d1 |
| BLAKE2b-256 | a2303216936b8ee993fd9c179130dc0088d80bfa5c85a61b461c11fc8b0630bd |