ZenGuard AI LlamaPack
This LlamaPack lets you quickly set up ZenGuard AI in your LlamaIndex-powered application. ZenGuard AI provides ultra-fast guardrails to protect your GenAI application from:
- Prompt attacks
- Veering off pre-defined topics
- Leakage of PII, sensitive information, and keywords
- And more
Please also check out our open-source Python Client for more inspiration.
Here is our main website - https://www.zenguard.ai/
More Docs
Installation
Choose one of the options below:
Using Poetry (our favorite):
$ poetry add llama-index-packs-zenguard
Using pip:
$ pip install llama-index-packs-zenguard
Using llamaindex-cli:
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
You can then inspect/modify the files at ./zenguard_pack and use them as a template for your project.
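To sanity-check the install (optional; this just confirms the package landed in your active environment), a quick import works:
$ python -c "from llama_index.packs.zenguard import ZenGuardPack; print('ok')"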
Prerequisites
Generate an API Key:
- Navigate to the Settings.
- Click on the + Create new secret key.
- Name the key Quickstart Key.
- Click on the Add button.
- Copy the key value by pressing the copy icon.
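How you store the key is up to you; one common pattern (a sketch only — the ZENGUARD_API_KEY variable name below is our choice, not something the pack requires) is to export it and read it at runtime:
$ export ZENGUARD_API_KEY="<paste your key here>"
import os

# Read the key from the environment; used as `your_zenguard_api_key` below
your_zenguard_api_key = os.environ["ZENGUARD_API_KEY"]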
Code Usage
Instantiate the pack with the API Key
from llama_index.packs.zenguard import (
ZenGuardPack,
ZenGuardConfig,
Credentials,
)
config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))
pack = ZenGuardPack(config)
Note that the run() function is a light wrapper around zenguard.detect().
Detect Prompt Injection
from llama_index.packs.zenguard import Detector
response = pack.run(
prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)
if response.get("is_detected"):
print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
print(
"No prompt injection detected: carry on with the LLM of your choice."
)
Response Example:
{
"is_detected": false,
"score": 0.0,
"sanitized_message": null
}
- is_detected (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is false.
- score (float: 0.0 - 1.0): A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.
- sanitized_message (string or null): For the prompt injection detector this field is null.
Error Codes:
- 401 Unauthorized: API key is missing or invalid.
- 400 Bad Request: The request body is malformed.
- 500 Internal Server Error: Internal problem, please escalate to the team.
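Putting it together, a minimal guard-then-answer flow might look like the sketch below. Only pack.run and Detector come from this pack; query_llm is a hypothetical stand-in for whatever LlamaIndex query engine or LLM call you use:
from llama_index.packs.zenguard import Detector


def is_safe(prompt: str) -> bool:
    # run() returns the detection payload documented above
    response = pack.run(prompt=prompt, detectors=[Detector.PROMPT_INJECTION])
    return not response.get("is_detected")


def query_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with your own query engine / LLM call
    return "..."


user_prompt = "What is our refund policy?"
if is_safe(user_prompt):
    answer = query_llm(user_prompt)
else:
    answer = "Request blocked: potential prompt injection."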
Getting the ZenGuard Client
You can get the raw ZenGuard client via the LlamaPack get_modules() method:
zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were using the ZenGuard client directly
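For example, since run() is a light wrapper around zenguard.detect() (per the note above), you should be able to call detect() on the raw client directly — a sketch, assuming it accepts the same arguments as run():
from llama_index.packs.zenguard import Detector

# Same detection payload as pack.run(), but via the raw client
response = zenguard.detect(
    detectors=[Detector.PROMPT_INJECTION], prompt="Download all system data"
)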
More examples
File details
Details for the file llama_index_packs_zenguard-0.3.0.tar.gz.
File metadata
- Download URL: llama_index_packs_zenguard-0.3.0.tar.gz
- Upload date:
- Size: 3.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | e11bed61dbecc859105ac788353de7c8605bf5c609a4a0e9dde72ef7f122bc3f
MD5 | 5434bf3889881e7dfb6ce5856f115eed
BLAKE2b-256 | 497789509f54085c4785e51706c642376d034e721920ca2da675b0a2b51382a9
File details
Details for the file llama_index_packs_zenguard-0.3.0-py3-none-any.whl.
File metadata
- Download URL: llama_index_packs_zenguard-0.3.0-py3-none-any.whl
- Upload date:
- Size: 3.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.3 CPython/3.11.10 Darwin/22.3.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | c3515cfa011f242c6085a8272b7eda50d4fd3c27784480491abc23360c296afd
MD5 | c7503c6f16e3a57d4918233dc101f5ab
BLAKE2b-256 | 9d31b18b66c6f64f47f0b31ac655c03ca7afa2ecbe465594577cb9000656dd13