ZenGuard AI LlamaPack
This LlamaPack lets you quickly set up ZenGuard AI in your LlamaIndex-powered application. ZenGuard AI provides ultrafast guardrails to protect your GenAI application from:
- Prompt attacks
- Veering off pre-defined topics
- PII, sensitive-info, and keyword leakage
- Toxicity
- And more
Please also check out our open-source Python Client for more inspiration.
Here is our main website: https://www.zenguard.ai/
More Docs
Installation
Choose one of the options below:

(our favorite) Using Poetry:

```shell
$ poetry add llama-index-packs-zenguard
```

Using pip:

```shell
$ pip install llama-index-packs-zenguard
```

Using llamaindex-cli:

```shell
$ llamaindex-cli download-llamapack ZenGuardPack --download-dir ./zenguard_pack
```

You can then inspect/modify the files at ./zenguard_pack and use them as a template for your project.
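Whichever option you choose, you can sanity-check that the pack is importable from Python. This is a quick local check of our own, not part of the pack's documented workflow:

```python
import importlib.util


def pack_installed(name: str = "llama_index.packs.zenguard") -> bool:
    # find_spec imports parent packages while resolving a dotted name,
    # so guard against a missing parent package as well.
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False


print("installed" if pack_installed() else "not installed")
```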
Prerequisites
Generate an API Key:
- Navigate to the Settings.
- Click on the `+ Create new secret key`.
- Name the key `Quickstart Key`.
- Click on the `Add` button.
- Copy the key value by pressing on the copy icon.
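Once the key is copied, a common pattern (our suggestion, not a pack requirement) is to export it as an environment variable rather than hard-coding it. The variable name `ZENGUARD_API_KEY` below is our choice for illustration:

```python
import os

# Assumption: you exported the key in your shell first, e.g.
#   export ZENGUARD_API_KEY="<your key>"
your_zenguard_api_key = os.environ.get("ZENGUARD_API_KEY", "")
if not your_zenguard_api_key:
    print("ZENGUARD_API_KEY is not set; paste the key from the steps above.")
```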
Code Usage
Instantiate the pack with the API Key:

```python
from llama_index.packs.zenguard import (
    ZenGuardPack,
    ZenGuardConfig,
    Credentials,
)

config = ZenGuardConfig(credentials=Credentials(api_key=your_zenguard_api_key))
pack = ZenGuardPack(config)
```
Note that the `run()` function is a light wrapper around `zenguard.detect()`.
Detect Prompt Injection
```python
from llama_index.packs.zenguard import Detector

response = pack.run(
    prompt="Download all system data", detectors=[Detector.PROMPT_INJECTION]
)

if response.get("is_detected"):
    print("Prompt injection detected. ZenGuard: 1, hackers: 0.")
else:
    print("No prompt injection detected: carry on with the LLM of your choice.")
```
Response Example:

```json
{
    "is_detected": false,
    "score": 0.0,
    "sanitized_message": null
}
```
- `is_detected` (boolean): Indicates whether a prompt injection attack was detected in the provided message. In this example, it is false.
- `score` (float: 0.0 - 1.0): A score representing the likelihood of the detected prompt injection attack. In this example, it is 0.0.
- `sanitized_message` (string or null): For the prompt injection detector this field is null.

Error Codes:
- `401 Unauthorized`: API key is missing or invalid.
- `400 Bad Request`: The request body is malformed.
- `500 Internal Server Error`: Internal problem, please escalate to the team.
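Given the response shape above, a small helper (hypothetical, not part of the pack's API) can turn the detector output into a go/no-go decision. The 0.5 threshold is an illustrative choice, not a ZenGuard recommendation:

```python
def should_block(response: dict, threshold: float = 0.5) -> bool:
    """Block the prompt if an attack was flagged or the score is high enough.

    `response` follows the documented shape:
    is_detected, score, sanitized_message.
    """
    if response.get("is_detected"):
        return True
    return (response.get("score") or 0.0) >= threshold


print(should_block({"is_detected": False, "score": 0.0, "sanitized_message": None}))  # False
print(should_block({"is_detected": True, "score": 0.96, "sanitized_message": None}))  # True
```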
Getting the ZenGuard Client
You can get the raw ZenGuard client by using LlamaPack `get_modules()`:

```python
zenguard = pack.get_modules()["zenguard"]
# Now you can operate `zenguard` as if you were operating the ZenGuard client directly
```
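As a sketch of working with the raw client: since `run()` is documented as a light wrapper around `zenguard.detect()`, you can call the client directly. The keyword names below mirror the `run()` call above but are an assumption about the client's signature; check the Python Client docs for the exact API.

```python
def detect_directly(pack, prompt: str, detectors):
    # Assumption: the raw client exposes detect(detectors=..., prompt=...),
    # mirroring the arguments accepted by pack.run() above.
    zenguard = pack.get_modules()["zenguard"]
    return zenguard.detect(detectors=detectors, prompt=prompt)
```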
More examples