Protecting GenAI from Prompt Injection
Project description
Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and provides protection for LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input prompts and output completions, flagging content it identifies, with a high degree of certainty, as likely malicious.
The Python SDK lets you integrate your system programmatically with our LLM protection service, so you can verify the safety of a user-requested prompt before actually processing it. Based on this evaluation, your application can then decide how to proceed according to your policy.
Installation
pip install panoptica_genai_protection
Usage Example
Working assumptions:
- You have generated a key-pair for GenAI Protection in the Panoptica settings screen
- The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
- The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable (see the optional startup check sketched after this list)
- We denote the call that generates the LLM response as `get_llm_response()`
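Before constructing the client, you may want to fail fast if the credentials are missing. The sketch below is illustrative only: `build_protected_client` is a hypothetical helper name, and it assumes the SDK reads the two environment variables listed above on its own.

import os

from panoptica_genai_protection.client import GenAIProtectionClient


def build_protected_client() -> GenAIProtectionClient:
    # Illustrative pre-flight check; the SDK itself reads these variables.
    for var in ("GENAI_PROTECTION_ACCESS_KEY", "GENAI_PROTECTION_SECRET_KEY"):
        if not os.environ.get(var):
            raise RuntimeError(
                f"{var} is not set; generate a key pair in the Panoptica settings screen"
            )
    return GenAIProtectionClient()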
GenAIProtectionClient provides the check_llm_prompt method to determine the safety level of a given prompt.
Sample Snippet
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# Initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
    chat_request.prompt,
    api_name="chat_service",    # Name of the service running the LLM
    api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
    sequence_id=chat_id,        # UUID of the chat; if you don't have one, provide `None`
    actor="John Doe",           # Name of the "actor" interacting with the LLM service
    actor_type="user",          # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
    # Prompt is safe, generate an LLM response
    llm_response = get_llm_response(chat_request.prompt)

    # Call GenAI Protection on the LLM response (completion)
    inspection_result = genai_protection_client.check_llm_response(
        prompt=chat_request.prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=inspection_result.reqId,
        sequence_id=chat_id,
    )
    if inspection_result.result != InspectionResult.safe:
        # LLM answer is flagged as unsafe, return a predefined error message to the user
        answer_response = "Something went wrong."
    else:
        # Both prompt and response are safe, return the LLM answer to the user
        answer_response = llm_response
else:
    # Prompt is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
Async use:
You may use the client in an async context in either of two ways:
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user",
    )
or
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
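To drive either coroutine outside an existing event loop, you can use asyncio. A minimal sketch, assuming the first function defined above:

import asyncio

inspection = asyncio.run(my_async_call_to_gen_ai_protection("Hello, world"))
print(inspection.result)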
Project details
Download files
Hashes for panoptica_genai_protection-0.1.13.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 4283cfc51e78ff25a485c2892e590c2eb1a1e98820fadbdce75354a38e6eb946
MD5 | 5290648cde9f1219056b356ad538ddac
BLAKE2b-256 | d0eaa34624f7722e930772698115bb955fef6be98d744b010c4c85f8920ad4c8
Hashes for panoptica_genai_protection-0.1.13-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 2535a7c58d21440b1cb3bc1e40db406a5a82277e4829bd2c46458054aa6a5e62
MD5 | a21f43d27889b88db0d77441028f304e
BLAKE2b-256 | 648989f3a7705204c16d6f2c9a93ddce82833e992a9cbed43011f18e09f8aee0