Protecting GenAI from Prompt Injection
Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and protects LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input prompts and output completions, flagging, with a high degree of certainty, those it identifies as likely to contain malicious content.
The Python SDK lets you programmatically integrate your system with our LLM protection software, so you can verify the safety level of a user-requested prompt before actually processing it. Based on this evaluation, your application can then decide on the appropriate next steps according to your policy.
Installation
pip install panoptica_genai_protection
Usage Example
Working assumptions:
- You have generated a key-pair for GenAI Protection in the Panoptica settings screen
- The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
- The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable
- We denote the call that generates the LLM response as get_llm_response(); a placeholder sketch is given right after this list
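The following is a minimal, hypothetical setup sketch that satisfies these assumptions; the get_llm_response() body and the assert checks are illustrative placeholders, not part of the SDK.

import os

# Placeholder for your own LLM call; swap in your real client.
# The function name matches the assumption above; the body is illustrative only.
def get_llm_response(prompt: str) -> str:
    return f"(LLM answer to: {prompt})"

# The client is assumed to pick up its credentials from these variables,
# so make sure they are set before constructing it.
assert os.environ.get("GENAI_PROTECTION_ACCESS_KEY"), "access key not set"
assert os.environ.get("GENAI_PROTECTION_SECRET_KEY"), "secret key not set"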
GenAIProtectionClient provides the check_llm_prompt and check_llm_response methods to determine the safety level of a given prompt and of the LLM's completion, respectively.
Sample Snippet
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.models import Result as InspectionResult

# ... Other code in your module ...

# Initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
    chat_request.prompt,
    api_name="chat_service",    # Name of the service running the LLM
    api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
    sequence_id=chat_id,        # UUID of the chat; if you don't have one, provide `None`
    actor="John Doe",           # Name of the "actor" interacting with the LLM service
    actor_type="user",          # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
    # Prompt is safe, generate an LLM response
    llm_response = get_llm_response(
        chat_request.prompt
    )

    # Call GenAI Protection on the LLM response (completion)
    inspection_result = genai_protection_client.check_llm_response(
        prompt=chat_request.prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        req_id=inspection_result.reqId,
        sequence_id=chat_id,
    )
    if inspection_result.result != InspectionResult.safe:
        # LLM answer is flagged as unsafe, return a predefined error message to the user
        answer_response = "Something went wrong."
    else:
        # Both the prompt and the answer are safe, return the LLM answer
        answer_response = llm_response
else:
    # Prompt is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
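The same flow can be collected into a small helper. This is a sketch under the assumptions above (chat_id, get_llm_response(), and the fallback message come from your application and are arbitrary); it is not part of the SDK itself.

def answer_chat(prompt: str, chat_id) -> str:
    """Inspect the prompt, call the LLM, inspect the completion, and return a safe answer."""
    client = GenAIProtectionClient()

    # Inspect the incoming prompt first
    prompt_check = client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor="John Doe",
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        return "Something went wrong."

    # Prompt is safe: generate the completion and inspect it as well
    llm_response = get_llm_response(prompt)
    response_check = client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        req_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    if response_check.result != InspectionResult.safe:
        return "Something went wrong."
    return llm_response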
Async use:
You can use the client in an async context in either of two ways:
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user",
    )
or
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
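Either coroutine can then be driven from a synchronous entry point, for example with asyncio.run (the prompt text here is arbitrary):

import asyncio

# Run one of the async helpers above and inspect the verdict
result = asyncio.run(my_async_call_to_gen_ai_protection("Hello there"))
print(result.result)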
Hashes for panoptica_genai_protection-0.1.0.tar.gz (Source Distribution)
Algorithm | Hash digest
---|---
SHA256 | 1e0fc93afabc542e41efcace90aaf2d6acd17d2163f1292ca283f637e5e6eb28
MD5 | da66ea167d54a4a4476908ee849623af
BLAKE2b-256 | 79e55cfcd92cdf68e50170aad0479fd9a86c3f1bffd725d5d27f7a728a88b1e8

Hashes for panoptica_genai_protection-0.1.0-py3-none-any.whl (Built Distribution)
Algorithm | Hash digest
---|---
SHA256 | 6ecd82d19143cd6b4a01c4a7a0d12ce0f3b54691d1f0acf84ace6281a3e05380
MD5 | 64c9b2f3ccf5acb3d3b620f0b5f52569
BLAKE2b-256 | a71f32b66a7c687a422d570ce45fd4b48c991f1a73950659119039a50ec2c395