Protecting GenAI from Prompt Injection
Project description
Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and provides protection for LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input prompts and LLM responses (completions), flagging those it identifies with high confidence as containing malicious content.
The Python SDK lets you integrate your system programmatically with our LLM protection service, so you can verify the safety level of a user-supplied prompt before processing it. Based on that evaluation, your application can then decide how to proceed according to your policy.
Installation
pip install panoptica_genai_protection
Usage Example
Working assumptions:
- You have generated a key-pair for GenAI Protection in the Panoptica settings screen
- The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
- The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable
- We denote the call that generates the LLM response as get_llm_response() (a minimal stand-in is sketched after this list)
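For context, a minimal, illustrative stand-in for these assumptions could look like the following; nothing in it is part of the SDK itself:

import os

# Hypothetical stand-in for your LLM call. Replace it with your actual backend
# (OpenAI, Bedrock, a local model, etc.); it is not part of the GenAI Protection SDK.
def get_llm_response(prompt: str) -> str:
    return f"Echo: {prompt}"

# The client reads its credentials from these environment variables,
# so an optional sanity check up front can fail fast if they are missing.
for var in ("GENAI_PROTECTION_ACCESS_KEY", "GENAI_PROTECTION_SECRET_KEY"):
    if not os.environ.get(var):
        raise RuntimeError(f"Missing environment variable: {var}")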
GenAIProtectionClient provides the check_llm_prompt method to determine the safety level of a given prompt.
Sample Snippet
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult
# ... Other code in your module ...
# initialize the client
genai_protection_client = GenAIProtectionClient()
# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
    chat_request.prompt,
    api_name="chat_service",     # Name of the service running the LLM
    api_endpoint_name="/chat",   # Name of the endpoint serving the LLM interaction
    sequence_id=chat_id,         # UUID of the chat, if you don't have one, provide `None`
    actor="John Doe",            # Name of the "actor" interacting with the LLM service
    actor_type="user",           # Actor type, one of {"user", "ip", "bot"}
)
if inspection_result.result == InspectionResult.safe:
    # Prompt is safe, generate an LLM response
    llm_response = get_llm_response(
        chat_request.prompt
    )

    # Call GenAI Protection on the LLM response (completion)
    inspection_result = genai_protection_client.check_llm_response(
        prompt=chat_request.prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=inspection_result.reqId,
        sequence_id=chat_id,
    )

    if inspection_result.result != InspectionResult.safe:
        # LLM answer is flagged as unsafe, return a predefined error message to the user
        answer_response = "Something went wrong."
    else:
        # Both prompt and response are safe, return the LLM answer to the user
        answer_response = llm_response
else:
    # Prompt is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
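The same flow can be condensed into a small helper. The sketch below reuses only the SDK calls shown above; the guarded_chat name and the error string are illustrative:

def guarded_chat(prompt: str, chat_id=None) -> str:
    """Check the prompt, call the LLM, then check the completion."""
    prompt_check = genai_protection_client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor="John Doe",
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        # Unsafe prompt: it never reaches the LLM
        return "Something went wrong."

    llm_response = get_llm_response(prompt)

    response_check = genai_protection_client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        request_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    if response_check.result != InspectionResult.safe:
        # Unsafe completion: it never reaches the user
        return "Something went wrong."
    return llm_response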
Async Use
You can use the client in an async context in two ways:
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user",
    )
or
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
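Either variant can then be driven from an event loop. Below is a minimal, illustrative runner, assuming the async call returns the same result object as the synchronous one:

import asyncio

async def main() -> None:
    # The prompt text is purely illustrative.
    result = await my_async_call_to_gen_ai_protection("What is the capital of France?")
    if result.result == InspectionResult.safe:
        print("Prompt is safe to forward to the LLM.")
    else:
        print("Prompt was flagged; apply your policy (block, log, alert, etc.).")

if __name__ == "__main__":
    asyncio.run(main())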
Hashes for panoptica_genai_protection-0.1.12.tar.gz
Algorithm | Hash digest
---|---
SHA256 | afc5dd87a2764c4256113ce88ff9df2822d8d3e699c6eaf4e1459739b3999f7d
MD5 | c28a3cbcad0cf8c68958fae11a0e6e35
BLAKE2b-256 | 3319f5ac072ab9ec75a64d8421b48f359bf314977b47f040565154165c97b4f2
Hashes for panoptica_genai_protection-0.1.12-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | c07f17c86c31c8ce38efd2895dd9188465f31f1de5114fd2b45302ad6ffbb487
MD5 | 938f8dbe854d231b9d91cd787fb1b1d4
BLAKE2b-256 | 9ddfcd933ad4f787bfe46365d9cf4a5ba9db2225744db685335f06741ee608f3