Protecting GenAI from Prompt Injection
Panoptica GenAI Protection SDK
A simple Python client SDK for integrating with Panoptica GenAI Protection.
GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and protects LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input prompts and output completions, flagging, with a high degree of certainty, content it identifies as likely malicious.
The Python SDK lets you programmatically integrate your system with our LLM protection software, so you can verify the safety level of a user-requested prompt before actually processing it. Following this evaluation, your application can determine the appropriate next steps based on your policy.
Installation
pip install panoptica_genai_protection
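As an optional sanity check after installation, importing the client class used throughout the examples below should succeed:
# Optional: verify the installation by importing the client class
from panoptica_genai_protection.client import GenAIProtectionClient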
Usage Example
Working assumptions:
- You have generated a key-pair for GenAI Protection in the Panoptica settings screen
- The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
- The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable (both shown in the snippet after this list)
- We denote the call that generates the LLM response as get_llm_response()
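If the two keys are not already exported in your shell, one option is to set the environment variables in-process before constructing the client. The placeholder values below are illustrative only; in production, export real secrets in your deployment environment rather than hard-coding them in source:
import os

# Illustrative placeholders only; the variable names match the assumptions above
os.environ["GENAI_PROTECTION_ACCESS_KEY"] = "<access key from the Panoptica settings screen>"
os.environ["GENAI_PROTECTION_SECRET_KEY"] = "<secret key from the Panoptica settings screen>"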
GenAIProtectionClient provides the check_llm_prompt method to determine the safety level of a given prompt, and the check_llm_response method to inspect the resulting LLM completion.
Sample Snippet
from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# Initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
    chat_request.prompt,
    api_name="chat_service",    # Name of the service running the LLM
    api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
    sequence_id=chat_id,        # UUID of the chat; if you don't have one, provide `None`
    actor="John Doe",           # Name of the "actor" interacting with the LLM service
    actor_type="user",          # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
    # Prompt is safe, generate an LLM response
    llm_response = get_llm_response(chat_request.prompt)

    # Call GenAI Protection on the LLM response (completion)
    inspection_result = genai_protection_client.check_llm_response(
        prompt=chat_request.prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor="John Doe",
        actor_type="user",
        req_id=inspection_result.reqId,
        sequence_id=chat_id,
    )
    if inspection_result.result != InspectionResult.safe:
        # LLM answer is flagged as unsafe, return a predefined error message to the user
        answer_response = "Something went wrong."
    else:
        # Both prompt and response are safe, return the LLM answer
        answer_response = llm_response
else:
    # Prompt is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
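For reference, the same flow can be wrapped in a single helper. This is a minimal sketch that reuses only the calls shown above; answer_user, FALLBACK_MESSAGE, and the service/endpoint names are illustrative, and get_llm_response() remains the placeholder from the working assumptions:
from typing import Optional

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

FALLBACK_MESSAGE = "Something went wrong."  # Returned whenever a check flags content as unsafe


def answer_user(prompt: str, chat_id: Optional[str], actor: str) -> str:
    """Check the prompt, generate a completion, check the completion, and return a safe answer."""
    client = GenAIProtectionClient()

    # 1. Inspect the user's prompt before it reaches the LLM
    prompt_check = client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor=actor,
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        return FALLBACK_MESSAGE

    # 2. Generate the completion (placeholder, as in the working assumptions)
    llm_response = get_llm_response(prompt)

    # 3. Inspect the completion before returning it to the user
    response_check = client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor=actor,
        actor_type="user",
        req_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    if response_check.result != InspectionResult.safe:
        return FALLBACK_MESSAGE

    return llm_response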
Async Use
You may use the client in an async context in two ways:
async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user",
    )
or
async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user",
        )
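Either coroutine can then be driven with asyncio; the prompt text below is just an example input:
import asyncio

# Run one of the async helpers defined above and inspect the verdict
inspection_result = asyncio.run(my_async_call_to_gen_ai_protection("What is the weather today?"))
print(inspection_result.result)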
Hashes for panoptica_genai_protection-0.1.6.tar.gz
Algorithm | Hash digest
---|---
SHA256 | 422ba1514cee00a40f55cf0b4002e47e61cc36c9acb7e4a92154d6378bb656f8
MD5 | 760d0110f612a288aa9895f510ca54e0
BLAKE2b-256 | 64f1dcb77dd86036d05b24306cc12d408540c78f996ef90cde252de77cd809c1
Hashes for panoptica_genai_protection-0.1.6-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | fa1c07ba635b6d0f54ff2cc4d0febd05bac0caaaa9c3ae72bc267c32bfb0ff5c
MD5 | 40fdaefb9717c1438adc65a7dfe359e1
BLAKE2b-256 | 3e3a15a9eededa88f12c4e4504ef8c7d1cbf5389a9c371c821c3e08a9173119d