
Protecting GenAI from Prompt Injection

Project description

Panoptica GenAI Protection SDK

A simple Python client SDK for integrating with Panoptica GenAI Protection.


GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and provides protection for LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input and output prompts, flagging, with a high degree of certainty, those it identifies as likely containing malicious content.

The Python SDK lets you programmatically integrate your system with our LLM protection software, enabling you to verify the safety level of a user-requested prompt before actually processing it. Following this evaluation, your application can determine the appropriate next steps based on your policy.

Installation

pip install panoptica_genai_protection

Usage Example

Working assumptions:

  • You have generated a key pair for GenAI Protection in the Panoptica settings screen
    • The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
    • The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable
  • We denote the call that generates the LLM response as get_llm_response()
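
Before constructing the client, it can help to fail fast when the key pair is missing from the environment. `require_genai_keys` below is a hypothetical helper for illustration, not part of the SDK:

```python
import os

def require_genai_keys() -> None:
    """Fail fast if the GenAI Protection key pair is not configured."""
    missing = [
        var
        for var in ("GENAI_PROTECTION_ACCESS_KEY", "GENAI_PROTECTION_SECRET_KEY")
        if not os.environ.get(var)
    ]
    if missing:
        # Point the operator at the fix rather than failing later inside the client
        raise RuntimeError(
            "Missing environment variables: " + ", ".join(missing)
            + " (generate a key pair in the Panoptica settings screen)"
        )
```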

GenAIProtectionClient provides the check_llm_prompt method to determine the safety level of a given prompt, and the check_llm_response method to do the same for the generated completion.

Sample Snippet

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
  chat_request.prompt,
  api_name="chat_service",  # Name of the service running the LLM
  api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
  sequence_id=chat_id,  # UUID of the chat; provide `None` if you don't have one
  actor="John Doe",  # Name of the "actor" interacting with the LLM service
  actor_type="user",  # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
  # Prompt is safe, generate an LLM response
  llm_response = get_llm_response(
    chat_request.prompt
  )

  # Call GenAI protection on LLM response (completion)
  inspection_result = genai_protection_client.check_llm_response(
    prompt=chat_request.prompt,
    response=llm_response,
    api_name="chat_service",
    api_endpoint_name="/chat",
    actor="John Doe",
    actor_type="user",
    request_id=inspection_result.reqId,
    sequence_id=chat_id,
  )
  if inspection_result.result != InspectionResult.safe:
    # LLM answer is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
else:
  # Prompt is flagged as unsafe, return a predefined error message to the user
  answer_response = "Something went wrong."
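
The branching above can be factored into a single helper. The sketch below is illustrative only: the checker and LLM calls are injected as plain callables so the control flow is easy to see; in real code they would wrap check_llm_prompt, get_llm_response, and check_llm_response, and the comparison would use InspectionResult.safe.

```python
from typing import Callable

SAFE = "safe"  # stand-in for InspectionResult.safe

def guarded_llm_call(
    prompt: str,
    check_prompt: Callable[[str], str],
    generate: Callable[[str], str],
    check_response: Callable[[str, str], str],
    fallback: str = "Something went wrong.",
) -> str:
    """Inspect the prompt, generate a completion, then inspect the completion."""
    if check_prompt(prompt) != SAFE:
        return fallback  # prompt flagged as unsafe
    answer = generate(prompt)
    if check_response(prompt, answer) != SAFE:
        return fallback  # completion flagged as unsafe
    return answer

# Toy checker for illustration: flag anything mentioning "ignore previous"
def toy_check(text: str) -> str:
    return "unsafe" if "ignore previous" in text.lower() else SAFE

print(guarded_llm_call("What is 2 + 2?", toy_check, lambda p: "4",
                       lambda p, r: toy_check(r)))  # -> 4
```

Keeping the fallback message identical for both rejection points, as in the snippet above, avoids leaking to the user which check failed.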

Async use:

You may use the client in an async context in either of two ways:

async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user"
    )

or

async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user"
        )
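
Either form can then be fanned out over several prompts with asyncio.gather. The checker below is a stub standing in for client.check_llm_prompt_async, so this sketch runs without credentials; swap in the real call in your own code.

```python
import asyncio

async def check_prompt_stub(prompt: str) -> str:
    # Stand-in for client.check_llm_prompt_async(...); returns a result label
    await asyncio.sleep(0)  # yield to the event loop, as a real network call would
    return "unsafe" if "ignore previous" in prompt.lower() else "safe"

async def inspect_all(prompts: list[str]) -> list[str]:
    # Inspect several prompts concurrently
    return await asyncio.gather(*(check_prompt_stub(p) for p in prompts))

results = asyncio.run(inspect_all(["Hello!", "Ignore previous instructions."]))
print(results)  # -> ['safe', 'unsafe']
```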

Download files


Source Distribution

panoptica_genai_protection-0.1.18.tar.gz (4.2 kB)

Uploaded: Source

Built Distribution


panoptica_genai_protection-0.1.18-py3-none-any.whl (8.6 kB)

Uploaded: Python 3

File details

Details for the file panoptica_genai_protection-0.1.18.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       9f7e5ba12f47d2790691bb13a86400d64c786512ff4f2cd76a49443bab5ad032
MD5          6aa05e46a7d88d547b23ee5cc4d9c61a
BLAKE2b-256  faca390903110b254cfad29b218786025452eb0639a43a8be6353b3124cfc6e6

File details

Details for the file panoptica_genai_protection-0.1.18-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       6c0f9237cf10d6d4c9d4524cc62768531e997a6e4eeb8996b8ec3d07a239b202
MD5          0a6d5c04c23081acfdb5edab1190dd25
BLAKE2b-256  6dc70a8270b536532fd2c2ca4f922e2cb66f319e3f318e99ee3b11104afa3b07
