Privacera AI Governance (PAIG) Shield Plugin Library

Privacera AI Governance Shield Library

The privacera_shield library allows you to seamlessly integrate Privacera AI Governance into your LangChain projects.

This README provides a step-by-step guide on how to set up and use the library.

Installation

You can install the privacera_shield library using pip:

pip3 install privacera_shield

Initialization

Register an account with Privacera AI Governance at https://privacera.ai, then register your AI application and download its Privacera Shield configuration file.

# Import privacera_shield
import privacera_shield.client

# Setup Privacera Shield
privacera_shield.client.setup(frameworks=["langchain"])
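
Privacera Shield needs to locate the application configuration file you downloaded when setup() runs. Below is a minimal sketch of one way to supply it, assuming the path can be provided through an environment variable; the variable name PRIVACERA_SHIELD_CONF_FILE and the file path are illustrative assumptions, so check the PAIG documentation for the exact mechanism your version supports.

import os

# Assumption: point the plugin at the downloaded application configuration
# file before calling setup(). The variable name and path below are
# illustrative placeholders.
os.environ["PRIVACERA_SHIELD_CONF_FILE"] = "/path/to/privacera-shield-app-config.json"

import privacera_shield.client

# Initialize Privacera Shield after the configuration location is known
privacera_shield.client.setup(frameworks=["langchain"])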

Setting User Context

Once you have completed the setup of privacera_shield, you can set the user in the context for Privacera Shield to use.

Using a context manager

import privacera_shield.client
import privacera_shield.exception

privacera_shield.client.setup(frameworks=["langchain"])

# Set the current user_name in the context
try:
    with privacera_shield.client.create_shield_context(username="user"):
        # llm_chain and prompt_text are your LangChain chain and prompt (see the full example below)
        response = llm_chain.run(prompt_text)
except privacera_shield.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Full Example with OpenAI

import os

import privacera_shield.client
import privacera_shield.exception

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

api_key = os.getenv("OPENAI_API_KEY")  # Read the OpenAI API key from the environment

# Initialize Privacera Shield
privacera_shield.client.setup(frameworks=["langchain"])

llm = OpenAI(openai_api_key=api_key)
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Let's assume the user is "testuser"
user = "testuser"
prompt_text = "Who is first President of USA and where did they live?"
llm_chain = LLMChain(prompt=prompt, llm=llm)
try:
    with privacera_shield.client.create_shield_context(username=user):
        response = llm_chain.run(prompt_text)
        print(f"LLM Response: {response}")
except privacera_shield.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
