Privacera AI Governance (PAIG) Shield Plugin Library
The `privacera_shield` library allows you to seamlessly integrate Privacera AI Governance into your LangChain projects. This README provides a step-by-step guide on how to set up and use the library.
Installation
You can install the `privacera_shield` library using pip:

```shell
pip3 install privacera_shield
```
Initialization
Register an account with Privacera AI Governance at https://privacera.ai. Register your AI application and download the Privacera Shield Configuration file.
```python
# Import privacera_shield
import privacera_shield.client

# Set up Privacera Shield
privacera_shield.client.setup(frameworks=["langchain"])
```
Setting User Context
Once you have completed the setup of `privacera_shield`, set the current user in the context so that Privacera Shield can enforce policies for that user.
Using a context manager

```python
import privacera_shield.client
import privacera_shield.exception

privacera_shield.client.setup(frameworks=["langchain"])

# Set the current user_name in the context
try:
    with privacera_shield.client.create_shield_context(username="user"):
        response = llm_chain.run(prompt_text)
except privacera_shield.exception.AccessControlException as e:
    # If access is denied, this exception is thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")
```
Full Example with OpenAI
```python
import os

import privacera_shield.client
import privacera_shield.exception
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

api_key = os.getenv("OPENAI_API_KEY")

# Initialize Privacera Shield
privacera_shield.client.setup(frameworks=["langchain"])

llm = OpenAI(openai_api_key=api_key)
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

# Let's assume the user is "testuser"
user = "testuser"
prompt_text = "Who is the first President of the USA and where did they live?"

try:
    with privacera_shield.client.create_shield_context(username=user):
        response = llm_chain.run(prompt_text)
        print(f"LLM Response: {response}")
except privacera_shield.exception.AccessControlException as e:
    # If access is denied, this exception is thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")