Python client library for integrating Trust3 AI governance and security features

Trust3 Client Library

The trust3_client library allows you to seamlessly integrate Trust3 AI governance and security features into your LangChain projects.

This README provides a step-by-step guide on how to set up and use the library.

Installation

You can install the trust3_client library using pip:

pip3 install trust3_client
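
If you need to reproduce the exact release this README was published with, you can pin the version (1.0.0 at the time of writing):

pip3 install "trust3_client==1.0.0"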

Initialization

Start your PAIG-Server and obtain your API key from the AI application configuration.

# Import trust3_client
import trust3_client.client

# Set up Trust3 Shield with your API key
trust3_client.client.setup(frameworks=["langchain"], application_config_api_key="<API_KEY>")
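
If you prefer not to hard-code the key, you can read it from the environment first. A minimal sketch, assuming the key is stored in PAIG_APP_API_KEY (the same variable the client reads automatically; see "Additional ways of configuring Trust3 Shield" below):

import os

import trust3_client.client

# Read the API key from the environment instead of hard-coding it.
api_key = os.getenv("PAIG_APP_API_KEY")
trust3_client.client.setup(frameworks=["langchain"], application_config_api_key=api_key)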

Usage

Once you have completed the setup of trust3_client, you can set the current user in the context for Trust3 Shield to use.

Using context manager

import trust3_client.client
import trust3_client.exception

trust3_client.client.setup(frameworks=["langchain"], application_config_api_key="<API_KEY>")

# Set the current user_name in the context
try:
    with trust3_client.client.create_shield_context(username="user"):
        response = llm_chain.run(prompt_text)
except trust3_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Complete example using OpenAI

import os

import trust3_client.client
import trust3_client.exception

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

api_key = os.getenv("OPENAI_API_KEY")

# Initialize Trust3 Shield with API key
trust3_client.client.setup(frameworks=["langchain"], application_config_api_key="<API_KEY>")

llm = OpenAI(openai_api_key=api_key)
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Let's assume the user is "testuser"
user = "testuser"
prompt_text = "Who is first President of USA and where did they live?"
llm_chain = LLMChain(prompt=prompt, llm=llm)
try:
    with trust3_client.client.create_shield_context(username=user):
        response = llm_chain.run(prompt_text)
        print(f"LLM Response: {response}")
except trust3_client.exception.AccessControlException as e:
    # If access is denied, then this exception will be thrown. You can handle it accordingly.
    print(f"AccessControlException: {e}")

Additional ways of configuring Trust3 Shield

  • Pass the API key as the application_config_api_key parameter to the setup function
  • Set the environment variable PAIG_APP_API_KEY to your API key
  • Place the configuration file in a privacera folder relative to where you run the app (legacy method)
  • Set the environment variable PRIVACERA_SHEILD_CONFIG_FILE to the path of the file (legacy method)
  • Set the environment variable PRIVACERA_SHEILD_CONFIG_DIR to a folder that contains the file (legacy method)
  • Pass the file path as the application_config_file parameter to the setup function (legacy method)
  • Pass the string contents of the file as the application_config parameter to the setup function (legacy method)
  • Pass the JSON file contents, parsed into a dict, as the application_config parameter to the setup function (legacy method; see the sketch after this list)
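
A minimal sketch of the legacy file-based options, assuming the application configuration was exported from the server as privacera/config.json (the path and file name here are assumptions); in practice you would use exactly one of these variants:

import json

import trust3_client.client

# Variant 1: point setup() at the config file directly.
trust3_client.client.setup(frameworks=["langchain"],
                           application_config_file="privacera/config.json")

# Variant 2: pass the raw JSON string ...
with open("privacera/config.json") as f:
    config_str = f.read()
trust3_client.client.setup(frameworks=["langchain"], application_config=config_str)

# Variant 3: ... or the same contents parsed into a dict.
trust3_client.client.setup(frameworks=["langchain"],
                           application_config=json.loads(config_str))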

You can create multiple applications

If your application has multiple AI models to be governed, you can create multiple applications as follows:

app1 = trust3_client.client.setup_app(application_config_api_key="<API_KEY_1>")
app2 = trust3_client.client.setup_app(application_config_api_key="<API_KEY_2>")

You can pass the following parameters to the setup_app function:

  • Pass the API key as the application_config_api_key parameter
  • Pass the file path as the application_config_file parameter (legacy method)
  • Pass the string contents of the file as the application_config parameter (legacy method)
  • Pass the JSON file contents, parsed into a dict, as the application_config parameter (legacy method)

And then you can pass the application object to the context manager as follows:

with trust3_client.client.create_shield_context(application=app1, username="user"):
    response = llm_chain.run(prompt_text)

Note that you still need to invoke the trust3_client.client.setup() method before calling the trust3_client.client.setup_app(...) method.
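
Putting it together, a minimal sketch that routes two chains through separately governed applications. llm_chain_1, llm_chain_2, and prompt_text are assumed to have been built as in the OpenAI example above, and calling setup() without an API key here follows the note above:

import trust3_client.client
import trust3_client.exception

# setup() must be invoked before setup_app(), as noted above.
trust3_client.client.setup(frameworks=["langchain"])

app1 = trust3_client.client.setup_app(application_config_api_key="<API_KEY_1>")
app2 = trust3_client.client.setup_app(application_config_api_key="<API_KEY_2>")

for app, chain in [(app1, llm_chain_1), (app2, llm_chain_2)]:
    try:
        # Each chain runs in the context of its own governed application.
        with trust3_client.client.create_shield_context(application=app, username="testuser"):
            print(f"LLM Response: {chain.run(prompt_text)}")
    except trust3_client.exception.AccessControlException as e:
        print(f"AccessControlException: {e}")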

