
Create controlled and compliant AI systems with PredictionGuard and LangChain

Project description

langchain-predictionguard

This page covers how to use the Prediction Guard ecosystem within LangChain. It is split into two parts: installation and setup, followed by references to the specific Prediction Guard wrappers.

Installation and Setup

  • Install the Prediction Guard LangChain partner package:
pip install langchain-predictionguard
  • Get a Prediction Guard API key (as described here) and set it as an environment variable (PREDICTIONGUARD_API_KEY), as shown in the sketch below.
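
The key can also be set from Python for the current process (a minimal sketch; replace the placeholder value with your own key):

import os

# Make the key available to the Prediction Guard integrations below.
# Alternatively, export PREDICTIONGUARD_API_KEY in your shell instead.
os.environ["PREDICTIONGUARD_API_KEY"] = "<your Prediction Guard API key>"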

Prediction Guard LangChain Integrations

API | Description | Endpoint Docs | Import | Example Usage
Chat | Build chat bots | Chat | from langchain_predictionguard import ChatPredictionGuard | ChatPredictionGuard.ipynb
Completions | Generate text | Completions | from langchain_predictionguard import PredictionGuard | PredictionGuard.ipynb
Text Embedding | Embed strings to vectors | Embeddings | from langchain_predictionguard import PredictionGuardEmbeddings | PredictionGuardEmbeddings.ipynb

Getting Started

Chat Models

Prediction Guard Chat

See a usage example

from langchain_predictionguard import ChatPredictionGuard

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")

chat.invoke("Tell me a joke")

Embedding Models

Prediction Guard Embeddings

See a usage example

from langchain_predictionguard import PredictionGuardEmbeddings

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")

text = "This is an embedding example."
output = embeddings.embed_query(text)
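
Multiple texts can be embedded in a single call through the standard LangChain Embeddings interface (a minimal sketch reusing the embeddings object above):

# Embed several documents at once; each result is a list of floats.
docs = ["This is the first document.", "This is the second document."]
doc_vectors = embeddings.embed_documents(docs)
print(len(doc_vectors), len(doc_vectors[0]))  # number of vectors, vector dimension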

LLMs

Prediction Guard LLM

See a usage example

from langchain_predictionguard import PredictionGuard

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")

llm.invoke("Tell me a joke about bears")

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

langchain_predictionguard-0.3.0.tar.gz (9.0 kB)

Built Distribution

langchain_predictionguard-0.3.0-py3-none-any.whl (13.1 kB)

File details

Details for the file langchain_predictionguard-0.3.0.tar.gz.

File metadata

  • Download URL: langchain_predictionguard-0.3.0.tar.gz
  • Upload date:
  • Size: 9.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.1 Linux/6.11.0-1013-azure

File hashes

Hashes for langchain_predictionguard-0.3.0.tar.gz
  • SHA256: 1d93a5c5fd15032dec7bd0b1a2e2bd47755ef8b8b8b616d5452197390d5afdde
  • MD5: dd411004a74580f72d9c0e93f4642a36
  • BLAKE2b-256: 7fe689cb1630297517c0074ce3b1e550b2d97d4d85d558dfd2d66bb37d7d0763

See more details on using hashes here.
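
To confirm that a downloaded archive matches the published digest, the SHA256 can be recomputed locally (a minimal sketch assuming the sdist has been downloaded to the current directory):

import hashlib

# Recompute the SHA256 of the downloaded sdist and compare it to the digest above.
expected = "1d93a5c5fd15032dec7bd0b1a2e2bd47755ef8b8b8b616d5452197390d5afdde"
with open("langchain_predictionguard-0.3.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("match" if actual == expected else "mismatch")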

File details

Details for the file langchain_predictionguard-0.3.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_predictionguard-0.3.0-py3-none-any.whl
  • SHA256: 88cf14a4a7ce2cf2eb8ed73c8eae344d5b0eda65de6e313f76a7767446f9c108
  • MD5: 51d29a5f2c1ebd1f2ba39297b96b8993
  • BLAKE2b-256: 66e8891791574b0c8d5ee761f1e3b9aa757b721782af9cb3487128ab4a21049a

See more details on using hashes here.
