
Create controlled and compliant AI systems with PredictionGuard and LangChain


langchain-predictionguard

This page covers how to use the Prediction Guard ecosystem within LangChain. It is broken into two parts: installation and setup, followed by references to the specific Prediction Guard wrappers.

Installation and Setup

  • Install the Prediction Guard LangChain partner package:
pip install langchain-predictionguard
  • Get a Prediction Guard API key and set it as an environment variable (PREDICTIONGUARD_API_KEY).
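The environment variable can also be set from Python before constructing any of the wrappers. A minimal sketch (the placeholder value below is not a real key and must be replaced with your own):

```python
import os

# Set the key for the current process if it isn't already in the environment.
# Replace the placeholder with your actual Prediction Guard API key.
os.environ.setdefault("PREDICTIONGUARD_API_KEY", "<your-api-key>")

# The LangChain integrations read this variable automatically when
# predictionguard_api_key is not passed explicitly.
api_key = os.environ["PREDICTIONGUARD_API_KEY"]
```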

Prediction Guard LangChain Integrations

| API | Description | Endpoint Docs | Import | Example Usage |
| --- | --- | --- | --- | --- |
| Chat | Build chat bots | Chat | `from langchain_predictionguard import ChatPredictionGuard` | ChatPredictionGuard.ipynb |
| Completions | Generate text | Completions | `from langchain_predictionguard import PredictionGuard` | PredictionGuard.ipynb |
| Text Embedding | Embed strings to vectors | Embeddings | `from langchain_predictionguard import PredictionGuardEmbeddings` | PredictionGuardEmbeddings.ipynb |

Getting Started

Chat Models

Prediction Guard Chat

See a usage example

from langchain_predictionguard import ChatPredictionGuard

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")

chat.invoke("Tell me a joke")

Embedding Models

Prediction Guard Embeddings

See a usage example

from langchain_predictionguard import PredictionGuardEmbeddings

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")

text = "This is an embedding example."
output = embeddings.embed_query(text)
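A common follow-up is comparing two embeddings with cosine similarity. The helper below is a hedged sketch: `embed_query` returns a plain list of floats, and the short vectors here stand in for real embedding output so the arithmetic is easy to follow.

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms;
    # values closer to 1.0 indicate more semantically similar texts.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for two embed_query results.
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
score = cosine_similarity(v1, v2)
```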

LLMs

Prediction Guard LLM

See a usage example

from langchain_predictionguard import PredictionGuard

Usage

# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")

llm.invoke("Tell me a joke about bears")
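Calls to any hosted model can fail transiently, so it can help to wrap `llm.invoke` in a small retry loop. This is a sketch, not part of the library; `invoke_with_retry` is a hypothetical helper, and `call` stands in for the bound `llm.invoke` method.

```python
import time

def invoke_with_retry(call, prompt, retries=3, delay=1.0):
    # Retry the call on any exception, backing off linearly between attempts;
    # re-raise if the final attempt also fails.
    for attempt in range(retries):
        try:
            return call(prompt)
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))

# Usage with a real model (requires a valid API key):
# result = invoke_with_retry(llm.invoke, "Tell me a joke about bears")
```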
