Create controlled and compliant AI systems with PredictionGuard and LangChain
Project description
langchain-predictionguard
This page covers how to use the Prediction Guard ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
Installation and Setup
- Install the Prediction Guard LangChain partner package:
pip install langchain-predictionguard
- Get a Prediction Guard API key (as described here) and set it as an environment variable (`PREDICTIONGUARD_API_KEY`).
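If exporting the variable in your shell is inconvenient, the key can also be set from Python before the integrations are used. A minimal sketch with a placeholder value (the `<your-api-key>` string is illustrative, not a real key):

```python
import os

# Placeholder value for illustration; substitute your real Prediction Guard API key.
# setdefault leaves an already-exported key untouched.
os.environ.setdefault("PREDICTIONGUARD_API_KEY", "<your-api-key>")
```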
Prediction Guard LangChain Integrations
API | Description | Endpoint Docs | Import | Example Usage
---|---|---|---|---
Chat | Build chat bots | Chat | `from langchain_predictionguard import ChatPredictionGuard` | ChatPredictionGuard.ipynb
Completions | Generate text | Completions | `from langchain_predictionguard import PredictionGuard` | PredictionGuard.ipynb
Text Embedding | Embed strings to vectors | Embeddings | `from langchain_predictionguard import PredictionGuardEmbeddings` | PredictionGuardEmbeddings.ipynb
Getting Started
Chat Models
Prediction Guard Chat
See a usage example
from langchain_predictionguard import ChatPredictionGuard
Usage
# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")
chat.invoke("Tell me a joke")
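Like other LangChain chat models, `ChatPredictionGuard.invoke` also accepts a list of `(role, content)` tuples, which is useful for adding a system prompt. A sketch, assuming a valid `PREDICTIONGUARD_API_KEY` is set (the network call is guarded so the snippet degrades gracefully without credentials):

```python
import os

# Chat models accept a list of (role, content) tuples as messages.
messages = [
    ("system", "You are a helpful assistant that answers concisely."),
    ("human", "Tell me a joke about computers."),
]

# Requires the langchain-predictionguard package and a valid
# PREDICTIONGUARD_API_KEY environment variable.
if os.environ.get("PREDICTIONGUARD_API_KEY"):
    from langchain_predictionguard import ChatPredictionGuard

    chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")
    response = chat.invoke(messages)
    print(response.content)  # the reply text lives on `.content`
```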
Embedding Models
Prediction Guard Embeddings
See a usage example
from langchain_predictionguard import PredictionGuardEmbeddings
Usage
# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")
text = "This is an embedding example."
output = embeddings.embed_query(text)
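Beyond `embed_query` for a single string, the standard LangChain embeddings interface also provides `embed_documents` for batches. A sketch, assuming a valid `PREDICTIONGUARD_API_KEY` is set (the API call is guarded so the snippet can be read without credentials):

```python
import os

# Sample documents to embed together; embed_documents returns one vector
# per input string, each a list of floats of equal dimensionality.
docs = ["The first example sentence.", "A second, different sentence."]

if os.environ.get("PREDICTIONGUARD_API_KEY"):
    from langchain_predictionguard import PredictionGuardEmbeddings

    embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")
    vectors = embeddings.embed_documents(docs)
    assert len(vectors) == len(docs)
```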
LLMs
Prediction Guard LLM
See a usage example
from langchain_predictionguard import PredictionGuard
Usage
# If predictionguard_api_key is not passed, default behavior is to use the `PREDICTIONGUARD_API_KEY` environment variable.
llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")
llm.invoke("Tell me a joke about bears")
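The LLM wrapper composes with the rest of LangChain; for example, it can be piped after a `PromptTemplate` using LCEL syntax. A sketch, assuming a valid `PREDICTIONGUARD_API_KEY` is set (the model call is guarded so the snippet runs partially without credentials):

```python
import os

# A prompt with named variables; filling it in is pure string formatting.
template = "Tell me a {adjective} joke about {topic}."
prompt = template.format(adjective="silly", topic="bears")

if os.environ.get("PREDICTIONGUARD_API_KEY"):
    from langchain_core.prompts import PromptTemplate
    from langchain_predictionguard import PredictionGuard

    llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")
    # The LCEL pipe operator chains the prompt template into the model.
    chain = PromptTemplate.from_template(template) | llm
    result = chain.invoke({"adjective": "silly", "topic": "bears"})
```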
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file langchain_predictionguard-0.3.0.tar.gz.
File metadata
- Download URL: langchain_predictionguard-0.3.0.tar.gz
- Upload date:
- Size: 9.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.3 CPython/3.12.1 Linux/6.11.0-1013-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | `1d93a5c5fd15032dec7bd0b1a2e2bd47755ef8b8b8b616d5452197390d5afdde`
MD5 | `dd411004a74580f72d9c0e93f4642a36`
BLAKE2b-256 | `7fe689cb1630297517c0074ce3b1e550b2d97d4d85d558dfd2d66bb37d7d0763`
File details
Details for the file langchain_predictionguard-0.3.0-py3-none-any.whl.
File metadata
- Download URL: langchain_predictionguard-0.3.0-py3-none-any.whl
- Upload date:
- Size: 13.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.1.3 CPython/3.12.1 Linux/6.11.0-1013-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | `88cf14a4a7ce2cf2eb8ed73c8eae344d5b0eda65de6e313f76a7767446f9c108`
MD5 | `51d29a5f2c1ebd1f2ba39297b96b8993`
BLAKE2b-256 | `66e8891791574b0c8d5ee761f1e3b9aa757b721782af9cb3487128ab4a21049a`