An integration package connecting Contextual and LangChain

langchain-contextual

This package contains the LangChain integration with Contextual AI.

Contextual AI provides state-of-the-art RAG components designed specifically for accurate and reliable enterprise AI applications. Founded by the inventors of RAG technology, our specialized components help innovative teams accelerate the development of production-ready AI applications that deliver responses with exceptional accuracy, even when processing millions of pages of documents.

This integration allows you to easily incorporate Contextual AI's Grounded Language Model and Instruction-Following Reranker into your LangChain workflows.

Features

This package provides access to two key components from Contextual AI:

  • Grounded Language Model (GLM): The world's most grounded language model, engineered to minimize hallucinations by prioritizing faithfulness to retrieved knowledge. GLM delivers exceptional factual accuracy with inline attributions, making it ideal for enterprise RAG applications where reliability is critical.

  • Instruction-Following Reranker: The first reranker that follows custom instructions to intelligently prioritize documents based on specific criteria like recency, source, or document type. Our reranker helps resolve conflicting information in enterprise knowledge bases and outperforms competitors on industry benchmarks.

Installation

pip install -U langchain-contextual

Configure credentials by setting the following environment variable:

export CONTEXTUAL_AI_API_KEY="your-api-key"

Set it to your API key for Contextual AI.

Using the Chat Models

The ChatContextual class exposes chat models like the Grounded Language Model (GLM) from Contextual.

from langchain_contextual import ChatContextual

# the client reads CONTEXTUAL_AI_API_KEY from the environment
llm = ChatContextual(
    model="v1",
    max_new_tokens=1024,
    temperature=0,
    top_p=0.9,
)

# only "human" and "ai" message types are accepted
# when passing more than one message, the types must alternate between "human" and "ai"
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

llm.invoke(messages, knowledge=knowledge)
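The role rules in the comments above are enforced by the API, but it can be convenient to pre-validate a message list locally before calling `invoke`. A minimal sketch (a hypothetical helper, not part of langchain-contextual):

```python
def validate_message_roles(messages):
    """Check that messages use only "human"/"ai" roles and alternate between them.

    `messages` is a list of (role, content) tuples, as in the example above.
    Raises ValueError on violation; returns True otherwise.
    """
    roles = [role for role, _ in messages]
    for role in roles:
        if role not in ("human", "ai"):
            raise ValueError(f"unsupported role: {role!r}")
    # adjacent messages must not share a role
    for prev, cur in zip(roles, roles[1:]):
        if prev == cur:
            raise ValueError("message roles must alternate between 'human' and 'ai'")
    return True

validate_message_roles([
    ("human", "What types of cats are there?"),
    ("ai", "There are good cats and best cats."),
    ("human", "Which is which?"),
])
```

Running the check before `llm.invoke(messages, knowledge=knowledge)` turns a malformed conversation into an immediate local error rather than an API-side one.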

Using the Reranker

The ContextualRerank class exposes the reranker model from Contextual.

Example Usage

from langchain_core.documents import Document
from langchain_contextual import ContextualRerank

model = "ctxl-rerank-en-v1-instruct"

# the client reads CONTEXTUAL_AI_API_KEY from the environment
compressor = ContextualRerank(model=model)

query = "What is the current enterprise pricing for the RTX 5090 GPU for bulk orders?"

instruction = "Prioritize internal sales documents over market analysis reports. More recent documents should be weighted higher. Enterprise portal content supersedes distributor communications."

document_contents = [
    "Following detailed cost analysis and market research, we have implemented the following changes: AI training clusters will see a 15% uplift in raw compute performance, enterprise support packages are being restructured, and bulk procurement programs (100+ units) for the RTX 5090 Enterprise series will operate on a $2,899 baseline.",
    "Enterprise pricing for the RTX 5090 GPU bulk orders (100+ units) is currently set at $3,100-$3,300 per unit. This pricing for RTX 5090 enterprise bulk orders has been confirmed across all major distribution channels.",
    "RTX 5090 Enterprise GPU requires 450W TDP and 20% cooling overhead."
]

metadata = [
    {
        "Date": "January 15, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only"
    },
    {
        "Date": "11/30/2023",
        "Source": "TechAnalytics Research Group"
    },
    {
        "Date": "January 25, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only"
    }
]

documents = [
    Document(page_content=content, metadata=metadata[i])
    for i, content in enumerate(document_contents)
]
reranked_documents = compressor.compress_documents(
    query=query,
    instruction=instruction,
    documents=documents,
)
