An integration package connecting AI21 and LangChain

langchain-ai21

This package contains the LangChain integrations for AI21 models and tools.

Installation and Setup

  • Install the AI21 partner package:
pip install langchain-ai21
  • Get an AI21 API key and set it as an environment variable (AI21_API_KEY)

Chat Models

This package contains the ChatAI21 class, which is the recommended way to interface with AI21 chat models, including Jamba-Instruct and any Jurassic chat models.

To use it, install the requirements and configure your environment:

export AI21_API_KEY=your-api-key

Then initialize:

from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]
chat.invoke(messages)
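
invoke returns an AIMessage; the generated text is available on the message's content attribute.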

For a list of the supported models, see this page.

Streaming in Chat

Streaming is supported by the latest models. To use streaming, set the streaming parameter to True when initializing the model.

from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct", streaming=True)
messages = [HumanMessage(content="Hello from AI21")]

response = chat.invoke(messages)

Or use the stream method directly:

from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]

for chunk in chat.stream(messages):
    print(chunk)
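
Because ChatAI21 implements LangChain's standard Runnable interface, asynchronous streaming via astream should also work. A minimal sketch (this is generic Runnable behavior, not something specific to this package):

import asyncio

from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct")

async def main():
    # astream yields message chunks as they arrive
    async for chunk in chat.astream([HumanMessage(content="Hello from AI21")]):
        print(chunk.content, end="", flush=True)

asyncio.run(main())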

LLMs

You can use AI21's Jurassic generative AI models as LangChain LLMs. For the newer Jamba models, use the ChatAI21 chat model instead, which supports single-turn instruction/question-answering.

from langchain_core.prompts import PromptTemplate
from langchain_ai21 import AI21LLM

llm = AI21LLM(model="j2-ultra")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "Which scientist discovered relativity?"
print(chain.invoke({"question": question}))

Embeddings

You can use AI21's embeddings model as shown here:

Query

from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_query("Hello! This is some query")

Document

from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"])
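
Because AI21Embeddings implements LangChain's standard Embeddings interface, it can also back any LangChain vector store. A minimal sketch using FAISS (an assumption: the faiss-cpu and langchain-community packages are installed; any other vector store works the same way):

from langchain_ai21 import AI21Embeddings
from langchain_community.vectorstores import FAISS

embeddings = AI21Embeddings()

# Index two short texts, then run a similarity search against them
vectorstore = FAISS.from_texts(
    ["Hello! This is document 1", "And this is document 2!"],
    embedding=embeddings,
)
vectorstore.similarity_search("document 1", k=1)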

Task-Specific Models

Contextual Answers

You can use AI21's contextual answers model to parse given text and answer a question based entirely on the provided information.

This means that if the answer to your question is not in the document, the model will say so instead of providing a false answer.

from langchain_ai21 import AI21ContextualAnswers

tsm = AI21ContextualAnswers()

response = tsm.invoke(input={"context": "Lots of information here", "question": "Your question about the context"})

You can also use it with chains, output parsers, and vector DBs (see the vector DB sketch after the next example):

from langchain_ai21 import AI21ContextualAnswers
from langchain_core.output_parsers import StrOutputParser

tsm = AI21ContextualAnswers()
chain = tsm | StrOutputParser()

response = chain.invoke(
    {"context": "Your context", "question": "Your question"},
)
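
For the vector DB case, a hedged sketch is to let a retriever fill the context field. The FAISS index and format_docs helper below are assumptions for illustration (any LangChain retriever would do in place of vectorstore.as_retriever()):

from langchain_ai21 import AI21ContextualAnswers, AI21Embeddings
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Assumption: a small FAISS index built with AI21 embeddings serves as the retriever
retriever = FAISS.from_texts(["Your context"], embedding=AI21Embeddings()).as_retriever()

def format_docs(docs):
    # Join the retrieved documents into a single context string
    return " ".join(doc.page_content for doc in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | AI21ContextualAnswers()
    | StrOutputParser()
)

response = chain.invoke("Your question")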

Text Splitters

Semantic Text Splitter

You can use AI21's semantic text segmentation model to split a text into segments by topic. Text is split at each point where the topic changes.

For a list of examples, see this page.

from langchain_ai21 import AI21SemanticTextSplitter

splitter = AI21SemanticTextSplitter()
response = splitter.split_text("Your text")
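
AI21SemanticTextSplitter subclasses LangChain's TextSplitter, so the inherited helpers apply as well. A minimal sketch producing Document objects (the chunk_size argument, which merges adjacent segments up to roughly that size, is an assumption about the splitter's parameters):

from langchain_ai21 import AI21SemanticTextSplitter

# chunk_size > 0 asks the splitter to merge adjacent semantic segments (assumption)
splitter = AI21SemanticTextSplitter(chunk_size=1000)
docs = splitter.create_documents(texts=["Your text"])
for doc in docs:
    print(doc.page_content)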

Tool calls

Function calling

AI21 models support function calling for custom user functions: the model generates structured data containing the function name and proposed arguments, which your application can use to call external APIs (for example, transportation or financial data services) and feed the results back into subsequent prompts, enriching responses with real-time data and context. Here is an example of how to use function calling with AI21 models in LangChain:

import os
from getpass import getpass
from langchain_core.messages import HumanMessage, ToolMessage, SystemMessage
from langchain_core.tools import tool
from langchain_ai21.chat_models import ChatAI21
from langchain_core.utils.function_calling import convert_to_openai_tool

os.environ["AI21_API_KEY"] = getpass()

@tool
def get_weather(location: str, date: str) -> str:
    """“Provide the weather for the specified location on the given date.”"""
    if location == "New York" and date == "2024-12-05":
        return "25 celsius"
    elif location == "New York" and date == "2024-12-06":
        return "27 celsius"
    elif location == "London" and date == "2024-12-05":
        return "22 celsius"
    return "32 celsius"

llm = ChatAI21(model="jamba-1.5-mini")

llm_with_tools = llm.bind_tools([convert_to_openai_tool(get_weather)])

chat_messages = [SystemMessage(content="You are a helpful assistant. You can use the provided tools "
                                       "to assist with various tasks and provide accurate information")]

human_messages = [
    HumanMessage(content="What is the forecast for the weather in New York on December 5, 2024?"),
    HumanMessage(content="And what about the 2024-12-06?"),
    HumanMessage(content="OK, thank you."),
    HumanMessage(content="What is the expected weather in London on December 5, 2024?")]


for human_message in human_messages:
    print(f"User: {human_message.content}")
    chat_messages.append(human_message)
    response = llm_with_tools.invoke(chat_messages)
    chat_messages.append(response)
    if response.tool_calls:
        tool_call = response.tool_calls[0]
        if tool_call["name"] == "get_weather":
            weather = get_weather.invoke(
                {"location": tool_call["args"]["location"], "date": tool_call["args"]["date"]})
            chat_messages.append(ToolMessage(content=weather, tool_call_id=tool_call["id"]))
            llm_answer = llm_with_tools.invoke(chat_messages)
            print(f"Assistant: {llm_answer.content}")
    else:
        print(f"Assistant: {response.content}")
