
LlamaIndex LLMs Integration: AI21 Labs

Installation

First, install the package using pip:

pip install llama-index-llms-ai21

Usage

The examples below show how to use the AI21 class for text completions and chat interactions.

Initializing the AI21 Client

You need to initialize the AI21 client with the appropriate model and API key.

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
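
If no api_key is passed, the underlying AI21 SDK typically falls back to the AI21_API_KEY environment variable. This is an assumption about the SDK's behavior, so verify it against your installed version; a minimal sketch:

import os

from llama_index.llms.ai21 import AI21

# Assumption: with api_key omitted, the ai21 SDK reads AI21_API_KEY
# from the environment; verify against your installed SDK version.
os.environ["AI21_API_KEY"] = "your_api_key"
llm = AI21(model="jamba-1.5-mini")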

Chat Completions

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
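
A chat request can also carry a system prompt and earlier turns; a minimal sketch using the same ChatMessage type (the system prompt text here is illustrative):

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

# A system message steers the model; earlier turns provide context.
messages = [
    ChatMessage(role="system", content="Answer in one short sentence."),
    ChatMessage(role="user", content="What is the meaning of life?"),
]
response = llm.chat(messages)
print(response.message.content)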

Chat Streaming

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]

for chunk in llm.stream_chat(messages):
    # chunk.delta holds only the newly streamed text;
    # chunk.message.content would be the full text so far.
    print(chunk.delta, end="")

Text Completion

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
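
Generation parameters such as max_tokens and temperature can be set on the constructor. These parameter names are an assumption based on common LlamaIndex LLM integrations, so check the AI21 class in your installed version; a sketch:

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"

# Assumed constructor parameters; check the AI21 class in your
# installed llama-index-llms-ai21 for exact names and defaults.
llm = AI21(
    model="jamba-1.5-mini",
    api_key=api_key,
    max_tokens=256,
    temperature=0.2,
)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)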

Stream Text Completion

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

response = llm.stream_complete(prompt="What is the meaning of life?")

for chunk in response:
    # chunk.delta holds only the newly streamed text.
    print(chunk.delta, end="")

Other Models Support

You can also use other model types, such as j2-ultra, j2-mid, and j2-chat.

These models support only the chat and complete methods.

Chat

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="j2-chat", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)

Complete

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="j2-ultra", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)

Tokenizer

The tokenizer type is determined by the model name:

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)
tokenizer = llm.tokenizer

tokens = tokenizer.encode("What is the meaning of life?")
print(tokens)

text = tokenizer.decode(tokens)
print(text)
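
One common use of the tokenizer is counting tokens before sending a request, for example to budget a prompt against the model's context window; a sketch reusing the encode call from the example above:

from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-1.5-mini", api_key=api_key)

prompt = "What is the meaning of life?"
# len() of the encoded ids gives the prompt's token count.
num_tokens = len(llm.tokenizer.encode(prompt))
print(f"{num_tokens} tokens")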

Async Support

You can also use the asynchronous methods:

async chat

import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.achat(messages)
    print(response.message.content)


asyncio.run(main())

async stream_chat

import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-1.5-mini", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.astream_chat(messages)

    async for chunk in response:
        # chunk.delta holds only the newly streamed text.
        print(chunk.delta, end="")


asyncio.run(main())

Tool Calling

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.ai21 import AI21
from llama_index.core.tools import FunctionTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer"""
    return a * b


def subtract(a: int, b: int) -> int:
    """Subtract two integers and return the resulting integer"""
    return a - b


def divide(a: int, b: int) -> float:
    """Divide two integers and return the resulting float"""
    return a / b


def add(a: int, b: int) -> int:
    """Add two integers and return the resulting integer"""
    return a + b


multiply_tool = FunctionTool.from_defaults(fn=multiply)
add_tool = FunctionTool.from_defaults(fn=add)
subtract_tool = FunctionTool.from_defaults(fn=subtract)
divide_tool = FunctionTool.from_defaults(fn=divide)

api_key = "your_api_key"

llm = AI21(model="jamba-1.5-mini", api_key=api_key)

agent = FunctionAgent(
    tools=[multiply_tool, add_tool, subtract_tool, divide_tool],
    llm=llm,
)

# agent.run is asynchronous; top-level await works in a notebook.
# In a plain script, wrap it in an event loop as shown below.
response = await agent.run(
    "My friend Moses had 10 apples. He ate 5 apples in the morning. Then he found a box with 25 apples. "
    "He divided all his apples between his 5 friends. How many apples did each friend get?"
)
print(response)
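
Because agent.run is asynchronous, a plain script needs an event loop; a minimal sketch reusing the agent defined above (the question here is illustrative):

import asyncio


async def main():
    # Reuses the agent constructed in the example above.
    response = await agent.run("What is 121 * 3? Use a tool.")
    print(response)


asyncio.run(main())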
