# LlamaIndex LLMs Integration: AI21 Labs

## Installation
First, install the package using pip:

```shell
pip install llama-index-llms-ai21
```
## Usage

Below are basic examples of how to use the `AI21` class to generate text completions and handle chat interactions.

### Initializing the AI21 Client

Initialize the AI21 client with the appropriate model and API key.
```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)
```
### Chat Completions

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
### Chat Streaming

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
for chunk in llm.stream_chat(messages):
    print(chunk.message.content)
```
### Text Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
### Stream Text Completion

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)

response = llm.stream_complete(prompt="What is the meaning of life?")
for chunk in response:
    print(chunk.text)
```
## Other Models Support

You can also use other model types, such as `j2-ultra` and `j2-mid`. These models support the `chat` and `complete` methods only.
### Chat

```python
from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage

api_key = "your_api_key"
llm = AI21(model="j2-chat", api_key=api_key)

messages = [ChatMessage(role="user", content="What is the meaning of life?")]
response = llm.chat(messages)
print(response.message.content)
```
### Complete

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="j2-ultra", api_key=api_key)

response = llm.complete(prompt="What is the meaning of life?")
print(response.text)
```
## Tokenizer

The type of the tokenizer is determined by the name of the model.

```python
from llama_index.llms.ai21 import AI21

api_key = "your_api_key"
llm = AI21(model="jamba-instruct", api_key=api_key)

tokenizer = llm.tokenizer

tokens = tokenizer.encode("What is the meaning of life?")
print(tokens)

text = tokenizer.decode(tokens)
print(text)
```
## Async Support

You can also use the async functionality:

### Async Chat

```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-instruct", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.achat(messages)
    print(response.message.content)


asyncio.run(main())
```
### Async Stream Chat

```python
import asyncio

from llama_index.llms.ai21 import AI21
from llama_index.core.base.llms.types import ChatMessage


async def main():
    api_key = "your_api_key"
    llm = AI21(model="jamba-instruct", api_key=api_key)

    messages = [
        ChatMessage(role="user", content="What is the meaning of life?")
    ]
    response = await llm.astream_chat(messages)
    async for chunk in response:
        print(chunk.message.content)


asyncio.run(main())
```