LlamaIndex Llms Integration: Bedrock Converse

Installation

%pip install llama-index-llms-bedrock-converse
!pip install llama-index

Usage

from llama_index.llms.bedrock_converse import BedrockConverse

# Set your AWS profile name
profile_name = "Your aws profile name"

# Simple completion call
resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).complete("Paul Graham is ")
print(resp)

Call chat with a list of messages

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
).chat(messages)
print(resp)

Streaming

# Using stream_complete endpoint
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Using stream_chat endpoint
from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

Configure Model

Generation parameters such as temperature and max_tokens can be set when constructing the model:

from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
    temperature=0.5,
    max_tokens=512,
)
resp = llm.complete("Paul Graham is ")
print(resp)

Connect to Bedrock with Access Keys

from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)

Use an Application Inference Profile

AWS Bedrock supports application inference profiles, which act as a provisioned, account-specific proxy in front of Bedrock models.

Since these profile ARNs are account-specific, they must be handled specially in BedrockConverse.

When an application inference profile is created as an AWS resource, it references an existing Bedrock foundation model or a cross-region inference profile. The referenced model must be provided to the BedrockConverse initializer as the model argument, and the ARN of the application inference profile must be provided as the application_inference_profile_arn argument.

Important: BedrockConverse does not validate that the model argument actually matches the model referenced by the application inference profile. The caller is responsible for keeping them consistent; behavior when they do not match is undefined.

# Assumes the existence of a provisioned application inference profile
# that references a foundation model or cross-region inference profile.

from llama_index.llms.bedrock_converse import BedrockConverse


# Instantiate the BedrockConverse model
# with the model and application inference profile
# Make sure the model is the one that the
# application inference profile refers to in AWS
llm = BedrockConverse(
    model="us.anthropic.claude-3-5-sonnet-20240620-v1:0",  # this is the referenced model/profile
    application_inference_profile_arn="arn:aws:bedrock:us-east-1:012345678901:application-inference-profile/fake-profile-name",
)
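
Once instantiated, the model is used exactly like any other BedrockConverse instance; requests are then issued against the profile's ARN:

# Completion calls work the same as with a plain foundation model
resp = llm.complete("Paul Graham is ")
print(resp)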

Function Calling

Claude, Command, and Mistral Large models support native function calling through AWS Bedrock Converse. Integration with LlamaIndex tools is seamless through the predict_and_call function on the LLM.

from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.core.tools import FunctionTool


# Define some functions
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result"""
    return a * b


def mystery(a: int, b: int) -> int:
    """Mystery function on two integers."""
    return a * b + a + b


# Create tools from functions
mystery_tool = FunctionTool.from_defaults(fn=mystery)
multiply_tool = FunctionTool.from_defaults(fn=multiply)

# Instantiate the BedrockConverse model
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    profile_name=profile_name,
)

# Use function tools with the LLM
response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg="What happens if I run the mystery function on 5 and 7",
)
print(str(response))

response = llm.predict_and_call(
    [mystery_tool, multiply_tool],
    user_msg=(
        """What happens if I run the mystery function on the following pairs of numbers?
        Generate a separate result for each row:
        - 1 and 2
        - 8 and 4
        - 100 and 20

        NOTE: you need to run the mystery function for all of the pairs above at the same time"""
    ),
    allow_parallel_tool_calls=True,
)
print(str(response))

for s in response.sources:
    print(f"Name: {s.tool_name}, Input: {s.raw_input}, Output: {str(s)}")

Async usage

from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)

# Use async complete
resp = await llm.acomplete("Paul Graham is ")
print(resp)
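
The streaming endpoints have async counterparts as well; a short sketch using astream_complete from the same LLM interface:

# Use async streaming completion
resp = await llm.astream_complete("Paul Graham is ")
async for r in resp:
    print(r.delta, end="")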

Prompt Caching

You can cache both system and regular messages by placing cache points strategically:

from llama_index.core.llms import ChatMessage
from llama_index.core.base.llms.types import (
    TextBlock,
    CacheControl,
    CachePoint,
    MessageRole,
)

# Cache expensive context but keep dynamic instructions uncached
cached_context = (
    """[Large context about company policies, knowledge base, etc...]"""
)
dynamic_instructions = (
    "Today's date is 2024-01-15. Focus on recent developments."
)
document_text = "[Long document]"
messages = [
    ChatMessage(
        role=MessageRole.SYSTEM,
        blocks=[
            TextBlock(text=cached_context),
            CachePoint(cache_control=CacheControl(type="default")),
            TextBlock(text=dynamic_instructions),
        ],
    ),
    ChatMessage(
        role=MessageRole.USER,
        blocks=[
            TextBlock(text=document_text),
            CachePoint(cache_control=CacheControl(type="default")),
            TextBlock(text="What's our current policy on remote work?"),
        ],
    ),
]

# Reuses the BedrockConverse instance (llm) from the examples above
response = llm.chat(messages)
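
To confirm that caching is taking effect, you can inspect token accounting on the raw response. A minimal sketch, assuming the raw Bedrock Converse payload (whose usage block reports cacheReadInputTokens and cacheWriteInputTokens) is available as a dict on response.raw:

# The Converse API reports cache activity in its usage block;
# this assumes response.raw holds that payload as a dict
usage = response.raw.get("usage", {})
print("Cache read tokens:", usage.get("cacheReadInputTokens"))
print("Cache write tokens:", usage.get("cacheWriteInputTokens"))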

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock_converse/
