
llama-index-llms-bedrock 0.3.1

Project description

LlamaIndex Llms Integration: Bedrock

Installation

%pip install llama-index-llms-bedrock
!pip install llama-index

Basic Usage

from llama_index.llms.bedrock import Bedrock

# Set your AWS profile name
profile_name = "Your aws profile name"

# Simple completion call
resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).complete("Paul Graham is ")
print(resp)

# Expected output:
# Paul Graham is a computer scientist and entrepreneur, best known for co-founding
# the Silicon Valley startup incubator Y Combinator. He is also a prominent writer
# and speaker on technology and business topics...

Call chat with a list of messages

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]

resp = Bedrock(
    model="amazon.titan-text-express-v1", profile_name=profile_name
).chat(messages)
print(resp)

# Expected output:
# assistant: Alright, matey! Here's a story for you: Once upon a time, there was a pirate
# named Captain Jack Sparrow who sailed the seas in search of his next adventure...

Streaming

Using stream_complete endpoint

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Paul Graham is a computer programmer, entrepreneur, investor, and writer, best known
# for co-founding the internet firm Y Combinator...

Streaming chat

from llama_index.core.llms import ChatMessage
from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="Tell me a story"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

# Expected Output (Stream):
# Once upon a time, there was a pirate with a colorful personality who sailed the
# high seas in search of adventure...

Configure Model

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is a computer scientist, entrepreneur, investor, and writer. He co-founded
# Viaweb, the first commercial web browser...
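
Beyond selecting the model, generation settings can also be passed to the constructor. A minimal sketch, assuming the installed Bedrock class accepts the usual LlamaIndex LLM keyword arguments temperature and max_tokens along with context_size (the values below are illustrative, not recommendations):

from llama_index.llms.bedrock import Bedrock

# Illustrative settings; verify the parameter names against your installed version
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,
    temperature=0.1,  # lower temperature -> more deterministic completions
    max_tokens=256,  # cap on tokens generated per request
    context_size=4096,  # assumed prompt window for this model
)
resp = llm.complete("Paul Graham is ")
print(resp)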

Connect to Bedrock with Access Keys

from llama_index.llms.bedrock import Bedrock

llm = Bedrock(
    model="amazon.titan-text-express-v1",
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, e.g. us-east-1",
)

resp = llm.complete("Paul Graham is ")
print(resp)

# Expected Output:
# Paul Graham is an American computer scientist, entrepreneur, investor, and author,
# best known for co-founding Viaweb, the first commercial web browser...

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/bedrock/



Download files

Download the file for your platform.

Source Distribution

llama_index_llms_bedrock-0.3.1.tar.gz (9.2 kB, Source)

Built Distribution

llama_index_llms_bedrock-0.3.1-py3-none-any.whl (9.7 kB, Python 3)

File details

Details for the file llama_index_llms_bedrock-0.3.1.tar.gz.

File metadata

  • Download URL: llama_index_llms_bedrock-0.3.1.tar.gz
  • Upload date:
  • Size: 9.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.10.12 Linux/6.5.0-1025-azure

File hashes

Hashes for llama_index_llms_bedrock-0.3.1.tar.gz:

  • SHA256: 3f02da765305bf19272ebf0a66e95e4ea5e516782f8ca197c16eb2d3953b3a79
  • MD5: 5b854ec044a63d59a1c4988f9498c904
  • BLAKE2b-256: ac10b658fc6ea6179cf799f12bdd6ae72420110aacd86172ae8532e13b10df8f
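
To check a downloaded archive against the published SHA256 digest, a small script along these lines works; it assumes the tarball has already been downloaded into the current directory:

import hashlib

# SHA256 digest published above for the source distribution
expected = "3f02da765305bf19272ebf0a66e95e4ea5e516782f8ca197c16eb2d3953b3a79"

with open("llama_index_llms_bedrock-0.3.1.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("OK" if actual == expected else "hash mismatch")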


File details

Details for the file llama_index_llms_bedrock-0.3.1-py3-none-any.whl.

File hashes

Hashes for llama_index_llms_bedrock-0.3.1-py3-none-any.whl:

  • SHA256: 613586b80594d47cef5633bea82fade84ead3ec59a92cc3d8a1ddabb28dd4349
  • MD5: 1cc44ff03fbf5924592f57362ccf8773
  • BLAKE2b-256: 1e978794f0f523e1b2b6a88770ea01f4701e7398fd8a8b94ac0cbe0e3f30dc45

